drilbo commented on Mushroom learns to crawl after being given robot body (2024)   the-independent.com/tech/... · Posted by u/Anon84
theflyestpilot · a month ago
This combo reminds me of a character in a recent anime called Scavengers Reign.

https://www.wikiwand.com/en/articles/Scavengers_Reign

This limited series blew my mind. Total masterpiece.

In favor of integrating fungus with robotics (I think).

drilbo · a month ago
erm acktually not an anime
drilbo commented on Claude 4   anthropic.com/news/claude... · Posted by u/meetpateltech
DonHopkins · 3 months ago
Yeah, it seems pretty up-to-date with Elon's latest White Genocide and Holocaust Denial conspiracy theories, but it's so heavy-handed about bringing them up out of the blue and pushing them in the middle of discussions about Zod 4, Svelte 5, and Tailwind 4 that I think those topics are coming from its prompts, not its training.
drilbo · 3 months ago
While this is obviously a very damning example, tbf it does seem to be an extreme outlier.
drilbo commented on France Endorses UN Open Source Principles   social.numerique.gouv.fr/... · Posted by u/bzg
Zambyte · 3 months ago
I'm a little confused by the context of this since I don't speak French, but it seems like you're unfamiliar with the OpenELM family of models.

https://huggingface.co/apple/OpenELM

drilbo · 3 months ago
that looks like a dinosaur now
drilbo commented on LTXVideo 13B AI video generation   ltxv.video/... · Posted by u/zoudong376
ericrallen · 4 months ago
Seems a bit unfair (or maybe just ill-informed?) to lump this in with the confusing mess that is model naming at OpenAI.

The parameter count is much more useful and concrete information than anything OpenAI or their competitors have put into the name of their models.

The parameter count gives you a heuristic for estimating if you can run this model on your own hardware, and how capable you might expect it to be compared to the broader spectrum of smaller models.

It also allows you to easily distinguish between different sizes of model trained in the same way, but with more parameters. It’s likely there is a higher parameter count model in the works and this makes it easy to distinguish between the two.

drilbo · 4 months ago
>It’s likely there is a higher parameter count model in the works and this makes it easy to distinguish between the two.

In this case it looks like this is the higher parameter count version; the 2B was released previously. (Not that it excludes them from making an even larger one in the future, although that seems atypical of video/image/audio models.)

re: GP: I sincerely wish 'Open'AI were this forthcoming with things like param count. If they have a 'b' in their naming, it's only to distinguish it from the previous 'a' version, and don't ask me what an 'o' is supposed to mean.
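For anyone wanting to turn the parameter-count heuristic above into concrete numbers, here is a minimal back-of-envelope sketch (my own illustration, not from the thread); it counts weight memory only and ignores KV cache, activations, and runtime overhead:

```python
# Rough rule of thumb: weight memory ~= parameter count * bytes per parameter.
# The bytes-per-parameter figures below are approximate, for common weight formats.
BYTES_PER_PARAM = {"fp16": 2.0, "q8_0": 1.0, "q4_0": 0.5}

def approx_weight_gib(params_billions: float, fmt: str = "q4_0") -> float:
    """Approximate size of the model weights in GiB for a given format."""
    return params_billions * 1e9 * BYTES_PER_PARAM[fmt] / 2**30

if __name__ == "__main__":
    for size in (2, 13):  # the 2B and 13B LTXV variants mentioned in the thread
        for fmt in ("fp16", "q4_0"):
            print(f"{size}B weights @ {fmt}: ~{approx_weight_gib(size, fmt):.1f} GiB")
```

On those assumptions, a 13B model is roughly 24 GiB of weights at fp16 and around 6 GiB at 4-bit quantization, which is exactly the kind of quick feasibility check the parameter count enables.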

drilbo commented on So Much Blood   dynomight.net/blood/... · Posted by u/debesyla
TeMPOraL · 4 months ago
Temporarily, on the margin. A human would need multiple Snickers bars per day to survive, and can't survive on Snickers bars alone for more than a couple of days or weeks.

Also no human is anywhere close to being as knowledgeable and skilled as LLMs at all the things at the same time, so it hardly even compares.

drilbo · 4 months ago
Days? Pretty sure I could survive at least a couple of years off Snickers bars.
drilbo commented on Mistral ships Le Chat – enterprise AI assistant that can run on prem   mistral.ai/news/le-chat-e... · Posted by u/_lateralus_
omneity · 4 months ago
> Need local stuff? Llama(maybe Gemma)

You probably want to replace Llama with Qwen in there. And Gemma is not even close.

> Mistral has been consistently last place, or at least last place among ChatGPT, Claude, Llama, and Gemini/Gemma.

Mistral held the position of "workhorse open-weights base model" for a long time, and nothing precludes them from taking it again with some smart positioning.

They might not currently be leading a category, but as an outside observer I could see them (like Cohere) actively trying to find innovative business models to survive, reach PMF and keep the dream going, and I find that very laudable. I expect them to experiment a lot during this phase, and that probably means not doubling down on any particular niche until they find a strong signal.

drilbo · 4 months ago
>You probably want to replace Llama with Qwen in there. And Gemma is not even close.

Have you tried the latest, Gemma 3? I've been pretty impressed with it. Although I do agree that Qwen3 quickly overshadowed it, it seems too soon to dismiss it altogether. E.g., the 3-4B and smaller versions of Gemma seem to freak out way less frequently than similarly sized Qwen versions, though I haven't been able to rule out quantization and other factors in this just yet.

It's very difficult to fault anyone for not keeping up with the latest SOTA in this space. The fact we have several options that anyone can serviceably run, even on mobile, is just incredible.

Anyway, I agree that Mistral is worth keeping an eye on. They played a huge part in pushing the other players toward open weights and proving smaller models can have a place at the table. While I personally can't get that excited about a closed model, it's definitely nice to see they haven't tapped out.

drilbo commented on Show HN: Clippy – 90s UI for local LLMs   felixrieseberg.github.io/... · Posted by u/felixrieseberg
xyc · 4 months ago
Actually this is a good way to find product ideas. I placed a query in Grok to find posts about what people want, similar to this. It then performed multiple searches on X, including embedding search, and suggested that people want stuff like Tamagotchi, ICQ, etc. back.
drilbo · 4 months ago
I feel like these are all great examples of things people think they want. Making a post about it is one thing; when it comes to actually buying or using a product, I think the majority of nostalgic people will quickly remember why they don't actually want it in their adult life.
drilbo commented on xAI dev leaks API key for private SpaceX, Tesla LLMs   krebsonsecurity.com/2025/... · Posted by u/todsacerdoti
rcarmo · 4 months ago
One thing that sticks out to me is that there is an incorrect assumption from the journalists that having the API keys to an LLM can lead to injecting data.

People still don’t know how LLMs work and think they can be trained by interacting with them at the API level.

drilbo · 4 months ago
Unless I somehow skimmed over it, they only appear to refer to 'prompt injection'.
drilbo commented on Internet in a Box   internet-in-a-box.org/... · Posted by u/homebrewer
minhazm · 4 months ago
What would they be plugging these USB flash drives into? Underdeveloped countries / regions have very low penetration with traditional laptops / desktop computers. But nearly everyone has a smartphone of some kind, which has WiFi. That's the reason the form factor of these are mobile hotspots.
drilbo · 4 months ago
Well, I have a few flash drives with USB-C connections built in, and 1 or 2 with USB-B
drilbo commented on A $20k American-made electric pickup with no paint, no stereo, no screen   theverge.com/electric-car... · Posted by u/kwindla
lenkite · 4 months ago
Many politicians campaigning for green energy (e.g., AOC) also fly on private jets everywhere so that they can fight the oligarchy; this behavior isn't restricted to wealthy businessmen alone.
drilbo · 4 months ago
Maybe you shouldn't base your assumptions about the world on politically charged clickbait headlines... Did she and Bernie use a private jet? Quite possibly. Does that mean they fly "everywhere" on private jets? Certifiably false.
