Readit News
ricardobayes commented on Everything is correlated (2014–23)   gwern.net/everything... · Posted by u/gmays
ricardobayes · 6 days ago
Not commenting on the topic at hand, but my goodness, what a beautiful blog. That drop cap, the inline comments on the right-hand side that appear on larger screens, the progress bar: chef's kiss. This is what a labor of love looks like.
ricardobayes commented on 1976 Soviet edition of 'The Hobbit' (2015)   mashable.com/archive/sovi... · Posted by u/us-merul
ricardobayes · 14 days ago
In Hungary, The Lord of the Rings was translated by Göncz Árpád, who later went on to become President of Hungary.

ricardobayes commented on GPT-5   openai.com/gpt-5/... · Posted by u/rd
RealityVoid · 21 days ago
I mean, in certain US cities you can take a Waymo right now. It seems the adage that we overestimate change in the short term and underestimate it in the long term fits right in here.
ricardobayes · 21 days ago
Of course. My point is that "AI is going to take dev jobs" is very much like saying "self-driving will take taxi driver jobs". That never happened, and it likely won't, or only on a very, very long time scale.
ricardobayes commented on GPT-5   openai.com/gpt-5/... · Posted by u/rd
highfrequency · 21 days ago
It is frequently suggested that once one of the AI companies reaches an AGI threshold, it will take off ahead of the rest. It's interesting to note that, at least so far, the trend has been the opposite: as time goes on and the models get better, the different companies' performance clusters closer together. Right now GPT-5, Claude Opus, Grok 4, and Gemini 2.5 Pro all seem quite good across the board (i.e., they can all basically solve moderately challenging math and coding problems).

As a user, it feels like the race has never been as close as it is now. Perhaps dumb to extrapolate, but it makes me lean more skeptical about the hard take-off / winner-take-all mental model that has been pushed.

Would be curious to hear the take of a researcher at one of these firms - do you expect the AI offerings across competitors to become more competitive and clustered over the next few years, or less so?

ricardobayes · 21 days ago
AGI in 5-10 years sounds a lot like "we won't have steering wheels in cars" or "we'll sleep while the car drives" in 5-10 years. Remember those predictions? What happened to them? It all looked so promising.
ricardobayes commented on GPT-5   openai.com/gpt-5/... · Posted by u/rd
kybernetikos · 21 days ago
ChatGPT5 in this demo:

> For an airplane wing (airfoil), the top surface is curved and the bottom is flatter. When the wing moves forward:

> * Air over the top has to travel farther in the same amount of time -> it moves faster -> pressure on the top decreases.

> * Air underneath moves slower -> pressure underneath is higher

> * The pressure difference creates an upward force - lift

Isn't that explanation of why wings work completely wrong? There's nothing that forces the air to cover the top distance in the same time that it covers the bottom distance, and in fact it doesn't. https://www.cam.ac.uk/research/news/how-wings-really-work

Very strange to use a mistake as your first demo, especially while talking about how it's PhD-level.
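
For reference, a sketch of the standard correction (the textbook account, not anything from the demo): Bernoulli along a streamline in steady, incompressible flow gives

    p + (1/2) * rho * v^2 = const

so faster air over the top really does go with lower pressure there. The broken step is the equal-transit-time premise: nothing forces the upper and lower parcels to reunite at the trailing edge, and in the smoke-pulse experiments described in the linked article the air over the top actually arrives first, i.e. it moves even faster than equal transit time would predict.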

ricardobayes · 21 days ago
To me, it's weird to call it "PhD-level". To me, that means being able to take in the existing information in a very niche area and "push the boundary". I might be wrong, but to date I've never seen any LLM invent "new science", which is what makes a PhD really a PhD. It also seems very confusing that many sources mention "stone age" and "PhD-level" in the same article. Which one is it?

People seem to overcomplicate what LLMs are capable of, but at their core they are just really good next-word predictors.
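
To make "next-word predictor" concrete, here is a toy sketch of the generation loop (the vocabulary, scoring, and sampling are invented for illustration; a real model replaces fake_logits with a neural network over tens of thousands of tokens):

    import math
    import random

    VOCAB = ["the", "cat", "sat", "on", "mat", "."]

    def fake_logits(context):
        # A real LLM scores every vocabulary token with a neural net;
        # this stand-in just biases toward one fixed sentence.
        target = ["the", "cat", "sat", "on", "the", "mat", "."]
        nxt = target[len(context)] if len(context) < len(target) else "."
        return [5.0 if tok == nxt else 0.1 for tok in VOCAB]

    def softmax(xs):
        m = max(xs)
        exps = [math.exp(x - m) for x in xs]
        total = sum(exps)
        return [e / total for e in exps]

    def generate(prompt, max_tokens=8):
        out = list(prompt)
        for _ in range(max_tokens):
            probs = softmax(fake_logits(out))
            tok = random.choices(VOCAB, weights=probs)[0]  # sample the next token
            out.append(tok)
            if tok == ".":
                break
        return out

    print(" ".join(generate(["the"])))

Everything interesting lives in the scoring function; the loop around it really is this simple.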

ricardobayes commented on Open models by OpenAI   openai.com/open-models/... · Posted by u/lackoftactics
captainregex · 23 days ago
I'm still trying to understand: what is the biggest group of people that uses (or will use) local AI? Students who don't want to pay but somehow have the hardware? Devs who are price-conscious and want free agentic coding?

Local models, in my experience, can't even pull data from an image without hallucinating (Qwen 2.5 VL in that example). Hopefully local/small models keep getting better and devices get better at running bigger ones.

It feels like we do it because we can more than because it makes sense, which I am all for! I just wonder if I'm missing some kind of major use case all around me that justifies chaining together a bunch of Mac Studios or buying a really great graphics card. Tools like exo are cool and the idea of distributed compute is neat, but what edge cases truly need it so badly that it's worth all the effort?

ricardobayes · 22 days ago
I would say any company that hasn't developed its own AI. You always hear about companies "mandating" AI usage, but for the most part those are companies developing their own solutions/agents. No self-respecting company with tight opsec would allow a random "always-online" LLM that could rip your codebase either piece by piece or, if it's an IDE add-on, the whole thing at once (or at least I hope that's the case). So yeah, I'd say locally deployed LLMs/agents are a game changer.
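
As a sketch of what local deployment can look like (assuming the Hugging Face transformers library; the model name is a placeholder for whatever text-generation model you have cached locally):

    from transformers import pipeline

    # Everything below runs on local hardware: once the weights are
    # downloaded, the prompt and the code never leave the machine.
    # The model name is a placeholder; substitute any local model.
    generator = pipeline("text-generation", model="Qwen/Qwen2.5-Coder-7B-Instruct")

    result = generator(
        "Review this function for bugs:\n\ndef add(a, b):\n    return a - b\n",
        max_new_tokens=128,
    )
    print(result[0]["generated_text"])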
ricardobayes commented on Show HN: FFlags – Feature flags as code, served from the edge   fflags.com... · Posted by u/tushr
ricardobayes · 22 days ago
Why does this need to be a dependency? In my view, feature flags are core enough not to be outsourced to a third party. Then again, there are companies pulling in libraries for "isEven", so there might be a market for it.
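
For scale, a serviceable in-house version fits in a couple dozen lines. A minimal sketch (flag names and percentages invented for illustration):

    import hashlib

    # Flags live in the repo and ship with the code; no third-party dependency.
    FLAGS = {
        "new_checkout": {"enabled": True, "rollout_pct": 25},
        "dark_mode": {"enabled": True, "rollout_pct": 100},
    }

    def is_enabled(flag, user_id):
        cfg = FLAGS.get(flag)
        if not cfg or not cfg["enabled"]:
            return False
        # Hash (flag, user) so each user lands in a stable bucket per flag.
        digest = hashlib.sha256(f"{flag}:{user_id}".encode()).hexdigest()
        return int(digest[:8], 16) % 100 < cfg["rollout_pct"]

    print(is_enabled("new_checkout", "user-42"))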
ricardobayes commented on Why is GitHub UI getting slower?   yoyo-code.com/why-is-gith... · Posted by u/lr0
ricardobayes · 23 days ago
There is a whole new paradigm in UX design now: get rid of immediate page changes and loading states, keep the user on the current page while things load, and only show the new page when it's ready.
ricardobayes commented on Ferrari Status   collabfund.com/blog/ferra... · Posted by u/surprisetalk
trynumber9 · a month ago
>why don’t more companies follow Ferrari’s lead

Ford did try before. For example, the 2006 Ford GT. In comparison, the similarly priced Ferrari F430 of the era was the mass-market automobile, with about 5x as many made.

So I wonder how much one should draw from these rationalizations of prestige marketing. At the end of the day, Ferrari still uses its larger supercar production capacity to keep even Ford only an occasional entrant into the low end of the market.

ricardobayes · a month ago
Yes, although "larger" here might not mean what many people expect. The 360, which was considered a "larger" production car (by Ferrari standards), saw only about 17,000 units ever made.

u/ricardobayes

Karma: 1932 · Cake day: August 14, 2020