Readit News
atleastoptimal commented on Where is the exponential growth part of AI?    · Posted by u/anon191928
atleastoptimal · 17 hours ago
LLM ability to complete long tasks is increasing at an exponential rate

https://metr.org/blog/2025-03-19-measuring-ai-ability-to-com...
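
The METR benchmark tracks the length of task (measured in human time) that models can complete at a 50% success rate, and the headline claim is that this length has been doubling at a roughly constant interval. A minimal sketch of how such a doubling time is estimated, using invented illustrative numbers rather than METR's actual measurements:

```python
import math

# Hypothetical (years since first model, task length in minutes) points.
# These values are made up for illustration; they are not METR's data.
data = [(0, 0.1), (2, 0.8), (4, 6.4), (6, 51.2)]

# Exponential growth is linear in log-space, so fit log2(task length)
# against time with ordinary least squares.
xs = [x for x, _ in data]
ys = [math.log2(y) for _, y in data]
mx, my = sum(xs) / len(xs), sum(ys) / len(ys)
slope = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / \
        sum((x - mx) ** 2 for x in xs)

# The slope is doublings per year; its reciprocal is the doubling time.
doubling_time_years = 1 / slope
print(f"{doubling_time_years * 12:.1f} month doubling time")  # → 8.0 month doubling time
```

A straight line on this log plot is what "exponential" means here: constant doubling time, not constant absolute improvement.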

atleastoptimal commented on Ask HN: Best codebases to study to learn software design?    · Posted by u/pixelworm
atleastoptimal · 20 hours ago
Yanderedev source code
atleastoptimal commented on Ask HN: What is the biggest problem LLMs solved in your life/work    · Posted by u/mrs6969
atleastoptimal · a day ago
I used to get a deep sense of dread trying to write apps or code complex projects. I could always do well at LeetCode-style programming challenges, but making full-blown web apps and managing all the setup, initialization issues, and bug fixes was a headache that turned me off from software engineering.

Now, however, all of that is much easier with LLMs and tools like Claude Code. I don't have that dread anymore, because I can always dial up or down how much I rely on LLMs and use them as a Hail Mary, so I'm not spending hours hunting a super-specific, weird bug.

I know it means I may not be learning as much, but I see it as a worthwhile trade-off, because otherwise I probably wouldn't have gone into making apps or doing anything ambitious in the first place.

atleastoptimal commented on AI Is Not a Dev    · Posted by u/tudorizer
tudorizer · 6 days ago
As long as you're not going too far down the "the hammer will start using itself" path.

Recursive feedback loops and fast pace of improvements are priced in.

atleastoptimal · 3 days ago
They're not priced in, inasmuch as many people consider it fundamentally impossible for AGI to be reached.
atleastoptimal commented on AI Is Not a Dev    · Posted by u/tudorizer
malfist · 6 days ago
> one that is improving at an exponential rate

I don't know what AI you've been looking at, but GPT-5 is not twice as good as GPT-4, which wasn't twice as good as GPT-3.

atleastoptimal · 3 days ago
I'm referring to the long-horizon task benchmark, which has shown exponential growth since GPT-2:

https://metr.org/blog/2025-03-19-measuring-ai-ability-to-com...

atleastoptimal commented on AGI is an engineering problem, not a model training problem   vincirufus.com/posts/agi-... · Posted by u/vincirufus
zdragnar · 3 days ago
Claude code is neither sentient nor sapient.

I suspect most people envision AGI as at least having sentience. To borrow from Star Trek, the Enterprise's main computer is not at the level of AGI, but Data is.

The biggest thing that is missing (IMHO) is a discrete identity and notion of self. It'll readily assume a role given in a prompt, but lacks any permanence.

atleastoptimal · 3 days ago
Any claim of sentience is neither provable nor falsifiable. Caring about its definition has nothing to do with capabilities.
atleastoptimal commented on AGI is an engineering problem, not a model training problem   vincirufus.com/posts/agi-... · Posted by u/vincirufus
hyfgfh · 3 days ago
SVGs, date management, HTTP: there are so many simpler problems we still haven't solved, and somehow people believe they'll solve them by putting enough money into LLMs, when an LLM can't even count.

Some people understood this when they tried it with blockchain, NFTs, web3, AR, ... Any good engineer should follow the principle of energy efficiency instead of putting faith in the infinite monkey theorem.

atleastoptimal · 3 days ago
LLMs can count, and the best can now do mathematics at quite a high level.

Not sure why people insist that the state of AI from 2-3 years ago still applies today.

atleastoptimal commented on AGI is an engineering problem, not a model training problem   vincirufus.com/posts/agi-... · Posted by u/vincirufus
atleastoptimal · 3 days ago
Of course someone building AI scaffolding and infrastructure tools will say that AI scaffolding and infrastructure tools are the most important.

IME it's both, though. Better models, bigger models, and better infrastructure all help get to AGI.

atleastoptimal commented on AI Is Not a Dev    · Posted by u/tudorizer
atleastoptimal · 7 days ago
It's a new hammer

but one that is improving at an exponential pace and developing the capability to use itself with increasing reliability

It's easy to look at AI and draw a simple analogy to existing tools, because in most cases it is used as a tool, but the properties of intelligence, and its ability to make things in the world, are unique and not comparable to those of any other tool.

All tools are useful because they require intelligence to use, and the tool magnifies the aim of intelligence. When the tools become intelligent themselves, certain recursive feedback loops will start to appear. Simply look at the quality of AI code outputs from 2 years ago compared to today.

atleastoptimal commented on AI is different   antirez.com/news/155... · Posted by u/grep_it
dmead · 10 days ago
Is there anything you can tell me that will help me drop the nagging feeling that gradient descent trained models will just never be good?

I understand all of what you said, but I can't get over that fact that the term AI is being used for these architectures. It seems like the industry is just trying to do a cool parlor trick in convincing the masses this is somehow AI from science fiction.

Maybe I'm being overly cynical, but a lot of this stinks.

atleastoptimal · 10 days ago
The thing is, AI is already "good" at a lot of things. It all depends on your definition of "good" and what you require of an AI model.

It can already do a lot of things very effectively. High-reliability semantic parsing from images is just one task modern LLMs handle well.

u/atleastoptimal

Karma: 3788 · Cake day: February 23, 2023