However, now all of that is much easier with LLMs and tools like Claude Code. I don't have that dread anymore, because I can always dial up or down how much I rely on LLMs and use them as a Hail Mary, so I'm not spending hours hunting down a super specific, weird bug.
I know it means I may not be learning as much, but I see it as a worthwhile trade, because otherwise I probably would not have gotten into making apps or doing anything ambitious in the first place.
Recursive feedback loops and the fast pace of improvement are already priced in.
I don't know what AI you've been looking at, but GPT-5 is not twice as good as GPT-4, which wasn't twice as good as GPT-3.
https://metr.org/blog/2025-03-19-measuring-ai-ability-to-com...
I suspect most people envision AGI as at least having sentience. To borrow from Star Trek, the Enterprise's main computer is not at the level of AGI, but Data is.
The biggest thing that is missing (IMHO) is a discrete identity and notion of self. It'll readily assume a role given in a prompt, but lacks any permanence.
This is why some people saw through it when they tried it with blockchain, NFTs, web3, AR, ... Any good engineer should apply the principle of energy efficiency instead of having faith in the infinite monkey theorem.
Not sure why people insist that the state of AI 2-3 years ago still applies today.
IME it’s both though. Better models, bigger models, and infrastructure all help get to AGI.
but one that is improving at an exponential pace and is developing the capability to use itself with increasing reliability.
It's easy to look at AI and draw a simple analogy to existing tools, because in most cases it is used as a tool. But the properties of intelligence, and its ability to make things in the world, are unique and not comparable to any other tool.
All tools are useful because they require intelligence to use, and the tool magnifies the aims of that intelligence. When the tools become intelligent themselves, certain recursive feedback loops start to appear. Simply compare the quality of AI code output from two years ago with today's.
I understand all of what you said, but I can't get over the fact that the term AI is being used for these architectures. It seems like the industry is just pulling a parlor trick, convincing the masses that this is somehow the AI of science fiction.
Maybe I'm being overly cynical, but a lot of this stinks.
It can do a lot of things very effectively. High-reliability semantic parsing of images is just one thing that modern LLMs are very good at.