Fair enough, but he doesn't give much factual reasoning to support that. If you believe the brain is a biological computer and AI computing keeps advancing, then at some point it will be able to do the same things or better, which is what most people think of as AGI.
I wrote about that for my uni entrance exam 43 years ago and it's always just seemed like obvious common sense to me. I know Turing wrote about it before then, but I never read that - it just seems kind of obvious it'll happen.
I have made some minor games in JS with my kids using one, for example, and managed to get it to produce a game of Asteroids and Pong with them (probably heavily based on tutorials scraped from the web, of course). I had less success trying to build Frogger (again, probably because there are not so many complete examples out there). Anything truly creative or new they really struggle with, and it becomes apparent that they are pattern-matching machines without true understanding.
I wouldn't describe LLMs as useful at present and do not consider them intelligent in any sense, but they are certainly interesting.
"Right, so what the hell is this cursed nonsense? Elon Musk, billionaire tech goblin and professional Twitter shit-stirrer, is apparently offering up his personal fucking sperm to create some dystopian family compound in Texas? Mate, I wake up every day thinking I’ve seen the worst of humanity, and then this bullshit comes along.
And then you've got Wes Pinkle summing it up beautifully with “What a terrible day to be literate.” And yeah, too fucking right. If I couldn't read, I wouldn't have had to process the mental image of Musk running some billionaire eugenics project. Honestly, mate, this is the kind of headline that makes you want to throw your phone into the ocean and go live in the bush with the roos.
Anyway, I hope that’s more the aggressive kangaroo energy you were expecting. You good, or do you need me to scream about something else?"
Recognizing concepts, then grouping and manipulating similar concepts together, is what “abstraction” is. It's the fundamental essence of both "building a world model" and "thinking".
> Nothing in the link you provided is even close to "neurons, model of the world, thinking" etc.
I really have no idea how to address your argument. It’s like you’re saying,
“Nothing you have provided is even close to a model of the world or thinking. Instead, the LLM is merely building a very basic model of the world and performing very basic reasoning”.
The off-topic mention of graphics programming was because I tend to type as I think, then make corrections, and re-reading it now, the paragraph isn't great. The intent was to give an example of how I keep myself sharp by challenging what many consider "settled" knowledge, graphics programming happening to be my most recent example.
For what it's worth, English isn't my native language, and you'll have to take my word for it that no chat bots were used in generating any of the responses I've made in this thread. The fact that people are already uncertain about who's a human and who's not is worrisome.
Roger Penrose made an argument along the lines that we can see a Gödel sentence is true without being able to prove it, whereas an AI can't, but I think you can figure both are guessing in a similar pattern-recognising kind of way.