teleforce · 2 years ago
Stephen Wolfram in his tutorial article on ChatGPT, in his conclusions on the main differences between human and ChatGPT learning approaches [1]:

> When it comes to training (AKA learning) the different “hardware” of the brain and of current computers (as well as, perhaps, some undeveloped algorithmic ideas) forces ChatGPT to use a strategy that’s probably rather different (and in some ways much less efficient) than the brain. And there’s something else as well: unlike even in typical algorithmic computation, ChatGPT doesn’t internally “have loops” or “recompute on data”. And that inevitably limits its computational capability - even with respect to current computers, but definitely with respect to the brain.

[1] What Is ChatGPT Doing and Why Does It Work:

https://writings.stephenwolfram.com/2023/02/what-is-chatgpt-...

mitthrowaway2 · 2 years ago
> LLMs produce their answers with a fixed amount of computation per token

I'm not that confident that humans don't do this. Neurons are slow enough that we can't really have a very large number of sequential steps behind a given thought. Longer complex considerations are difficult (for me at least) without at least thinking out loud to cache my thoughts in auditory memory, or having a piece of paper to store and review my reasoning steps. I'm not sure this is very different from an LLM prompted to reason step by step.
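
The point above can be sketched as a toy: per-token compute is roughly fixed, so "reasoning out loud" (generating intermediate tokens) is how total sequential computation grows. This is an illustrative sketch only, not a real LLM; `forward_pass` is a hypothetical stand-in for one transformer forward pass.

```python
# Toy illustration: each generated token costs one forward "pass",
# so a step-by-step answer buys more total computation than a direct one.

def forward_pass(context):
    """Hypothetical stand-in for a transformer forward pass.
    Roughly fixed work per call (up to context length)."""
    return sum(context) % 97  # arbitrary placeholder computation

def generate(prompt_tokens, n_new_tokens):
    """Autoregressively append n_new_tokens, counting forward passes."""
    context = list(prompt_tokens)
    passes = 0
    for _ in range(n_new_tokens):
        context.append(forward_pass(context))
        passes += 1
    return context, passes

# A "direct answer" gets 1 pass; a 20-step chain of thought gets 20.
_, direct_passes = generate([1, 2, 3], 1)
_, stepwise_passes = generate([1, 2, 3], 20)
```

The generated tokens play the same role as the piece of paper in the comment: external storage that later steps can condition on.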

The main difference I can think of is that humans can learn, while LLMs have fixed weights after training. For example, once I've thought carefully and convinced myself through step-by-step reasoning, I'll remember that conclusion and fit it into my knowledge framework, potentially re-evaluating other beliefs. That's something today's LLMs don't do, but mainly for practical reasons, rather than theoretical ones.

I believe the extent of world modelling done by LLMs still remains an open question.

aeternum · 2 years ago
Yes, this is key. This idea that humans also require sequences to think was popularized by Jeff Hawkins even before all the LLM hype.

He was able to show that the equivalent of place cells (normally used to determine one's physical location) fire sequentially when humans perform tasks like listening to music or imagining feeling along the rim of a coffee cup.

The "think step-by-step" trick might just be scratching the surface of the various mechanisms we can use to give LLMs this kind of internal voice.

TillE · 2 years ago
The "world model" is basically the old school idea of AI, which has been mostly abandoned because you can get incredibly good results from just ingesting gobs of text. But I agree that it's a necessity for AGI; you need to be able to model concepts beyond just words or pixels.
PH95VuimJjqBqy · 2 years ago
The answer is that humans have genitalia.

And while that may seem trite, it's really not. You can't separate human thinking from the underlying hardware.

Until LLMs are able to experience real emotion (and emotion here really means a stick by which to lead the LLM), they will always be different from humans.

steve1977 · 2 years ago
I guess the more important aspect (although not totally unrelated) is that humans are mortal.
weregiraffe · 2 years ago
Not all humans have genitalia.
PH95VuimJjqBqy · 2 years ago
Not all humans have feet.
nittanymount · 2 years ago
LeCun's voice in this post sounds like he knows the answers for sure, haha ...
lucubratory · 2 years ago
That's his default tone. Occasionally he has something interesting to say, but the level of arrogance coming from the leader of the second-best AI group at Meta is grating.
resource0x · 2 years ago
What makes you so giggly? Fairly reasonable post IMO.
lagrange77 · 2 years ago
More of a scaling issue: Humans do continuous* online learning, while LLMs get retrained once in a while.

* I'm no expert, 'continuous' might be oversimplified.
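
The contrast the comment draws can be sketched as a toy: a learner that updates after every observation versus one that only refits periodically on accumulated data. This is a minimal illustration, not a claim about how LLM training actually works; the function names and the simple running-average "model" are invented for the sketch.

```python
# Toy contrast: per-example updates (human-like online learning) vs.
# waiting for a batch and refitting (LLM-like periodic retraining).

def online_learner(stream, lr=0.5):
    """Nudge a running estimate after every single observation."""
    w = 0.0
    for x in stream:
        w += lr * (x - w)  # small correction per example
    return w

def batch_retrained(stream, batch_size=3):
    """Ignore new data until a batch accumulates, then refit from it."""
    w = 0.0
    buffer = []
    for x in stream:
        buffer.append(x)
        if len(buffer) == batch_size:
            w = sum(buffer) / len(buffer)  # "retrain" on the batch
            buffer = []
    return w  # data left in the buffer never influenced the model
```

The batch learner's leftover buffer is the analogue of everything an LLM has seen since its last training run: present in context, but not in the weights.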

cc101 · 2 years ago
subjective experience
