torginus · 5 days ago
Considering Stallman worked in the MIT AI Lab in the era of symbolic AI, and wrote GCC (an optimizing compiler is a kind of symbolic reasoner, imo), I think he has a deeper understanding of the question than most famous people in tech.
armchairhacker · 5 days ago
Symbolic AI (GOFAI) and neural networks are very different techniques to solve the same problems. An analogy is someone who specializes in Photoshop making broad claims about painting (or vice versa) because they both create art.
torginus · 5 days ago
I have not claimed the techniques are similar - going by your example, there's a large set of overlapping skills for both Photoshop and using a paintbrush (color theory, anatomy, perspective, etc.), and from the PoV of the end user, the merits of the piece are what's important - not the process.

I'm sure that while the AI lab folks didn't have the techniques and compute to do ML like we do now, they thought a lot about the definition of intelligence and what it takes to achieve it, going from narrow, task-specific AIs to truly general intelligence.

ML/RL allowed us to create systems that, given a train/test set, can learn underlying patterns and discover connections in data without a developer having to program them explicitly.
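To make that concrete, here's a minimal scikit-learn sketch (the dataset and model choice are just for illustration): the classification rule is learned from the training split and checked on the held-out test split; nobody writes it by hand.

```python
# Minimal illustrative sketch: the model picks up the pattern from a
# train/test split; no classification rules are written by the developer.
from sklearn.datasets import load_iris
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier
from sklearn.metrics import accuracy_score

X, y = load_iris(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.25, random_state=0)

model = DecisionTreeClassifier(random_state=0)
model.fit(X_train, y_train)                           # patterns learned, not programmed
print(accuracy_score(y_test, model.predict(X_test)))  # generalization on unseen data
```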

Transformers/LLMs are a scaled-up version of this approach (I don't want to get into the weeds, as it's beside the point).

Stallman asserts that LLMs fall short of general intelligence, and I think he has a much better understanding of what that entails than most people give him credit for. Considering his AI past, I'd be surprised if he hadn't kept up with the advancements and techniques in the field at least to some degree (or that understanding said techniques would be beyond his ability).

Because of this, I'm running with the theory that he knows what he's talking about, even if he doesn't elaborate on it here.

german_dong · 5 days ago
No, modern approaches have zero overlap with Chomsky's deterministic methodology.
keybored · 5 days ago
What does Chomsky’s work have to do with AI?
ajross · 5 days ago
No, he's missed the mark here. The retreat to imprecise non-statements like "ChatGPT cannot know or understand anything, so it is not intelligence" is a flag that you're reading ideology and not analysis. I mean, it's true as far as it goes. But it's no more provable than it is for you or me; it's just a dance around the semantics of the words "understand" and "know".

In particular the quip that it's really just a "bullshit generator" is 100% correct. But also true for, y'know, all the intelligent humans on the planet.

At the end of the day, AI gets stuff wrong, as far as we can tell, for basically the same reasons that we get stuff wrong. We both infer from intuition to make our statements about life, and bolt "reasoning" and "logic" on as after-the-fact optimizations that need to be trained as skills.

(I'm a lot more sympathetic to the free software angle, btw. The fact that all these models live and grow only within extremely-well-funded private enclosures is for sure going to have some very bad externalities as the technology matures.)

IanCal · 4 days ago
I'd argue that his point is worse than this.

He's not comparing them to humans; he attributes knowledge/understanding (enough to pass his bar for "is AI") to yolov5 models, xgboost-trained trees, and, as far as I can tell, closed-source transformer-based models too. But not ChatGPT.

gaigalas · 5 days ago
> But also true for, y'know, all the intelligent humans on the planet.

That's not true.

> At the end of the day AI gets stuff wrong, as far as we can tell, for basically the same reasons that we get stuff wrong.

Also not true.

> We both infer from intuition [...]

Also not true.

classified · 5 days ago
That never stopped any know-it-all from dunning-krugering.

pupppet · 5 days ago
Or he’s just shaking his fist at the clouds again.
spongebobism · 5 days ago
why not both?
Synaesthesia · 5 days ago
He's not wrong. It's not intelligence. It's a simulacrum of intelligence. It can be useful but ought not to be trusted completely.

And it's certainly not a boon for freedom and openness.

fluidcruft · 5 days ago
"Simulacrum of intelligence" is just "artificial intelligence" with a fancier word.
JKCalhoun · 5 days ago
> ChatGPT is not "intelligence", so please don't call it "AI".

Acting Intelligent, works for me.

solumunus · 5 days ago
How do we know we're not just acting intelligent?
tigrezno · 5 days ago
So can we consider it "intelligence" when that simulacrum is orders of magnitude stronger?
Synaesthesia · 5 days ago
It's brilliant at recapitulating the data it's trained on. It can be extremely useful. But it's still nowhere close to the capability of the human brain, not that I expect it to be.

Don't get me wrong, I think they are remarkable, but I still prefer to call it an LLM rather than AI.

IanCal · 4 days ago
He's not talking about intelligence though; he's saying it has no knowledge or understanding, whereas something like a decision tree or a neural-net object-recognition model does.
fooker · 5 days ago
How do I know you are not a 'simulacrum of intelligence'?
Synaesthesia · 5 days ago
We are still the standard by which intelligence is judged.
brainless · 5 days ago
I prefer using LLM. But many people will ask what an LLM is, and then I use AI and they get it. Unfortunate.

At the same time, LLMs are not a bullshit generator. They do not know the meaning of what they generate, but the output is important to us. It is like expecting a cooker to know the egg is being boiled. I care about the egg; the cooker can do its job without knowing what an egg is. Still very valuable.

Totally agree with the platform approach. More models should be available to run on your own hardware, or at least on third-party cloud provider hardware. But Chinese models have come to dominate this space.

ChatGPT may not last long unless they figure out something, given the "code red" situation already inside their company.

H8crilA · 5 days ago
I also do not know the meaning of what I generate. This is especially applicable to internal states, such as thoughts and emotions, which often become fully comprehensible only after a significant delay - up to a few years. There's even a process dedicated to doing this consistently, called journaling.
saltwatercowboy · 5 days ago
While I grasp your point, I find the idea that human consciousness is in any way comparable to generated content incredibly demeaning.
contrast · 5 days ago
"They do not know the meaning of what they generate but the output is important to us."

Isn't that a good definition of what bullshit is?

mort96 · 5 days ago
Frankly, bullshit is the perfect term for it, because ChatGPT doesn't know that it's wrong. A bullshit artist isn't someone whose primary goal is to lie. A bullshit artist is someone whose primary goal is to achieve something (a sale, impressing someone, appearing knowledgeable, whatever) without regard for the truth. The act of bullshitting isn't the same as the act of lying. You can, e.g., bullshit your way through a conversation on a technical topic you know nothing about and be correct by happenstance.
rvz · 5 days ago
Before someone replies with a fallacious comparison along the lines of: "But humans do 'bullshitting' as well, humans also 'hallucinate' just like LLMs do".

Except that LLMs have no mechanism for transparent reasoning, have no idea about what they don't know, and will go to great lengths to generate fake citations to convince you that they are correct.

lifthrasiir · 5 days ago
That interpretation is too generous; the word "bullshit" is generally a value judgement and implies that you are almost always wrong, even though you might be correct from time to time. Current LLMs are way past that threshold, making them much more dangerous for a certain group of people.
card_zero · 5 days ago
I guess it's a fair point that slop has its own unique flavor, like eggs.
hulitu · 5 days ago
> At the same time, LLMs are not a bullshit generator. They do not know the meaning of what they generate but the output is important to us.

They are a bullshit generator. And "the output" is only important for the CIA.

dkyc · 5 days ago
Absolutely hilarious that he has a "What's bad about" section as a main navigation, very self-aware.
poisonborz · 5 days ago
"Posting on Reddit requires running nonfree JavaScript code."

I have much respect for him, but this is at the level of old-man-shouting-at-clouds. Criticism should be more targeted and not just rehash the same arguments, even if true.

mvid · 5 days ago
Well, in the Reddit case, they used to have APIs you could build free, OSS clients against, and they specifically removed them.
fooker · 5 days ago
Self-aware would be having the "What's bad about ->" {Richard Stallman, GPL, GNU, Emacs} entries.
mabedan · 5 days ago
It’s a little like saying calculators cannot do math because they don’t really understand numbers or arithmetic and they just do bit operations.

I understand the sentiment, but the reality is that it does with words pretty much what you'd expect a person to do. It lacks some fundamentals like creativity, and that's why it's not doing real problem-solving tasks, but it's well capable of doing the mundane tasks that the average person gets paid to do.

And when it comes to trust and accuracy, if I ask it a question about the German tax system, it will look up sources and may give an answer with an inaccuracy or two, but it will for sure be more accurate than whatever I would be able to produce after two hours of research.

m463 · 5 days ago
> calculators cannot do math because they don’t really understand numbers

I don't think that's an appropriate analogy at all.

He's not saying that AI is not useful. He's saying that it doesn't understand or reason, so it does not generate the truth.

So you can ask chatgpt a question about the german tax system, but it would be a mistake to have it do your taxes.

In the same way, a calculator could help with your taxes, because it has been engineered to give precise answers for some math operations, but it cannot do your taxes.

mabedan · 5 days ago
> so it does not generate the truth.

It's equally true for humans, the benchmark of intelligence. Most shortcomings in our working life come from miscommunication and misunderstanding requirements, and then simply from incompetence and trivial mistakes.

torginus · 5 days ago
If we define intelligence as the ability to understand an unfamiliar phenomenon and create a mental model of it, these models are not intelligent (at least at inference time), as they cannot update their own weights.

I'm not sure whether these models are trained using unsupervised learning and are capable of training themselves to some degree, but even if so, the learning process of gradient descent is very inefficient. So by the commonly understood definition of intelligence (the ability to figure out an unfamiliar situation), the intelligence of an inference-only model is zero. Models that do test-time training might be intelligent to some degree, but I wager their current intelligence is marginal at best.
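To illustrate the distinction (a toy numpy sketch, not how any real LLM works): an inference-only model answers with frozen weights, while a test-time training step actually changes them.

```python
# Toy sketch: frozen-weight inference vs. a single test-time gradient step.
import numpy as np

w = np.array([0.5, -0.2])                  # "pretrained" weights

def infer(x):
    return w @ x                           # inference only: w never changes here

x_new, y_new = np.array([1.0, 2.0]), 3.0   # an unfamiliar example

print(infer(x_new))                        # inference-only model: cannot adapt to it

lr = 0.1
error = infer(x_new) - y_new               # squared-error gradient for a linear model
w = w - lr * error * x_new                 # test-time training: the weights update
print(infer(x_new))                        # prediction moves toward y_new
```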

IanCal · 5 days ago
Intelligence and learning really seem distinct. Does someone who only has short-term memory no longer count as intelligent?

But also he does count much simpler systems as AI, so it's not about learning on the fly or being anything like human intelligence.

torginus · 5 days ago
An AI/person that can solve novel problems (whether by being taught or not) is a more general kind of intelligence than one that cannot.

It's a qualitatively better intelligence.

An intelligence that is better at solving problems that fall within its training set is quantitatively better.

Likewise, an intelligence that learns faster is quantitatively better.

To give a concrete and simple example, take a simple network trained to recognize digits. The network can be of arbitrary quality; it can be robust or not, fast or slow, but it can't do more than digits.

Another NN that can learn to recognize more symbols is a more general kind of AI, which again introduces another set of qualitative measures, namely how much training it needs to learn a new symbol robustly.
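As a rough scikit-learn sketch of that difference (the incremental learner is just a stand-in for a network; the numbers are illustrative): the first model's output space is fixed to ten digits, while the second reserves room for a new symbol it can later be taught.

```python
# Rough sketch: fixed output space vs. room to learn a new symbol.
import numpy as np
from sklearn.datasets import load_digits
from sklearn.linear_model import SGDClassifier

X, y = load_digits(return_X_y=True)

digits_only = SGDClassifier(random_state=0)
digits_only.partial_fit(X[:500], y[:500], classes=np.arange(10))
# Further partial_fit calls can make it better at digits, but it can
# never predict anything outside the ten classes it was wired for.

more_general = SGDClassifier(random_state=0)
more_general.partial_fit(X[:500], y[:500], classes=np.arange(11))
# This one reserved an 11th class; how many labelled examples of the new
# symbol it needs before recognizing it robustly is the quantitative
# measure of generality mentioned above.
```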

'Intelligence' is a somewhat vague term, as any of the previous measures I've defined could be called intelligence (training accuracy, learning speed, inference speed, coverage of the training set, etc.).

You could claim that a narrower kind of intelligence that exists without learning (which is what ChatGPT is, and what you gave as an example with the person who only has short-term memory) is still intelligence, but then we are arguing semantics.

Inference-only LLMs are clearly missing something and are lacking in generality.

hulitu · 5 days ago
> Intelligence and learning really seem distinct.

Yeah. Some people are intelligent, but never learn. /s

am17an · 5 days ago
All an LLM does is hallucinate; some hallucinations are useful. - someone on the internet
JKCalhoun · 5 days ago
Agree. I confess to having hallucinated through a good portion of my life though (not medicinally, mind you).
fluidcruft · 5 days ago
Boy, are we going to have egg on our faces when we finally all agree that consciousness and qualia are nothing but hallucinations.
fooker · 5 days ago
> ChatGPT cannot know or understand anything, so it is not intelligence. It does not know what its output means. It has no idea that words can mean anything.

This argument does a great job anthropomorphizing ChatGPT while trying to discredit it.

The part of this rant I agree with is "Doing your own computing via software running on someone else's server inherently trashes your computing freedom."

It's sad that these AI advancements are largely being made on software you cannot easily run or develop on your own.