entropyneur · 3 months ago
This article seems to fall straight into the trap it aims to warn us about. All this talk about "true" understanding, embodiment, etc. is needless anthropomorphizing.

A much better framework is to think of intelligence simply as the ability to make predictions about the world (including conditional ones like "what will happen if we take this action"). Whether that's achieved through "true understanding" (however you define it; I personally doubt you can) or "mimicking" has no bearing on most of the questions about the impact of AI that we are trying to answer.

keiferski · 3 months ago
It matters if your civilizational system is built on assigning rights or responsibilities to things because they have consciousness or "interiority." Intelligence fits here just as well.

Currently, many of our legal systems are set up this way, if in a fairly arbitrary fashion. Consider, for example, how sentience is used as a metric for whether an animal ought to receive additional rights. Or how murder (which requires deliberate, conscious thought) is punished more harshly than manslaughter (which can be accidental or careless).

If we just treat intelligence as a descriptive quality and apply it to LLMs, we quickly realize the absurdity of saying a chatbot is somehow equivalent, consciously, to a human being. At least, to me it seems absurd. And it indicates the flaws of grafting human consciousness onto machines without analyzing why.

AIPedant · 3 months ago
"Making predictions about the world" is a reductive and childish way to describe intelligence in humans. Did David Lynch make Mulholland Drive because he predicted it would be a good movie?

The most depressing thing about AI summers is watching tech people cynically try to define intelligence downwards to excuse failures in current AI.

entropyneur · 3 months ago
> Did David Lynch make Mulholland Drive because he predicted it would be a good movie?

He made it because he predicted that it would have some effects enjoyable to him. Without knowing David Lynch personally, I can assume that he made it because he predicted other people would like it. Although, of course, it might have been some other goal. But unless he was completely unlike anyone I've ever met, it's safe to assume that before he started he had a picture of a world with Mulholland Drive existing in it that was somehow better than the current world without it. He might or might not have been aware of that, though.

Anyway, that's too much analysis of Mr. Lynch. The implicit question is how soon an AI will be able to make a movie that you, AIPedant, will enjoy as much as you've enjoyed Mulholland Drive. And I maintain that how similar AI is to human intelligence, or how much "true understanding" it has, is completely irrelevant to answering that question.

koonsolo · 3 months ago
I look at it the complete opposite way: humans keep defining intelligence upwards to make sure they can see themselves as better than a computer.

It's clear that humans consider humans as intelligent. Is a monkey intelligent? A dolphin? A crow? An ant?

So I ask you, what is the lowest form of intelligence to you?

(I'm also a huge David Lynch fan by the way :D)

throwawayqqq11 · 3 months ago
Well yes, any creation tries to anticipate some reaction, be it the audience's, the environment's, or only the creator's own.

A prediction is just a reaction to a present state, which fits the simplest definition of intelligence: the ability to (sense and) react to something. I like to use this definition instead of "being able to predict" because it's more generic.

The more sophisticated (and directed) the reaction is, the more intelligent the system must be. Following this logic, even a traffic light is intelligent, at least more intelligent than a simple rock.

From that perspective, the question of why a creator produced a piece of art becomes unimportant for determining intelligence, since the simple fact that they did is already a sign of intelligence.

simianwords · 3 months ago
"David Lynch made Mullholland Drive because he was intelligent" is also absurd.
MrScruff · 3 months ago
It may be reductive, but that doesn't make it incorrect. I would certainly agree that creating and appreciating art are highly emergent phenomena in humans (as is, for example, humour), but that doesn't mean I don't think they're rooted in fitness functions and our evolved brains' desire for approval from our tribal peer group.

Reductive arguments may not give us an immediate path forward to reproducing these emergent phenomena in artificial brains, but it's also the case that emergent phenomena are by definition impossible to predict - I don't think anyone predicted the current behaviours of LLMs, for example.

keeda · 3 months ago
> "Making predictions about the world" is a reductive and childish way to describe intelligence in humans.

It also happens to be a leading theory in neuroscience: https://news.ycombinator.com/item?id=45058056

pu_pe · 3 months ago
How would you define intelligence? Surely not by the ability to make a critically acclaimed movie, right?
WithinReason · 3 months ago
He was trying to predict what movie would create the desired reaction from his own brain. That's how creativity works; it's just prediction.
DonaldFisk · 3 months ago
I think that intelligence requires, or rather, is the development and use of a model of the problem while the problem is being solved, i.e. it involves understanding the problem. Accurate predictions, based on extrapolations made by systems trained using huge quantities of data, are not enough.
ACCount37 · 3 months ago
From a practical standpoint, all the talk of "true understanding", "sentience" and the likes is pointless.

The only real and measurable thing is performance. And the performance of AI systems only goes up.

vrighter · 3 months ago
But it only goes up in the sense that it's getting closer to a horizontal asymptote, which is not really that good.
cantor_S_drug · 3 months ago
Imagine an LLM is conscious (as Anthropic wants us to believe). Imagine the LLM is made to train on far more data than its parameter count allows for. Am I hurting the LLM by causing it intense cognitive strain?
entropyneur · 3 months ago
I agree that whether AI is conscious is an important question. In fact, I think it's the most important question, above even our own existential crisis. Unfortunately, it's also completely hopeless at our current level of knowledge.
adastra22 · 3 months ago
Why would that hurt?
wagwang · 3 months ago
Predict and create, that's all that matters.
theturtlemoves · 3 months ago
I've always had the feeling that AI researchers want to build their own human without changing diapers being part of the process. Just skip to adulthood, please, and learn to drive a car without the experience of bumping into things and hurting yourself.

> Language doesn't just describe reality; it creates it.

I wonder if this is a statement from the discussed paper or from the blog author. Haven't found the original paper yet, but this blog post very much makes me want to read it.

ta20240528 · 3 months ago
> Language doesn't just describe reality; it creates it.

I never understand these kinds of statements.

Does the sun not exist until we have a word for it, did "under the rock" not exist for dinosaurs?

keiferski · 3 months ago
I think create is the wrong word choice here. Shaping reality is a better one, as it doesn't hold the implication that before language, nothing existed.

Think of it this way, though: the divisions that humans make between objects in the world are largely linguistic ones. For example, we say that the Earth is such-and-such an ecosystem with certain species occupying it. But this is more like a convenient shorthand, not a totally accurate description of reality. A more accurate description would be something like, ever-changing organisms undergo this complex process that we call evolution, and are all continually changing, so much so that the species concept is not really that clear, once you dig into it.

https://plato.stanford.edu/entries/species/

Where it really gets interesting, IMO, is when these divisions (which originally were mostly just linguistic categories) start shaping what's actually in the world. The concept of property is a good example. Originally it's just a legal term, but over time, it ends up reshaping the actual face of the earth, ecosystems, wars, migrations, on and on.

cpa · 3 months ago
The sun can mean different things to different people. We usually think of it as the physical star, but for some ancient civilizations it may have been seen as a person or a god. Living with these different representations can, in a very real way, shape the reality around you. If you did not have a word for freedom, would as many desire it?
sanxiyn · 3 months ago
I am not sure how your sun example relates. Language is not the whole of reality, but it is clearly part of reality. The memory engram of Coca-Cola is encoded in billions of human brains all over the world, and those are arrangements of atoms.
rolisz · 3 months ago
There are some folks (like Donald Hoffman) who believe that consciousness is what creates reality. He believes consciousness is the base layer of reality and that we make up physical reality.
sharikous · 3 months ago
> I've always had the feeling that AI researchers want to build their own human without changing diapers being part of the process. Just skip to adulthood, please, and learn to drive a car without the experience of bumping into things and hurting yourself.

I partially agree, but the idea with AI is that you need to bump into things and hurt yourself only once. Then you have a good driver you can replicate at will.

degamad · 3 months ago
I think this might be the paper being referenced:

Melanie Mitchell (2021) "Why AI is Harder Than We Think." https://arxiv.org/abs/2104.12871

That sentence is not from this paper.

namro · 3 months ago
*skip to slavery
simianwords · 3 months ago
> But that still leaves a crucial question: can we develop a more precise, less anthropomorphic vocabulary to describe AI capabilities? Or is our human-centric language the only tool we have to reason about these new forms of intelligence, with all the baggage that entails?

I don't really get the problem with this. I think "reasoning" is a fair and proper name for what an LLM does here: it takes time and spits out tokens that it recursively feeds back in to get a much better output than it otherwise would have. Is it actually reasoning with a brain the way a human would? No. But it's close enough that I don't see the problem with calling it "reasoning". What's the fuss about?
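
Mechanically I just mean something like this sketch (with a hypothetical llm(prompt) completion function standing in for the model; the real loop is token by token, but the shape is the same):

    def reason(llm, question, steps=3):
        # llm: hypothetical prompt -> completion function (any model API).
        # Each round's output is fed back in as context, so later
        # tokens can build on earlier ones.
        scratchpad = ""
        for _ in range(steps):
            thought = llm(
                f"Question: {question}\n"
                f"Thoughts so far:\n{scratchpad}\n"
                "Continue reasoning step by step:"
            )
            scratchpad += thought + "\n"
        return llm(
            f"Question: {question}\n"
            f"Reasoning:\n{scratchpad}\n"
            "Final answer:"
        )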

keiferski · 3 months ago
Are swimming and sailing the same, because they both have the result of moving through the water?

I'd say, no, they aren't, and there is value in understanding the different processes (and labeling them as such), even if they have outputs that look similar/identical.

iLoveOncall · 3 months ago
It has absolutely nothing to do with reasoning, and I don't understand how anyone could think it's "close enough".

Reasoning models are simply answering the same question twice with a different system prompt. It's a normal LLM with an extra technical step. Nothing else.

tim333 · 3 months ago
The problem is that fuzzy language can make debate poor: it becomes about the definition of words rather than about reality. The answer, I think, is to avoid that and find things you can be clear about. A famous example is the Turing test. Rather than letting debates on whether machines can think get bogged down in endless variations of how people define thinking, Turing looked at whether machines could be told apart from humans, which he discussed in his paper.

_1tem · 3 months ago
I would add a fifth fallacy: assuming what we humans do can be reduced to “intelligence”. We are actually very irrational. Humans are driven strongly by Will, Desire, Love, Faith, and many other irrational traits. Has an LLM ever demonstrated irrational love? Or sexual desire? How can it possibly do what humans do without these?
peterashford · 3 months ago
Yeah, I think that's an important dimension. David Hume said that there was no action without passion, and I think that's a key difference with AIs. They sit there passive until we interact with them. They don't want anything; they don't have goals, desires, motivations. The emotional part of the human psyche does a lot of work: we aren't just calculating sums.
ehnto · 3 months ago
The idea that any of those attributes could arise out of an LLM would be surprising, to say the least. LLMs do not maintain a continuum of thought within which those things could exist. In humans, those things are not just thoughts anyway; they are a complex mix of chemical signals, physical signals, thoughts, memories, and so on. So complex that we barely understand it, even though we live it and have studied it for centuries.
alwinaugustin · 3 months ago
For all its advanced capabilities, the LLM remains a glorified natural language interface. It is exceptionally good at conversational communication and synthesizing existing knowledge, making information more accessible and in some cases, easier to interact with. However, many of the more ambitious applications, such as so-called "agents," are not a sign of nascent intelligence. They are simply sophisticated workflows—complex combinations of Python scripts and chained API calls that leverage the LLM as a sub-routine. These systems are clever, but they are not a leap towards true artificial agency. We must be cautious not to confuse a powerful statistical tool with the dawn of genuine machine consciousness.
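
To make the "sophisticated workflows" point concrete, the agent pattern is roughly this sketch (call_llm and the toy tools here are hypothetical stand-ins, not any particular framework's API):

    # Hypothetical stand-ins: a completion call and two toy "tools".
    def call_llm(prompt):
        raise NotImplementedError  # whichever model API you use

    TOOLS = {
        "search": lambda q: "...search results for " + q,
        "calc": lambda expr: str(eval(expr)),  # toy only; eval is unsafe
    }

    def agent(task, max_steps=5):
        history = "Task: " + task
        for _ in range(max_steps):
            # The LLM is only a sub-routine choosing the next step;
            # everything around it is plain control flow.
            decision = call_llm(
                history + "\nReply 'tool: argument' or 'FINISH: answer'."
            )
            name, _, arg = decision.partition(":")
            if name.strip() == "FINISH":
                return arg.strip()
            result = TOOLS[name.strip()](arg.strip())
            history += "\n" + decision + "\n-> " + result
        return history

The loop, the tool registry, and the stopping rule are all ordinary code; only the call_llm line touches the model.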
shubhamjain · 3 months ago
> The primary counterargument can be framed in terms of Rich Sutton's famous essay, "The Bitter Lesson," which argues that the entire history of AI has taught us that attempts to build in human-like cognitive structures (like embodiment) are always eventually outperformed by general methods that just leverage massive-scale computation

This reminds me of Douglas Hofstadter, of Gödel, Escher, Bach fame. He rejected all of these statistical approaches to creating intelligence and dug deep into the workings of the human mind [1]. Often in the most eccentric ways possible.

> ... he has bookshelves full of these notebooks. He pulls one down—it’s from the late 1950s. It’s full of speech errors. Ever since he was a teenager, he has captured some 10,000 examples of swapped syllables (“hypodeemic nerdle”), malapropisms (“runs the gambit”), “malaphors” (“easy-go-lucky”), and so on, about half of them committed by Hofstadter himself.

>

> For Hofstadter, they’re clues. “Nobody is a very reliable guide concerning activities in their mind that are, by definition, subconscious,” he once wrote. “This is what makes vast collections of errors so important. In an isolated error, the mechanisms involved yield only slight traces of themselves; however, in a large collection, vast numbers of such slight traces exist, collectively adding up to strong evidence for (and against) particular mechanisms.”

I don't know when, where, or how the next leap toward AGI will come, but it's very likely it will come through brute-force computation (unfortunately). So much for fifty years of observing Freudian slips.

[1]: https://www.theatlantic.com/magazine/archive/2013/11/the-man...

CuriouslyC · 3 months ago
Brute force will always be part of the story, but it's not the solution. It just allows us to take an already working solution and make it better.
ggm · 3 months ago
It's statistics, linear programming, and shamanism.
tim333 · 3 months ago
>...the most important fallacy. It's the deep-seated assumption that intelligence is, like software, a form of pure information processing that can be separated from its body.

I think he gets into a muddle on that one. If something online can provide smarter thinking and better answers to questions than I can, then I figure it's intelligent, and it doesn't matter whether it's an LLM, a human, or a disembodied spirit that somehow happens to be online.

He sort of gets there from the fact that human minds aren't disembodied from their brains, but that's a different thing.