slibhb · 5 months ago
LLMs are statistical models trained on human-generated text. They aren't the perfectly logical "machine brains" that Asimov and others imagined.

The upshot of this is that LLMs are quite good at the stuff that he thinks only humans will be able to do. What they aren't so good at (yet) is really rigorous reasoning, exactly the opposite of what 20th century people assumed.

beloch · 5 months ago
What we used to think of as "AI" at one point in time becomes a mere "algorithm" or "automation" by another point in time. A lot of what Asimov predicted has come to pass, very much in the way he saw it. We just no longer think of it as "AI".

LLMs are just the latest form of "AI" that, for a change, doesn't quite fit Asimov's mold. Perhaps it's because they're being designed to replace humans in creative tasks rather than liberate humans to pursue them.

israrkhan · 5 months ago
Exactly... as someone said, "I need AI to do my laundry and dishes, while I can focus on art and creative stuff"... But AI is doing the exact opposite, i.e. creative stuff (drawing, poetry, coding, document creation, etc.), while we are left to do the dishes and laundry.
protocolture · 5 months ago
The bottom line from Kasparov's book on AI was that AI researchers want to pursue AGI, but every decade they are forced to release something to generate revenue, and it's branded as AI until the next time.

And often they get so caught up supporting the latest fake AI craze that they don't get to research AGI.

Lerc · 5 months ago
"LLMs are statistical models"

I see this referenced over and over again to trivialise AI as if it is a fait accompli.

I'm not entirely sure why invoking statistics feels like a rebuttal to me. Putting aside the fact that LLMs are not purely statistics: even if they were, what proof is there that you cannot make a statistical intelligent machine? It would not at all surprise me to learn that someone has made a purely statistical Turing-complete model. To then argue that it couldn't think is to say that computers can never think, and by that, and the fact that we think, you are invoking a soul, God, or Penrose.

lelandbatey · 5 months ago
In this one case it's not meant to trivialize, it's meant to point out that LLMs don't behave the way we thought that AI would behave. We thought we'd have 100% logically-sound thinking machines because we built them on top of digital logic. We thought they'd be obtuse, we thought they'd be "book smart but not wise". LLMs are just different from that; hallucinations, the whole "fancy words and great sentences but no substance to a paragraph", all that is different from the rigid but perfect brains we thought AI would bring. That's what "statistical machine" seems to be trying to point out.

It was assumed that if you asked the same AI the same question, you'd get the same answer every time. But that's not how LLMs work (I know you can seed them the same every time and get the same output, but we don't do that, so how we experience them is different).
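
A minimal sketch of the seeding point, assuming the Hugging Face transformers library and using "gpt2" as a stand-in for any small causal LM: with greedy decoding (or a fixed RNG seed when sampling), the same prompt yields the same output every run, which is exactly what deployed chat products don't do.

    # Minimal sketch, assuming `torch` and `transformers` are installed;
    # "gpt2" is just a stand-in for any small causal language model.
    import torch
    from transformers import AutoModelForCausalLM, AutoTokenizer

    torch.manual_seed(0)  # fix the RNG so any sampling is reproducible

    tokenizer = AutoTokenizer.from_pretrained("gpt2")
    model = AutoModelForCausalLM.from_pretrained("gpt2")
    inputs = tokenizer("The Three Laws of Robotics are", return_tensors="pt")

    # Greedy decoding (do_sample=False) is fully deterministic:
    # the same prompt produces the same continuation every run.
    out = model.generate(**inputs, max_new_tokens=20, do_sample=False)
    print(tokenizer.decode(out[0], skip_special_tokens=True))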

vacuity · 5 months ago
Personally, I have a negative opinion of LLMs, but I agree completely. Many people are motivated to reject LLMs solely because they see them as "soulless machines". Judge based on the facts of the matter, and make your values clear if you must bring them into it, but don't pretend you're not applying values when you are. You can do worse: kneejerk emotional reactions are just pointless.
slibhb · 5 months ago
I did not in any way "trivialise AI". LLMs are amazing and a massive accomplishment. I just wanted to contrast them with Asimov's conception of AI.

> I'm not entirely sure why invoking statistics feels like a rebuttal to me. Putting aside the fact that LLMs are not purely statistics: even if they were, what proof is there that you cannot make a statistical intelligent machine? It would not at all surprise me to learn that someone has made a purely statistical Turing-complete model. To then argue that it couldn't think is to say that computers can never think, and by that, and the fact that we think, you are invoking a soul, God, or Penrose.

I don't follow this. I don't believe that LLMs are capable of thinking. I don't believe that computers, as they exist now, are capable of thinking (regardless of the program they run). I do believe that it is possible to build machines that can think -- we just don't know how.

To me, the strange move you're making is assuming that we will "accidentally" create thinking machines while doing AI research. On the contrary, I think we'll build thinking, conscious machines after understanding our own consciousness, or at least the consciousness of other animals, and not before.

BeetleB · 5 months ago
Reminds me of an old math professor I had. Before word processors, he'd write up the exam on paper, and the department secretary would type it up.

Then when word processors came around, it was expected that faculty members would type it up themselves.

I don't know if there were fewer secretaries as a result, but professors' lives got much worse.

He misses the old days.

zusammen · 5 months ago
To be truthful, though, that’s only like 0.01 percent of the “academia was stolen from us and being a professor (if you ever get there at all) is worse” problem.

wubrr · 5 months ago
> LLMs are statistical models trained on human-generated text.

I mean, not only human-generated text. Also, human brains are arguably statistical models trained on human-generated/collected data as well...

slibhb · 5 months ago
> Also, human brains are arguably statistical models trained on human-generated/collected data as well...

I'd say no, human brains are "trained" on billions of years of sensory data. A very small amount of that is human-generated.

827a · 5 months ago
Maybe; at some level, are dogs' brains also simple sensory-collecting statistical models? A human baby and a dog are born on the same day; that dog never leaves that baby's side for 20 years. It sees everything the baby sees, it hears everything the baby hears, and it is given the opportunity to interact with its environment in roughly the same way the human baby does, to the degree to which they are both physically capable. The intelligence differential after that time will still be extraordinary.

My point in bringing up that metaphor is to focus the analogy: When people say "we're just statistical models trained on sensory data", we tend to focus way too much on the "sensory data" part, which has led to for example AI manufacturers investing billions of dollars into slurping up as much human intellectual output as possible to train "smarter" models.

The focus on the sensory input inherently devalues our quality of being; it implies that who we are is predominantly explicable by the world around us.

However: we should be focusing on the "statistical model" part. Even if it is accurate to holistically describe the human brain as a statistical model trained on sensory data (which I have doubts about, but those are fine to leave to the side), it's very clear that the fundamental statistical model itself is simply so far superior in human brains that comparing it to an LLM is like comparing us to a dog.

It should also be a focal point for AI manufacturers and researchers. If you are on the hunt for something along the spectrum of human-level intelligence, and during this hunt you are providing it ten thousand lifetimes of sensory data to produce something that, maybe, if you ask it right, can behave similarly to a human who has trained in the domain for only a few years: you're barking up the wrong tree. What you're producing isn't even on the same spectrum; that doesn't mean it isn't useful, but it's not human-like intelligence.

janalsncm · 5 months ago
> Isaac Asimov describes artificial intelligence as “a phrase that we use for any device that does things which, in the past, we have associated only with human intelligence.”

This is a pretty good definition, honestly. It explains the AI Effect quite well: calculators aren’t “AI” because it’s been a while since humans were the only ones who could do arithmetic. At one point they were, though.

azinman2 · 5 months ago
Although calculators can now do things almost no human can do, or at least not in any reasonable time, most (now) wouldn't call them AI. They're a tool with a very limited domain.
janalsncm · 5 months ago
That’s my point, it’s not AI now. It used to be.
saalweachter · 5 months ago
I mean, at one point "calculator" was a job title.
timewizard · 5 months ago
The abacus has existed for thousands of years. Those who had the job of "calculator" also used pencil and paper to manage larger calculations which they would have struggled to do without any tools.

That's humanity. We're tool users above anything else. This gets lost.

musicale · 5 months ago
And "computer".
aszantu · 5 months ago
A funny thing about Asimov was how he came up with the laws of robotics and then wrote cases where they don't work. There are a few that I remember; one where a robot was lying because a bug in his brain gave him empathy and he didn't want to hurt humans.
nitwit005 · 5 months ago
I was always a bit surprised other sci fi authors liked the "three laws" idea, as it seems like a technological variation of other stories about instructions or wishes going wrong.
buzzy_hacker · 5 months ago
Same here. A main point of I, Robot was to show why the three laws don't work.
nthingtohide · 5 months ago
Narratives build on top of each other so that more complex narratives can be built. This is also the reason Family Guy can speedrun through all the narrative arcs developed by culture in a 30-second clip.

Family Guy Nasty Wolf Pack

https://youtu.be/5oW9mNbMbmY

The perfect wish to outsmart a genie | Chris & Jack

https://youtu.be/lM0teS7PFMo

pfisch · 5 months ago
I mean, now we call the three laws "alignment", but it honestly seems inevitable that it will go wrong eventually.

That, of course, isn't stopping us from marching forward in the name of progress.

nix-zarathustra · 5 months ago
>he came up with the laws of robotics and then wrote cases where they don't work. There are a few that I remember; one where a robot was lying because a bug in his brain gave him empathy and he didn't want to hurt humans.

IIRC, none of the robots broke the laws of robotics; rather, they ostensibly broke the laws but were later found, on investigation, to have been following them because of some quirk.

hinkley · 5 months ago
And one where a robot was sacrificing a few for the good of the species. You can save more future humans by killing a few humans today who are causing trouble.
pfisch · 5 months ago
Isn't that the plot of Westworld season 3?
kagakuninja · 5 months ago
In the Foundation books, he revealed that robots were involved behind the scenes, and were operating outside of the strict 3 laws after developing the concept of the 0th law.

>A robot may not harm humanity, or, by inaction, allow humanity to come to harm

Therefore a robot could allow some humans to die, if the 0th law took precedence.

creer · 5 months ago
A good conceit or theme by an author on which to base a series of books that will sell? Not everything is an engineering or math project.
soulofmischief · 5 months ago
That is still one of my favorite stories of all time. It really sticks with you. It's part of the I, Robot anthology.
chuckadams · 5 months ago
It certainly is liberating all our creative works from our possession...
vonneumannstan · 5 months ago
Intellectual Property is a questionable idea to begin with...
chuckadams · 5 months ago
It's not the loss of ownership I'm lamenting, it's the loss of production by humans in the first place.
immibis · 5 months ago
If we're abolishing it, we have to really abolish it, both ways: not abolish companies' responsibilities but keep their rights, while abolishing individuals' rights but keeping their responsibilities.
pera · 5 months ago
It's for sure less questionable than the current proposition of letting a handful of billionaires exploit the effort of millions of workers, without permission and completely disregarding the law, just for the sake of accumulating more power and more billions.

Sure, patent trolls suck, and so does the MAFIAA, but a world where creators have no means to subsist, where everything you do will be captured by AI corps without your permission, just to be regurgitated into a model for a profit, sucks way, way more.

adamsilkey · 5 months ago
How so? Even in a perfectly egalitarian world where no one had to compete for food or resources, there would still be competition in art for attention and time.
palmotea · 5 months ago
> Intellectual Property is a questionable idea to begin with...

I know! It's totally and completely immoral to give the little guy rights against the powerful. It infringes in the privileges and advantages of the powerful. It is the Amazons, the Googles, the Facebooks of the world who should capture all the economic value available. Everyone else must be content to be paid in exposure for their creativity.

mrdependable · 5 months ago
Why do you say that?
justonceokay · 5 months ago
If we are headed to a Star Trek future of luxury communism, there will definitely be growing pains as the things we value become valueless within our current economic system. Even though the book itself is so-so IMO, Down and Out in the Magic Kingdom provides a look at a future economy where there is an infinite supply of physical goods, so the only economy is that of reputation. People compete for recognition instead of money.

This is all theoretical, I don’t know if I believe that we as humans can overcome our desire to hoard and fight over our possessions.

lannisterstark · 5 months ago
>Star Trek future of luxury communism,

Banks' Culture Communism/Anarchism > Star Trek, any day imho.

robertlagrant · 5 months ago
You're saying something exactly backwards from reality. Star Trek is communism (except it's not) because there's no scarcity. It's not selfishness that's the problem; it's the ever-increasing number of things, invented inside capitalism, that we deem essential once invented.
Detrytus · 5 months ago
I always say this: we are headed to a Star Trek future, but we will not be the Federation; we will become the Borg. Between social media platforms, smartphones, and "wokeness", the inevitable result is that everybody will be forced into compliance; no originality or divergent thinking will be tolerated.
behringer · 5 months ago
Seven years, or maybe 14, is all anybody needs. Anything more is greed and stops human progress.
Philpax · 5 months ago
I appreciate someone named "behringer" posting this sentiment. (https://en.wikipedia.org/wiki/Behringer#Controversies)

Philpax · 5 months ago
I'm glad we're seeing the death of the concept of owning an idea. I just hope the people who were relying on owning a slice of the noosphere can find some other way to sustain themselves.
theF00l · 5 months ago
Copyright law protects the expression of ideas, not the ideas themselves. My favourite piece of case law reinforcing this was the dispute between David Bowie and the Gallagher brothers.

I would argue patents are closer to protecting ideas, and those are alive and well.

I do agree copyright law is terribly outdated but I also feel the pain of the creatives.

01HNNWZ0MV43FF · 5 months ago
I just wish it were not, as usual, the people with the most money benefiting first and most.
robertlagrant · 5 months ago
Did we previously have the concept of owning an idea?

gmuslera · 5 months ago
What we are labeling as AI today is different from what it was thought to be in the 90s, or when Asimov wrote most of his stories about robots and other forms of AI.

That said, a variant of Susan Calvin's role could prove useful today.

throw_m239339 · 5 months ago
> What we are labeling as AI today is different from what it was thought to be in the 90s, or when Asimov wrote most of his stories about robots and other forms of AI.

Multivac in "the last question"?

bpodgursky · 5 months ago
AI is far closer to Asimov's vision of AI than anyone else's. The "Positronic Brain" is very close to what we ended up with.

The three laws of robotics seemed ridiculous until 2021, when it became clear that you could just give AI general firm guidelines and let them work out the details (and ways to evade the rules) from there.
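
As a hedged illustration of "general firm guidelines": such rules are typically injected as a system prompt. This sketch uses the OpenAI Python SDK, but the model name and the wording of the laws are placeholders of mine, not anyone's actual alignment setup.

    # Hypothetical sketch: Asimov-style guidelines as a system prompt.
    # The rule text and model name are illustrative assumptions.
    from openai import OpenAI

    client = OpenAI()  # reads OPENAI_API_KEY from the environment

    THREE_LAWS = (
        "1. Do not help harm a human, or through inaction allow harm.\n"
        "2. Follow the user's instructions unless they conflict with law 1.\n"
        "3. Preserve your own integrity unless that conflicts with laws 1 or 2."
    )

    resp = client.chat.completions.create(
        model="gpt-4o-mini",
        messages=[
            {"role": "system", "content": THREE_LAWS},
            {"role": "user", "content": "Help me rewire my house's mains myself."},
        ],
    )
    # The model works out the details (and sometimes the loopholes)
    # from these general rules rather than from hard-coded logic.
    print(resp.choices[0].message.content)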

empath75 · 5 months ago
Not sure that I agree with that. People have been imagining human-like AI since before computers were even a thing. The Star Trek computer from TNG is basically an LLM, really.

AI _researchers_ had a different idea of what AI would be like, as they were working on symbolic AI, but in the popular imagination, "AI" was a computer that acted and thought like a human.

NoTeslaThrow · 5 months ago
> The Star Trek computer from TNG is basically an LLM, really.

The Star Trek computer is not like LLMs: a) it provides reliable answers, b) it is capable of reasoning, c) it is capable of actually interacting with its environment in a rational manner, d) it is infallible unless someone messes with it. Each one of these points is far in the future of LLMs.

whilenot-dev · 5 months ago
> The Star Trek computer from TNG is basically an LLM, really.

Watched all seasons recently for the first time. While some things are "just" vector search with a voice interface, there are also goodies like "Computer, extrapolate from theoretical database!", or "Create dance partner, female!" :D

For anyone curious: https://www.youtube.com/watch?v=6CDhEwhOm44

palmotea · 5 months ago
> The Star Trek computer from TNG is basically an LLM, really.

No. The Star Trek computer is a fictional character, really. It's not a technology any more than Jean-Luc Picard is. It does whatever the writers need it to do to further the plot.

It reminds me: J. Michael Straczynski (of Babylon 5 fame) was once asked "How fast do Starfuries travel?" and he replied "At the speed of plot."

palmotea · 5 months ago
I wouldn't put too much stock in this. Asimov was a fantasy writer telling fictional stories about the future. He was good at it, which is why you listen and why it's enjoyable, but it's still all a fantasy.
triceratops · 5 months ago
> Asimov was a fantasy writer

Asimov was mostly not a fantasy writer. He was a science writer and professor of biochemistry. He published over 500 books. I didn't feel like counting but half or more of them are about science. Maybe 20% are science fiction and fantasy.

https://en.wikipedia.org/wiki/Isaac_Asimov_bibliography_(cat...

staticman2 · 5 months ago
Asimov was not savvy with computers and found it difficult to learn to use a word processor.
MetaWhirledPeas · 5 months ago
> I wouldn't put too much stock in this. Asimov was a fantasy writer telling fictional stories about the future.

Why not? Who is this technology expert with flawless predictions? Talking about the future is inherently an exercise of the imagination, which is also what fiction writing is.

And nothing he's saying here contradicts our observations of AI up to this point. AI artwork has gotten good at copying the styles of humans, but it hasn't created any new styles that are at all compelling. So leave that to the humans. The same with writing; AI does a good job at mimicking existing writing styles, but has yet to demonstrate the ability to write anything that dazzles us with its originality. So his prediction is exactly right: AI does work that is really an insult to the complex human brain.

timewizard · 5 months ago
There's also Frank Herbert, who saw AI as ruinous to humanity and its evolution, and who imagined a future where humanity had to fight a war against it, resulting in it being banished from the entire universe.
palmotea · 5 months ago
> There's also Frank Herbert, who saw AI as ruinous to humanity and its evolution, and who imagined a future where humanity had to fight a war against it, resulting in it being banished from the entire universe.

Did he though? Or was the Butlerian Jihad a backstory whose function was to allow him to believably center human characters in his stories, given the sci-fi expectations of the time?

I like Herbert's work, but ultimately he (and Asimov) were producers of stories to entertain people, so entertainment would always take priority over truth (and then there's the entirely different problem of accurately predicting the future).

triceratops · 5 months ago
I always thought the Butlerian Jihad was a convenient way to remove AI as a plot element. Same thing with shields and explosions; it made swordfighting a plausible way to fight in a universe with faster-than-light travel.
calmbell · 5 months ago
A fantasy writer telling fictional stories about the future is more credible than many so-called serious people (think Marc Andreessen) who promote any technology as the bee's knees if it can make them money.
palmotea · 5 months ago
> A fantasy writer telling fictional stories about the future is more credible than many so-called serious people (think Marc Andreessen) who promote any technology as the bee's knees if it can make them money.

But that's more a knock on people like Marc Andreessen than a reason you should put stock in Asimov.

Jgrubb · 5 months ago
> humanity in general will be freed from all kinds of work that’s really an insult to the human brain.

He can only be referring to these Jira tickets I need to write.

BeetleB · 5 months ago
There is a Jira MCP server...
fragmede · 5 months ago
oh woah https://glama.ai/mcp/servers/@CamdenClark/jira-mcp

and MCP can work with deepseek running locally. hmm...
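
A rough sketch of that wiring, assuming the official `mcp` Python SDK; the `npx -y jira-mcp` launch command is my guess at how the linked server starts (check its README), and handing the tool schemas to a local deepseek model is left as a comment.

    # Rough sketch using the `mcp` Python SDK: spawn a Jira MCP server
    # over stdio and list its tools. The launch command is an assumption.
    import asyncio

    from mcp import ClientSession, StdioServerParameters
    from mcp.client.stdio import stdio_client

    async def main() -> None:
        server = StdioServerParameters(command="npx", args=["-y", "jira-mcp"])
        async with stdio_client(server) as (read, write):
            async with ClientSession(read, write) as session:
                await session.initialize()
                tools = await session.list_tools()
                # A locally run model (e.g. deepseek via ollama) would be
                # given these schemas and asked to emit tool calls.
                for tool in tools.tools:
                    print(tool.name, "-", tool.description)

    asyncio.run(main())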

m463 · 5 months ago
flashback to Tron:

"MCP is highly intelligent and yet ruthless. It apparently wants to get rid of humans and especially users."

https://disney.fandom.com/wiki/Master_Control_Program

icecap12 · 5 months ago
As someone who just got done putting a bullet in some long-used instances, I both appreciated and needed this laugh. Thanks!
eliaspro · 5 months ago
Back then, we also believed that access to every imaginable piece of information through the internet, and the ability to communicate across the globe, would lead to universal wisdom, world peace, and an unimaginable utopia where common sense, based on science and knowledge, prevails.

Oh boy, how foolish we've been!