Anyone questioning the author's intention should read one of his books, "Who Owns the Future?"
It was written some time ago, and I think Sam Altman read it as a handbook on power concentration using AI rather than the human-centric approach it laid out.
Personally I wish Lanier wasn't as right about many things as he is, because I lose a little faith in humanity each time.

I never wanted to respect him, as I always thought he was one of those "too good to be true" people, and was mostly a paper tiger. It turns out that he's the real deal, and has been right about a lot of stuff.
There are lots of parallels between Jaron Lanier and Richard Stallman. Cory Doctorow is another one I would put on that list, as well as SF writer Charles Stross.

They are all pretty good at looking ahead.
I actually agree with his perspective. AI is simply another huge leap in technology that directly affects social order. We only need to look at the effects social media has had on society and amplify them to see what the likely outcomes may be.
This aligns very closely with my own thoughts, which I have written about in great detail. I foresee the societal impacts being exceedingly disturbing long before we ever reach anything like a Singularity.

https://dakara.substack.com/p/ai-and-the-end-to-all-things

But you can also read it from an oblique angle and see the problems outlined there not as things to resolve but as opportunities for personal gain. It's just a matter of perspective.
Regulation of social media is still woefully behind even in cases where we do know there has been a hugely negative impact (Myanmar & Facebook, for example). And there are approximately 5 people who exert massive, unregulated power over the shaping of planetary discourse (social media CEOs). If social media is too big to regulate, AI regulation doesn't have a chance in hell.
At the end of it all, as you said, is social order, which borders on social control. In a sense, our past and current fears about caffeine [1], alcohol, drugs, etc. are the fear that society will change and be out of control. Not saying that those things are healthy, but even if drugs were harmless they would be controlled.

[1] https://www.researchgate.net/publication/289398626_Cultural-...
I feel that if smart people spent more time writing books about how good outcomes could come about, rather than warning about bad outcomes, powerful actors wouldn't have so many dystopian handbooks lying around and might reach for those positive books instead.
It's way easier to write believable dystopian novels because you are deconstructing what already is rather than building something new. The smart ones are the ones capable of writing the utopian novels.
If you claim “these [AI risk] concerns make no sense” then you either lack imagination, are willfully ignorant, or are selling something.
It’s perfectly reasonable to say something like “I think it’s very unlikely because I disagree that [insert causal link in the model] is plausible.”
But to claim that the whole set of concerns are nonsensical is lazy thinking.
You see this a lot (and Robin Hanson and PG have commented on this dynamic recently) where a small group makes a bunch of very specific claims, which get dismissed by the “mainstream” without actually engaging with or understanding them.
So in this case, “[the concerns] make no sense” should be read as “I did not bother to try to understand these claims, but I don’t like the conclusion”, not any particular argument about whether they are logically sound.
That's not what he said and not even what he was asked. He definitely acknowledged the dangers up to and including "human extinction" but wanted to make sure the question was couched in the right context.
From reading this, I don't get the impression that Lanier has any objective reason to believe the world won't be destroyed as a direct result of AI. If he does have a reason, the reporter certainly doesn't devote any space to analysing it, or to explaining why dying from AI-induced insanity is different from being destroyed.
People have spent the last decade modifying their behavior to please algorithms. They've become indistinguishable from bots. Cattle herded into segregated pens.
Being more human is the only possible defense, warts and all.
Yeah, I agree, many of us have become bots or zombies, though still being basic humans and communicating as humans. If you were a techie who wanted to create a new algorithm for us to obey, you had to learn the language of computers to do so. Now this has changed as well. The computers have learned to speak our human language. That means they will also adapt to our behavior, which means the spiral into the insanity Jaron Lanier was talking about could go even faster…
EDIT: So yes, a return to what makes us human, to nature, with an awareness of history and philosophy would be very desirable and quite appropriate in these and future times.
The interview isn't very intellectual, and even rambles, but blame the reporter for that. Lanier's a great thinker.
I'll add my own danger: AI/VR could lead us each to live in our own realities. When you watch the evening news, it'll be specifically written for you, and won't get any of the scrutiny that a broadcast watched by millions would get. Or you go watch the president's State of the Union and get served a custom speech written to appeal to your particular psychological profile. This'll be possible one day, and it gives me Deus Ex vibes.
I read this short story about the singularity years ago; it was written by a scientist from UW-Madison, and although the writing isn't great, it has always stayed with me. Recent developments made me think of it, and the premise is precisely that: the group that develops AGI uses it to control the markets and drives everyone else insane through economic disruption, while staying entirely opaque.

https://www.ssec.wisc.edu/~billh/g/mcnrsts.html
In recent times we've already significantly given up on our humanity. The loss of shared institutions (churches, bars, etc.), remote studying, remote work, e-commerce, personal contact via chat, social media: these all point in the same direction, toward a contactless society where we rarely interact with the physical world and its people.
It stands to reason that AI will only accelerate this further. It will be convenience on steroids. Your AI earpiece isn't going to tell you to throw it into the bin and go for a walk in the forest. It's going to tell you that you need to buy more stuff and it knows exactly what it is that you need. It's also going to feed you non-stop ultimate entertainment, custom generated for you and you only.
In a rare moment of humanity, one of your friends calls you. AI knows all about your friends and their recent events, so it has already summarized talking points. In case you can't be bothered, AI will carry out the actual conversation; it's trained on your voice.
A long running trend of outsourcing humanity to technology.
Good news for philosophers though, they finally might have their moment of actual relevancy. In particular to answer the question: what is the point of anything, really?
> In a rare moment of humanity, one of your friends calls you. AI knows all about your friends and their recent events, so it has already summarized talking points. In case you can't be bothered, AI will carry out the actual conversation; it's trained on your voice
I love this thought. Why not go further, have AI reach out to my friends and ask them about things they (or their AIs) recently told "me" about?
Soon our AIs will carry on our social lives and we'll just lie in the dark with tubes in us. We become the computers, and the computers become us, and the robots have finally won.
> I love this thought. Why not go further, have AI reach out to my friends and ask them about things they (or their AIs) recently told "me" about?
We already have this. Secretaries. Automated Happy Birthday emails.
When I was in a sales engineering role, our sales team had an admin assistant who would send out follow-ups, check-ins, and other correspondence (e.g. a customer made a big public release, so congratulate them, etc.).
This is just another example of robots takin ur jerbs, basically.
Yep, our AI voice equivalents could maintain friendships with each other in which case the "what is the point?" question applies. Or, you might reach out for real but fail to be sure if you're talking to your real friend or not.
Or how about this interesting second-order effect: email. Soon Office will include advanced AI capabilities to write and reply to email.
What is the point of me reading it? If my AI can generate a satisfactory reply, your AI could have generated the response too. No email needed, nor a reply.
We're now in a phase where anybody can generate spectacular art. What is the point of me looking at your generated art? AI can generate personalized art based on what it knows I like.
If AI works, and it's headed that way, you keep ending up at the same question: what is the point of anything?
As a counterforce, there's significant room for a new low-tech hippie Luddite movement.
We gave up our humanity when we came down from the trees, then again when we started cooking our food, then again when we made up languages, started writing, reading, and counting... the list goes on. Whatever "our humanity" is, we don't seem to be the worse for having lost it and made up a new one over and over. Each time might be the last, but so far we've done well.
Remote work brings people together. Instead of being in an office with colleagues, I'm in the same space with my significant other, and what used to be smoke breaks are now sex breaks. The time I used to waste on commute I now use to meet with friends and acquaintances.
I mean I agree for my life but only because I already built up my social circle from these shared spaces. What's someone fresh out of school in a new city supposed to do in 20 years?
"The time I used to waste on commute I now use to meet with friends and acquaintances."
I hope this is true, same for the sex breaks, but I'm skeptical. So on any given work day, you physically meet with friends between 7-9 AM and/or 5-7 PM? Like, every day?
These "friends" of yours, they have nowhere to go? Or do you sneak this into your work day and just randomly disappear for any length of time, which is something most of us can't do?
"In a rare moment of humanity, one of your friends calls you. AI knows all about your friends and their recent events so had already summarized talking points. In case you can't be bothered, AI will carry out the actual conversation, it's trained in your voice."
This is the premise of an episode of Silicon Valley where Gilfoyle trains an AI to act as his chat agent surrogate based on his historical transcripts, and then his colleague creates another one and they end up just conversing with each other.

http://akkartik.name/post/2012-11-21-07-09-03-soc
The problem I see is that someone might send our "primitive" AI into a hostile environment, where it has to compete against other AIs, creating a "take-over" and a "defensive" monster, similar to the Go automaton. While real-world training data might only be dripping in, the speed at which a NN under evolutionary pressure against itself might evolve could go through the roof.