Everybody looks at AI 2027 and nobody looks at Gradual Disempowerment. They came out at around the same time, and both portend tremendous volumes of doom backed by a litany of citations, but the preexisting skynet/paperclipper memes mean that nobody talks at length about the other scenario: a few hundred human owners using AI servants to eliminate everybody else through an incremental slippery slope of good business sense.
I think this comes close to the probable near-future outcome, but is missing an important element.
Over the last several decades we've seen an enormous transfer and consolidation of wealth from many people into a much smaller number of people. I think AI is going to dramatically accelerate this.
Fully open source, locally-hosted AI models are currently lagging far behind their commercial counterparts (in adoption if not capability). The web had a free-for-all period run by free software where it was able to gestate and gain traction before the walled garden era took over. Application development likewise had a long period of time where independent developers were able to build and distribute software (for free or otherwise) before the app store model took over; now, a significant portion of software development needs to be authorized by a third party before it can be distributed to potential users.
We did not get that same gestational period for AI. It went from theory to commercial product astonishingly fast. There are daily threads on HN now comparing how much everyone is "happy" to pay for their favorite commercial AI each month.
Developers are paying companies for the privilege of writing software.
The developers that, for whatever reason, refuse to get on board this train are going to be quickly outcompeted by the rest. Maybe they have been already.
It's likely that by about 2028 or thereabouts, we will see a landscape where just a few commercial entities will have captured the process of software development. If you want to make money making software, you will have to pay one of them to do it.
The people in those commercial entities might end up as the outright property (with human rights refactored/disrupted), or the indentured servants, of just the members of the alignment team of whichever AI company hits recursive self-improvement first, a handful of people who would live like new emperors.
Armies of killer robots are real now. Ukraine makes about 4 million military drones a year. Russia makes slightly fewer. Control is becoming more automated, because there are not enough people to control all those drones. Drones have to operate despite jamming, so they need to be fully autonomous on the battlefield.
And yes, there are AI killbot startups.[1] "At SECL Group, our team has vast experience in developing various drone systems, including those related to drone swarms. If you are interested in implementing AI-based target recognition capabilities in your UAVs, feel free to get in touch with us to discuss the details."
Gary Marcus draws a dichotomy between extinction and bad actors. I feel a third possibility is much more likely: a world of extreme specialization where AI reigns supreme, and where humans are mainly button-pushers. Those at the top who still enjoy life will be the techies who are good at, and enjoy, making things with AI, which will be approximately 1% of the population. The other 99% will be administrators, button-pushers, and those on UBI, facing a fairly meaningless existence without much dignity.
Because once 99% of the population lose the opportunity to learn a skill they are good at, and (importantly!) for which they have some aptitude over others, and to apply it, they will face an existence with very little meaning. Like it or not, people want to be distinguishable from others, and if everyone can do everything with AI, that disappears.
Techies don't like to admit it because they are at the top, but through AI, they are creating their own bubble world with their little toys that will act through the immense power of AI as an oligarchy that rules the listless and depressed masses.
This is such a utilitarian point of view. Not everyone is defined by their work, and not everyone cares how distinguishable they are from others, or even what others think of them. I would even argue that very few oligarchs/billionaires belong to the group of people who truly enjoy life.
I don't think Elon has any malice towards his own species. I tend to agree with the first comment on the article, re: p(dystopia), where almost all LLM services are currently in play, mostly due to the actions of third-party clients.
Why not? I think a lot of people with traits like Musk would make us completely subservient without any freedom if he had the technology and opportunity to do so.
The article is talking about actual, literal, species-level extinction. Like, no humans left. That's the sort of "malice towards the species" meant here.
> a lot of people with traits like Musk would make us completely subservient without any freedom
That would be the "dystopia" the person you replied to mentioned.
Total death-cult misanthropy is a surprisingly common feature of repeat divorcees of any sex. Broken hearts, the really broken hearts, do that.
Unfortunately, recovering from that state and returning to normality (and gaining or regaining the ability to face oneself seriously and make amends when necessary) usually requires having multiple supportive friends, something Elon hasn't been in a position to get for years.
Given he's engaged in mass murder that's severely destabilizing a significant chunk of the planet (through his unconstitutional culling of USAID among other things), it doesn't seem unreasonable to attribute his behavior to malice.
He’s literally called for jailing, and worse, of people with progressive ideas. Either he’s a junkie who is so high he doesn’t know what he’s saying, or he’s an evil piece of white shit. (I consider “edgelord doing it for the lulz” to be the second category.)
I'm sorry, but the whole Grok “mechahitler” thing was a feature, not a bug. The timing between that and xAI announcing its AI services for governments, and subsequently being awarded a government contract, was not coincidental. Elon was demonstrating that Grok 4 comes with a state propaganda switch governments can flip whenever they want.
Seems like all the p(doom) is coming from Elon himself, and not the AI. An "empowered" but unintelligent "stochastic parrot", which seems to be his view of LLMs, is more likely to hinder than help with one's plan for world domination and annihilation.
He was lucky, and that's all, with Facebook and has coasted on its momentum ever since.
Every single "play" after that has failed.
Not only have they failed, he's also fallen into the dictator's trap: wealthy men who surround themselves with multiple layers of overlapping and competing staff and hierarchy, which completely insulate and isolate them from reality, start or fund outlandish, impossible vanity projects.
I don't know why these guys don't take the money and run and spend the rest of their lives scuba diving and fixing vintage sports cars in their garage-- but I'm not a sociopath.
I assume the simpler explanation is that many billionaires' and dictators' dream hobbies turn out to be starting or funding outlandish, impossible vanity projects while completely insulated from consequence and isolated from reality.
Yeah, scuba is cool, but I've got a handful of pet engineering projects I'd love to tinker on if I won the lottery (while farming off any drudge work that comes up). I assume all the businesspeople and politicians I also observe to fall into this trap just do the same with their particular area of expertise.
https://gradual-disempowerment.ai/
[1] https://seclgroup.com/adding-ai-and-ml-to-military-drones-fo...
A rather contemptible existence, in my opinion.
Instead it is about how Elon Musk and his AI will somehow end the world... Disappointed.
Musk is less incompetent.