NitpickLawyer · 3 months ago
> Maybe a useful way to think about what it would be like to coexist in a world that includes intelligences that aren’t human is to consider the fact that we’ve been doing exactly that for as long as we’ve existed, because we live among animals.

Another analogy that I like is about large institutions / corporations. They are, right now, kind of like AIs. As Harari says in one of his books, Peugeot co. is an entity that we could call an AI. It has goals, needs, wants and obviously intelligence, even if it's composed of many thousands of individuals working on small parts of the company. But in aggregate it manifests intelligence to the world: it acts on the world, and it reacts to the world.

I'd take this a step further and say that we might even have ASI already, in the US military-industrial complex. That "machine" is likely the most advanced conglomerate of tech and intelligence (pun intended) that the world has ever created. In aggregate it is likely "smarter" than any single human being in existence, and when it sets a goal it uses hundreds of thousands of human minds plus billions of dollars of sensors, equipment and tech to accomplish that goal.

We survived those kinds of entities; I think we'll be fine with whatever AI turns out to be. And if not, oh well, we had a good run.

pona-a · 3 months ago
Did we survive these entities? By current projections, between 13.9% and 27.6% of all species are likely to be extinct by 2070 [0]. The USA suffers an estimated 200,000 annual deaths associated with lacking health insurance [1]. Thanks to intense lobbying by private prisons, the US incarceration rate is six times that of Canada, despite similar economic development [2].

Sure, the human species is not yet on the brink of extinction, but we are already seeing an unprecedented fall in worldwide birth rates, which shows our social fabric itself is being pulled apart for paperclips. Scale this up to a hypothetical entity equivalent to a hundred copies of the generation's brightest minds with a pathological drive to maximize an arbitrary metric, and it might mean one of two things: either its fixation leads it to hack its own reward mechanism, putting it in a perpetual coma while resisting termination, or it succeeds at doing the same on a planetary scale.

[0] https://onlinelibrary.wiley.com/doi/abs/10.1111/gcb.17125

[1] https://healthjusticemonitor.org/2024/12/28/estimated-us-dea...

[2] https://www.prisonstudies.org/highest-to-lowest/prison_popul...

satvikpendem · 3 months ago
> but we are already seeing an unprecedented fall in worldwide birth rates, which shows our social fabric itself is being pulled apart for paperclips

People choose to have fewer kids as they get richer; it's not about living conditions, as so many people like to claim, otherwise poor people wouldn't be having so many children. Even controlling for high living conditions, as in Scandinavia, people still choose to have fewer kids.

rmah · 3 months ago
We (humans) have not only survived but thrived. 200,000 annual deaths is just 7% of the 3 million that die each year. A larger percentage probably died from lack of access to the best health care 100 or 200 years ago. The fall in birth rates is, IMO, a good thing, as the alternative, overpopulation, seems like a far scarier specter to me. And to bring it back to AIs, an AI "with a pathological drive to maximize an arbitrary metric" is a hypothetical without any basis in reality. While fictional literature -- where I assume you got that concept -- is great for inspiration, it rarely has any predictive power. One probably shouldn't look to it as a guideline.
falcor84 · 3 months ago
> The USA suffers an estimated 200,000 annual deaths associated with lacking health insurance

Isn't this just about the advancement of medical science? I.e., wouldn't they have died from the same causes regardless of medical insurance a few decades ago?

To take it to the extreme, let's say that I invent a new treatment that can extend any dying person's life by a year for the cost of $10M, and let's say that there is a provider willing to insure against that at an exorbitant cost. Then wouldn't almost every single person still dying be dying from lack of insurance?

foxglacier · 3 months ago
You have to be careful with species counts. They could be dominated by obscure minor local variations in insects and fungi that nobody would even notice went missing and which might not actually matter.

Apparently almost all animal species are insects:

https://ourworldindata.org/grapher/number-of-described-speci...

eru · 3 months ago
> The USA suffers an estimated 200,000 annual deaths associated with lacking health insurance [1].

'Associated with' is a pretty loose term.

keeda · 3 months ago
Charles Stross has also made that point about corporations essentially being artificial intelligence entities:

https://www.antipope.org/charlie/blog-static/2018/01/dude-yo...

ayrtondesozzla · 3 months ago
https://omniorthogonal.blogspot.com/2013/02/hostile-ai-youre...

This blog is where I saw the same idea recently; it also links to the post you linked.

TheOtherHobbes · 3 months ago
In the general case, the entire species is an example of ASI.

We're a collective intelligence. Individually we're pretty stupid, even when we're relatively intelligent. But we have created social systems which persist and amplify individual intelligence to raise collective ability.

But this proto-ASI isn't sentient. It's not even particularly sane. It's extremely fragile, with numerous internal conflicts which keep kneecapping its potential. It keeps skirting suicidal ideation.

Right now parts of it are going into reverse.

The difference between where we are now and ASI is that ASI could potentially automate and unify the accumulation of knowledge and intelligence, with more effective persistence, and without the internal conflicts.

It's completely unknown if it would want to keep us around. We probably can't even imagine its thought processes. It would be so far outside our experience we have no way of predicting its abilities and choices.

whyowhy3484939 · 3 months ago
I get the idea, but I'm not quite sold on it. Being intelligent on vast scales is something an individual cannot do, but I'm not sure the "species" is more intelligent than any individual agent. I'm actually a bit more sure of the opposite. It's like LLM agents, where just adding more doesn't improve the quality; it just introduces more room for bullshit.

To allocate capital on vast scales and make decisions about industry etc., sure, that's a level of intelligence quite beyond any one of us, but this feels like cheating the definition of intelligence. It's not the quantity of it that matters, it's the quality. It's like flying, I guess. A large bird and a small bird are both flying, and the big bird is not doing "more" of it. A group of birds is doing something an individual is incapable of (forming a swarm), sure, but it's not an improvement on flying. It's just something else. That something else can be useful, but I don't particularly like applying that same move to "intelligence".

If the species were so goddamn intelligent, it could solve unreasonable IQ tests, and it cannot. If we want to solve something really, really hard, we use Edward Witten, not "the species". That's because there is no "species"; there is only a bunch of individuals, and if they all score badly, the aggregate will score badly as well. We just coast because a bunch of us are extraordinarily clever.

ddq · 3 months ago
Metal Gear Solid 2 makes this point about how "over the past 200 years, a kind of consciousness formed layer by layer in the crucible of the White House" through memetic evolution. The whole conversation was markedly prescient for 2001 but not appreciated at the time.

https://youtu.be/eKl6WjfDqYA

keybored · 3 months ago
I don’t think it was “prescient” for 2001, because it was based on already-existing ideas from the same author who inspired The Matrix.

But the “art” of MGS might be the memetic powerhouse of Hideo Kojima as the inventor of everything. A boss to surpass Big Boss himself.

jumploops · 3 months ago
Corporations, governments, religions -- all human-level intelligences with non-human goals (profit, power, influence).

A professor of mine wrote a paper on this[0](~2012).

[0] https://web.eecs.umich.edu/~kuipers/papers/Kuipers-ci-12.pdf

vonneumannstan · 3 months ago
Unless you have a truly bastardized definition of ASI, there is undoubtedly nothing close to it on earth. No corporation or military or government comes close to what ASI could be capable of.

Any reasonably smart person can identify errors that militaries, governments and corporations make ALL THE TIME. Do you really think a chimp can identify the strategic errors humans are making? Because that is where you would be in comparison to a real ASI. This is also the reason why small startups can and do displace massive, supposedly superhuman ASI corporations literally all the time.

The reality of human congregations is that they are cognitively bound by the handful of smartest people in the group, and communication-bound by email or in-person communication speeds. An ASI has no such limitations.

> We survived those kinds of entities; I think we'll be fine with whatever AI turns out to be. And if not, oh well, we had a good run.

This is dangerously wrong and disgustingly fatalistic.

QuadmasterXLII · 3 months ago
Putting aside questions of what is and isn’t artificial, I think with the usual definitions “Is Microsoft a superintelligence” and “Can Microsoft build a superintelligence” are the same question.
ayrtondesozzla · 3 months ago
> Unless you have a truly bastardized definition of ASI, there is undoubtedly nothing close to it on earth. No corporation or military or government comes close to what ASI could be capable of.

This is glistening with religious fervour. Sure, they could be that powerful. Just like God/Allah/Thor/Superman could, too.

I've no doubt that many rationalist types sincerely care about these issues, and are sincerely worried. At the same time, I think it very likely that some significant number of them are majorly titillated by the biblical pleasure of playing messiah/prophet.

ViscountPenguin · 3 months ago
Do we know that Chimps can't identify some subset of human strategic errors? I'm not convinced that's the case.

The idea of dumber agents supervising smarter ones seems relatively grounded to me, and forms the basis of OpenAIs old superalignment efforts (although I think that team might've been disbanded?)

keybored · 3 months ago
If there was anywhere to get the needs-wants-intelligence take on corporations, it would be this site.

> We survived those kinds of entities; I think we'll be fine

We just have climate change and massive inequality to worry about (we didn’t “survive” them; the fuzzy little corporations with their precious goals-needs-wants are still there).

But ultimately corporations are human inventions, they aren’t an Other that has taken on a life of its own.

skybrian · 3 months ago
If a corporation is like an AI, it’s like one we imagine might exist one day, not a currently-existing AI. LLMs aren’t trying to make money or do anything in particular except predict the next token.

The corporations that run LLMs do charge for API usage, but that’s independent of what the chat is about. It’s happening at a different level in the stack.

overfeed · 3 months ago
AIs minimize perplexity, corporations maximize profits; the rest are implementation details.

If you built an AI that could outsource labor to humans and whose reward function is profit, your result would approximately be a corporation.
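
(For the curious, "minimizing perplexity" just means being trained to assign high probability to whatever token actually comes next. A minimal sketch, assuming you already have the per-token probabilities a model assigned to some text; the perplexity helper below is my own illustration, not any particular library's API:)

    import math

    def perplexity(token_probs):
        # Perplexity = exp of the average negative log-likelihood of the
        # probabilities the model assigned to the observed tokens.
        nll = -sum(math.log(p) for p in token_probs) / len(token_probs)
        return math.exp(nll)

    print(perplexity([0.9, 0.8, 0.95]))  # ~1.13: confident model, low perplexity
    print(perplexity([0.1, 0.2, 0.05]))  # 10.0: surprised model, high perplexity

Swap "probability assigned to the next token" for "profit" as the objective and you get the corporate analogue.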

crystal_revenge · 3 months ago
> We survived those kinds of entities

Might want to wait just a bit longer before confidently making this call.

jay_kyburz · 3 months ago
The Neanderthals didn't survive us. Neither did countless other species. It's perfectly reasonable to think we may not survive a stronger, smarter species.
snthpy · 3 months ago
Thank you. Well expressed. I very much agree with this and have been saying so to friends for years.

The way I look at it is that it's analogous to the way we ourselves function: we're made up of billions of cells which individually just follow simple programs, mediated by local interactions with their neighbours as well as some global state mediated by hormones and signals from the nervous system. However, collectively they produce what we call intelligence (and even consciousness), which we wouldn't ascribe to any of the component cells, and those components aren't aware of the collective organism's goal. Moreover, the overall organism can achieve goals and solve problems beyond the scale of the components.

Similarly, our institutions, be they corporations, governments, etc., are collective intelligences with us as the parts. These institutions have goals and problem-solving capabilities that far surpass our own: no individual could keep all Walmart stores perfectly stocked every day, or design a modern microchip or end-to-end AI platform. These really are the goals of the organisations, and not the individuals. Take, for example, the US government: every four years you swap out the individuals in the executive branch, yet overall US policy remains largely unchanged. Sure, sometimes there is a major shift in direction, but it takes time for that to be translated into shifts in policy and actions, as different parts of the system react at different speeds. The bigger point is that the individuals executing the actions get swapped out over time (at different speeds for different parts, like cells being replaced at different speeds in our bodies) but the organisation continues to pursue its own goal, which only changes slowly over time. Political and financial analysts implicitly acknowledge this when they talk about US or Chinese policy, but this often gets personified into the leader.

I think we really need to do more to acknowledge the existence and reality of organisational goals as independent of the goals of the individuals in those organisations. I was struck by how, in the movie The Corporation, they point out that corporations often take actions that are contrary to the beliefs of the individuals in them, including the CEO, because the CEO is bound by his fiduciary duty to the shareholders. Corporations are legal persons, and if you analyse them as persons they are psychopaths: without any human feelings or regard for human cost or externalities, unless those are enforced through some legal or pricing mechanism. Yet when corporations or organisations transgress, we often hold the individuals accountable. Sometimes the individuals are to blame, but often it's how the game has been set up that is at fault. For example, in a globally heterogeneous tax regime, a multinational corporation will naturally minimise its tax burden; it can't really do otherwise, and the executives of the company have a fiduciary duty to shareholders to carry that out.

Therefore we have to revise and keep evolving the rules of the game in order to stay compatible with human values and survival.

abeppu · 3 months ago
> It hasn’t always been a cakewalk, but we’ve been able to establish a stable position in the ecosystem despite sharing it with all of these different kinds of intelligences.

To me, the things that he avoids mentioning in this understatement are pretty important:

- "stable position" seems to sweep a lot under the rug when one considers the scope of ecosystem destruction and species/biodiversity loss

- whatever "sharing" exists is entirely on our terms, and most of the remaining wild places on the planet are just not suitable for agriculture or industry

- so the range of things that could be considered "stable" and "sharing" must be quite broad, and includes many arrangements that sound pretty bad for many kinds of intelligences, even if they aren't the kind of intelligence that can understand the problems they face.

gregoryl · 3 months ago
NZ is pretty unique; there is quite a lot of farmable land that is protected wilderness. There's a specific trust set up to help landowners convert property: https://qeiinationaltrust.org.nz/

Imperfect, but definitely better than most!

incoming1211 · 3 months ago
> there is quite a lot of farmable land

This is not really true. ~80% of NZ's farmable agricultural land is in the South Island, but ~60% of milk production is done in the North Island.

chubot · 3 months ago
Yeah totally, I have read that the total biomass of cows and dogs dwarfs that of say lions or elephants

Because humans like eating beef, and they like having emotional support from dogs

That seems to be true:

https://ourworldindata.org/wild-mammals-birds-biomass

Livestock make up 62% of the world’s mammal biomass; humans account for 34%; and wild mammals are just 4%

https://wis-wander.weizmann.ac.il/environment/weight-respons...

Wild land mammals weigh less than 10 percent of the combined weight of humans

https://www.pnas.org/doi/10.1073/pnas.2204892120

I mean it is pretty obvious when you think that 10,000 years ago, the Americas had all sorts of large animals, as Africa still does to some extent

And then when, say, the Europeans got here, those animals were mostly gone ... their "biomass" just collapsed

---

Same thing with plants. There were zillions of kinds of plants all over the planet, but corn / wheat / potatoes are now an overwhelming biomass, because humans like to eat them.

Michael Pollan also had a good description of this as our food supply changing from being photosynthesis-based to fossil-fuel-based

Due to the Haber-Bosch process, invented in the early 1900s to create nitrogen fertilizer

Fertilizer is what feeds industrial corn and wheat ... So yeah the entire "metabolism" of the planet has been changed by humans

And those plants live off of a different energy source now

graemep · 3 months ago
That is only mammalian biomass, though.

> And then when say the Europeans got here, those animals were mostly gone ... their "biomass" just collapsed

A lot of species had long been extinct, but the biomass of the remaining ones fell.

Megafauna extinctions always follow (1) the mere arrival of humans and (2) agriculture and growth in human populations.

Places that humans did not reach until later kept a lot more megafauna for longer, e.g. New Zealand, where flourishing species such as moas became extinct within a century or two of human settlement.

Deleted Comment

vessenes · 3 months ago
By stable I think he might mean ‘dominant’.
hnthrow90348765 · 3 months ago
>We may end up with at least one generation of people who are like the Eloi in H.G. Wells’s The Time Machine, in that they are mental weaklings utterly dependent on technologies that they don’t understand and that they could never rebuild from scratch were they to break down

I don't think this can realistically happen unless all of the knowledge that brought us to that point was erased. Humans are also naturally curious and I think it's unlikely that no one tries to figure out how the machines work across an entire population, even if we had to start all the way down from 'what's a bit?' or 'what's a transistor?'.

Even today, you can find youtube channels of people still interested in living a primitive life and learning those survival skills even though our modern society makes it useless for the vast majority of us. They don't do it full-time, of course, but they would have a better shot if they had to.

acbart · 3 months ago
The research that is coming out is very clear: the best students are benefiting, but the bad students are doing worse than if they had never seen an LLM. And the divide is growing, with fewer good students. LLMs are a disaster in education.
arscan · 3 months ago
And for the curious, this current iteration of AI is an amazing teacher and makes a world-class education much more accessible. I think (hope) this will offset whatever intellectual over-dependence others form on this technology.
pixl97 · 3 months ago
>I don't think this can realistically happen

I'd be far more worried about things in the biosciences and around antibiotic resistance. At our current usage rates it wouldn't be hard for some disease to develop that requires high-technology medicine to keep us alive. Add in a little war taking out the few factories that produce it, plus an increase in the number of injuries sustained, and things could quickly go sideways.

A whole lot of our advanced technology is held in one or two places.

tqi · 3 months ago
> Humans are also naturally curious and I think it's unlikely that no one tries to figure out how the machines work across an entire population

Definitely agree with this. I do wonder if at some point new technology will become sufficiently complex that the domain knowledge required to actually understand it end to end is too much for a human lifetime.

msabalau · 3 months ago
Stephenson is using an evocative metaphor and a bit of hyperbole to make a point. To take him as meaning that literally the entire population is like the Eloi is to misread him.
hamburga · 3 months ago
Fun read, thanks for posting!

> If I had time to do it and if I knew more about how AIs work, I’d be putting my energies into building AIs whose sole purpose was to predate upon existing AI models by using every conceivable strategy to feed bogus data into them, interrupt their power supplies, discourage investors, and otherwise interfere with their operations. Not out of malicious intent per se but just from a general belief that everything should have to compete, and that competition within a diverse ecosystem produces a healthier result in the long run than raising a potential superpredator in a hermetically sealed petri dish where its every need is catered to.

This sort of feels like cultivating antibiotic-resistant bacteria by trying to kill off every other kind of bacteria with antibiotics. I don't see this as necessarily a good thing to do.

I think we should be more interested in a kind of mutualist competition: how do we continuously marginalize the most parasitic species of AI?

gwd · 3 months ago
That quote sounded terrifying. It reminds me of The Incredibles, where (spoiler) the villain recruits superheroes to try to defeat his "out of control robot", in order to make it invincible.

I think we want AI to have an "Achilles heel" we can stab if it turns out we need to.

w10-1 · 3 months ago
Funny how he seems to get so close but miss.

It's an anthropocentric miss to worry about AI as another being. It's not really the issue in today's marketplace or drone battlefield. It's the scalability.

It's a hit to see augmentation as amputation, but a miss to not consider the range of systemic knock-on effects.

It's a miss to talk about nuclear weapons without talking about how they structured the UN and the world today, where nuclear-armed countries invade others without consequence.

And none of the prior examples - nuclear weapons, (writing?) etc. - had the potential to form a monopoly over a critical technology, if indeed someone gains enduring superiority as all their investors hope.

I think I'm less scared by the prospect of secret malevolent elites (hobnobbing under Chatham House rules) than by the chilling prospect of oblivious ones.

But most of all I'm grateful for the residue of openness that prompts him to share and us to discuss, notwithstanding slings and arrows like mine. The many worlds where that's not possible today are already more de-humanized than our future with AI.

tuatoru · 3 months ago
The point of Chatham House rules is to encourage free-ranging and unfiltered discussion, without restriction on its dissemination. If people know they are going to be held to their words, they become much less willing to say anything at all.

The "residue" of openness is in fact the entire point of that convention. If you want to be invited to the next such bunfight, just email the organisers and persuade them you have insight.

1. https://en.wikipedia.org/wiki/Chatham_House_Rule

swyx · 3 months ago
> If AIs are all they’re cracked up to be by their most fervent believers, this seems like a possible model for where humans might end up: not just subsisting, but thriving, on byproducts produced and discarded in microscopic quantities as part of the routine operations of infinitely smarter and more powerful AIs.

i think this kind of future is closer to 500 years out than 50 years. the eye mites are self-sufficient. ai's right now rely on immense amounts of human effort to keep them "alive", and they won't be "self-sufficient" in energy and hardware until we not just allow it, but basically work very hard to make it happen.

hweller · 3 months ago
Could be wrong but i think here Neal is saying we are the eye mites subsisting off of AI in the long future, not the other way around.
narrator · 3 months ago
AI does not have a reptilian and mammalian brain underneath its AI brain, as we have underneath our brains. All that wiring is an artifact of our evolution and primitive survival; it is not how pre-training works, nor an essential characteristic of intelligence. This is the source of a lot of misconceptions about AI.

I guess if you put a tabula rasa AI in a world simulator, and you could simulate it as a whole biological organism along with the environment of the earth and sexual reproduction and all that messy stuff, it would evolve that way; but that's not how it evolved at all.

ceejayoz · 3 months ago
We don’t have a reptilian brain, either. It’s a long-outdated concept.

https://www.sciencefocus.com/the-human-body/the-lizard-brain...

https://en.wikipedia.org/wiki/Triune_brain

dsign · 3 months ago
The corollary of your statement is that comparing AI with animals is not very apt, and I agree.

For me, AI in itself is not as worrying as the socioeconomic engines behind it. Left unchecked, those engines will create something far worse than the T-Rex.

Lerc · 3 months ago
I found this a little frustrating. I liked the content of the talk, but I live in New Zealand, I have thoughts and opinions on this topic, and I would like to think I offer a useful perspective. Yet this post was how I found out that there are people in my vicinity talking about these issues in private.

I don't presume that I am important enough that it should be necessary to invite me to discussions with esteemed people, nor that my opinion is important enough that everyone should hear it, but I would at least like to know that such events are happening in my neighbourhood and who I can share ideas with.

This isn't really a criticism of this specific event or even this topic, but of the overall feeling that things in the world are being discussed in places where I, and presumably many other people with valuable input in their individual domains, have no voice. Maybe in this particular event it was just a group of individuals who wanted to learn more about the topic; on the other hand, maybe some of those people will end up drafting policy.

There's a small part of me that's just feeling like I'm not one of the cool kids. The greater and more rational concern isn't so much about me as a person but me as a data point: if I am interested in a field and have a viewpoint I'd like to share, yet remain unaware of opportunities to talk to others, how many others does this happen to? If these are conversations that are important to humanity, are they being discussed in a collection of non-overlapping bubbles?

I think the fact that this was in New Zealand is kind of irrelevant anyway, given how easy it is to communicate globally. It just served to make the title capture my attention.

(I hope, at least, that Simon or Jack attended)

smfjaw · 3 months ago
Don't feel left out; I'm a big data architect in NZ and I didn't even hear of this.
kilpikaarna · 3 months ago
Assuming it's basically the same bunch of bunker billionaires who, a few years back, invited Douglas Rushkoff to give pointers on how to keep their security guards in check after SHTF. They've found their answer; now they just need to figure out how to control the superintelligence...