roxolotl · 5 months ago
This is a rare piece on AI which takes a coherent middle of the road viewpoint. Saying both that AI is “normal” and that it will be transformative is a radical statement in today’s discussions about AI.

Looking back on other normal but transformative technologies: steam power, electricity, nuclear physics, the transistor, etc., you do actually see similarly stratified opinions. Most of those were surrounded by an initial burst of enthusiasm and pessimism, and followed a hype cycle.

The reason this piece is compelling is that during the initial hype phase, taking a nuanced middle-of-the-road viewpoint is difficult. Maybe AI really is some "next step", but it is significantly more likely that belief is propped up by science fiction, and it's important to keep expectations in line with history.

cootsnuck · 5 months ago
I wouldn't call it a "middle" road so much as a "nuanced" road (or even a "grounded" road, IMO).

If it's a "middle" road, what is it in the middle of (i.e., what "scale")? And how so?

I'm not trying to be pedantic. I think our tendency to label nuanced, principled positions as "middle" encourages an inherent "hierarchy of ideas", which often leads to applying some sort of...valence to opinions and discourse. And I worry that makes it easier for people to "take sides" on topics, which leads to more superficial, myopic, and repetitive takes that are much more about those "sides" than about the pertinent evidence, facts, reality, whatever.

LiquidSky · 5 months ago
>If it's a "middle" road, what is it in the middle of (i.e., what "scale")?

That's pretty clear. We already have two "sides": AI is the latest useless tech boondoggle that consumes vast quantities of money and energy while not actually doing anything of value vs. AI is the dawn of a new era of human existence that will fundamentally change all aspects of our world (possibly with the additional "and lead to the AGI Singularity").

datadrivenangel · 5 months ago
AI will transform everything, and after that life will continue as normal, so except for the details, it's not a big deal.

Going to be a simultaneously wild and boring ride.

sanderjd · 5 months ago
I think this is my take as well. Like, the web and smart phones and social media have transformed everything ... and also life goes on.
owebmaster · 5 months ago
> AI will transform everything, and after that life will continue as normal

100%. It just happened with the advent of the internet and then smartphones.

t0lo · 5 months ago
Everything changes, but everything stays the same.
schnable · 5 months ago
Add birth control to that list too.

After these technologies, certainly life is "normal" as in "life goes on" but the social impacts are most definitely new and transformative. Fast travel, instantaneous direct and mass communications, control over family formation all have had massive impact on how people live and interact and then transform again.

AStonesThrow · 5 months ago
Humanae vitae by Pope Paul VI, July 25, 1968

https://www.vatican.va/content/paul-vi/en/encyclicals/docume...

Consequences of Artificial Methods

17. Responsible men can ... first consider how easily this course of action could open wide the way for marital infidelity and a general lowering of moral standards. ... [A] man who grows accustomed to the use of contraceptive methods may forget the reverence due to a woman, and, disregarding her physical and emotional equilibrium, reduce her to being a mere instrument for the satisfaction of his own desires, no longer considering her as his partner whom he should surround with care and affection.

lo_zamoyski · 5 months ago
The surest defense against fashionable nonsense is a sound philosophical education and a temperament disinclined to hysteria. Ignorance leaves you wide open to all manner of emotional misadventure. But even when you are in possession of the relevant facts — and a passable grasp of the principles involved — it requires a certain moral maturity to resist or remain untouched by the lure of melodrama and the thrill of believing you live at the edge of transcendence.

(Naturally, the excitement surrounding artificial intelligence has less to do with reality than with commerce. It is a product to be sold, and selling, as ever, relies less on the truth than on sentiment. It’s not new. That’s how it’s always been.)

mdp2021 · 5 months ago
> sound philosophical education and a temperament disinclined to hysteria

Good, sound common sense suffices - the ability to go "dude, that's <whatever it is>". Preferring a clear idea of reality to intoxication... that should not be hard to ask for and obtain.

> it requires a certain moral maturity to

Same thing.


pdfernhout · 5 months ago
From the article:

====

History suggests normal AI may introduce many kinds of systemic risks

While the risks discussed above have the potential to be catastrophic or existential, there is a long list of AI risks that are below this level but which are nonetheless large-scale and systemic, transcending the immediate effects of any particular AI system. These include the systemic entrenchment of bias and discrimination, massive job losses in specific occupations, worsening labor conditions, increasing inequality, concentration of power, erosion of social trust, pollution of the information ecosystem, decline of the free press, democratic backsliding, mass surveillance, and enabling authoritarianism.

If AI is normal technology, these risks become far more important than the catastrophic ones discussed above. That is because these risks arise from people and organizations using AI to advance their own interests, with AI merely serving as an amplifier of existing instabilities in our society.

There is plenty of precedent for these kinds of socio-political disruption in the history of transformative technologies. Notably, the Industrial Revolution led to rapid mass urbanization that was characterized by harsh working conditions, exploitation, and inequality, catalyzing both industrial capitalism and the rise of socialism and Marxism in response.

The shift in focus that we recommend roughly maps onto Kasirzadeh’s distinction between decisive and accumulative x-risk. Decisive x-risk involves “overt AI takeover pathway, characterized by scenarios like uncontrollable superintelligence,” whereas accumulative x-risk refers to “a gradual accumulation of critical AI-induced threats such as severe vulnerabilities and systemic erosion of econopolitical structures.” ... But there are important differences: Kasirzadeh’s account of accumulative risk still relies on threat actors such as cyberattackers to a large extent, whereas our concern is simply about the current path of capitalism. And we think that such risks are unlikely to be existential, but are still extremely serious.

====

That tangentially relates to my sig: "The biggest challenge of the 21st century is the irony of technologies of abundance in the hands of those still thinking in terms of scarcity." Because as our technological capabilities continue to change, it becomes ever more essential to revisit our political and economic assumptions.

As I outline here: https://pdfernhout.net/recognizing-irony-is-a-key-to-transce... "There is a fundamental mismatch between 21st century reality and 20th century security [and economic] thinking. Those "security" agencies [and economic corporations] are using those tools of abundance, cooperation, and sharing mainly from a mindset of scarcity, competition, and secrecy. Given the power of 21st century technology as an amplifier (including as weapons of mass destruction), a scarcity-based approach to using such technology ultimately is just making us all insecure. Such powerful technologies of abundance, designed, organized, and used from a mindset of scarcity could well ironically doom us all whether through military robots, nukes, plagues, propaganda, or whatever else... Or alternatively, as Bucky Fuller and others have suggested, we could use such technologies to build a world that is abundant and secure for all. ... The big problem is that all these new war machines [and economic machines] and the surrounding infrastructure are created with the tools of abundance. The irony is that these tools of abundance are being wielded by people still obsessed with fighting over scarcity. So, the scarcity-based political [and economic] mindset driving the military uses the technologies of abundance to create artificial scarcity. That is a tremendously deep irony that remains so far unappreciated by the mainstream."

A couple of Slashdot comments by me from Tuesday, linking to material I have posted on risks from AI and other advanced tech -- and ways to address those risks -- going back to 1999:

https://slashdot.org/comments.pl?sid=23665937&cid=65308877

https://slashdot.org/comments.pl?sid=23665937&cid=65308923

So, AI just cranks up an existing trend of technology-as-an-amplifier to "11". And as I've written before, since our path out of any singularity may have a lot to do with our moral path going into it, we really need to step up our moral game right now to make a society that works better for everyone in healthy, joyful ways.

kjkjadksj · 5 months ago
The idea of abundance vs. scarcity makes sense at the outset. But I have to wonder where all this alleged abundance is hiding. Sometimes the assumptions feel a bit like "drill baby drill" to me, without figures and projections behind them. One would think that if there were much untapped capacity in resources today, it would get used up. We can look at how agricultural yields improved over the 19th century and see how that led to higher populations but also less land under the plow and fewer hands working that land, versus keeping equal land under the plow and, I don't know, dumping the excess yield someplace where it isn't participating in the market.
xpe · 5 months ago
> The statement “AI is normal technology” is three things: a description of current AI, a prediction about the foreseeable future of AI, and a prescription about how we should treat it.

A question for the author(s), at least one of whom is participating in the discussion (thanks!): Why try to lump together description, prediction, and prescription under the "normal" adjective?

Discussing AI is fraught. My claim: conflating those three under the "normal" label seems likely to backfire and lead to unnecessary confusion. Why not instead keep these separate?

My main objection is this: it locks in a narrative that tries to neatly fuse description, prediction, and prescription. I recoil at this; it feels like an unnecessary coupling. Better to remain fluid and not lock in a narrative. The field is changing so fast, making description by itself very challenging. Predictions should update on new information, including how we frame the problem and our evolving values.

A little bit about my POV in case it gives useful context: I've found the authors (Narayanan and Kapoor) to be quite level-headed and sane w.r.t. AI discussions, unlike many others. I'll mention Gary Marcus as one counterexample; I find it hard to pin Marcus down on the actual form of his arguments or concrete predictions. His pieces often feel like rants without a clear underlying logical backbone (at least in the year or so I've read his work).

randomwalker · 5 months ago
Thanks for the comment! I agree — it's important to remain fluid. We've taken steps to make sure that, predictively speaking, the normal technology worldview is empirically testable. Some of those empirical claims are in this paper and others are coming in follow-ups. We are committed to revising our thinking if it turns out that our framework doesn't generate good predictions and effective prescriptions.

We do try to admit it when we get things wrong. One example is our past view (that we have since repudiated) that worrying about superintelligence distracts from more immediate harms.

mr_toad · 5 months ago
Statistically prediction and description are two sides of the same coin. Even a simple average is both.
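A minimal sketch of that point, using a toy dataset of my own (not from the thread): the sample mean is a descriptive statistic, and the same number is also the best constant predictor under squared error.

```python
data = [2.0, 4.0, 6.0, 8.0]

# Description: the mean summarizes the observed data.
mean = sum(data) / len(data)  # 5.0

def sse(c, xs):
    """Sum of squared errors when every value is 'predicted' as the constant c."""
    return sum((x - c) ** 2 for x in xs)

# Prediction: no other constant forecast beats the mean on squared error.
for guess in (3.0, 4.5, 6.0):
    assert sse(mean, data) <= sse(guess, data)
```

The same summary you would report descriptively is the one you would use as a naive forecast, which is the sense in which the two are "two sides of the same coin".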
xpe · 5 months ago
> Statistically prediction and description are two sides of the same coin. Even a simple average is both.

I'll restate your comment in my language in the hopes of making it clearer. First, the mean is a descriptive statistic. Second, it is possible to build a very simple predictive model using the mean (over observed data).

Ok, but I don't see how this applies to my comment above. Are you disagreeing with some part of my comment?

You're using a metaphor "two sides of the same coin"... what is the coin here? How does it connect with my points above?

pluto_modadic · 5 months ago
Burning the planet for a Ponzi scheme isn't normal.

The healthiest thing for /actual/ AI to develop is for the current addiction to LLMs to die off. For the current bets by OpenAI, Gemini, DeepSeek, etc. to lose steam. Prompts are a distraction, and every single company trying to commodify this is facing an impossible problem in /paying for the electricity/. Currently they're just insisting on building more power plants and more datacenters, which is like trying to do more compute with vacuum relays. They're digging in the wrong place for breakthroughs, and all the current ventures will go bust and be losses for investors. If they start doing computation with photons or something like that, then call me back.

loeber · 5 months ago
Virtually all of this is false. AI is neither burning the planet nor a ponzi scheme. If you're concerned about energy costs, consider for just a second that increased demand for computation directly incentivizes the construction of datacenters, co-located with renewable (read: free) energy sources at scale. ChatGPT isn't going to be powered by diesel.
jzymbaluk · 5 months ago
The only thing standing in the way of nuclear/solar/hydroelectric data centers is local laws and regulations. All the big cloud providers are actively researching this. See Microsoft's interest in acquiring the Three Mile Island nuclear reactor for an example[1]

[1] https://www.technologyreview.com/2024/09/26/1104516/three-mi...

mentalgear · 5 months ago
In reality: renewables are siphoned off to feed new demand rather than replacing dirty energy. Dirty energy stays at the same absolute level; the extra energy that computation needs may well be added as renewables, but the grid as a whole doesn't get cleaner.
bob1029 · 5 months ago
I feel like the quadratic scaling laws are going to win in the end.

We need new techniques and algorithms, not new datacenters.

woah · 5 months ago
If only Chomsky and Lisp received this level of investment, we would have pure philosophical symbolic logic proving the answer to the universe by now
FL33TW00D · 5 months ago
All data centres consume ~1% of global electricity. A very small fraction.
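As a rough back-of-envelope check of what that percentage means in absolute terms (the ~30,000 TWh figure for global annual electricity generation is my assumption, not from the thread):

```python
# Back-of-envelope: what does "~1% of global electricity" amount to?
global_generation_twh = 30_000  # assumed global annual electricity generation, TWh
datacenter_share = 0.01         # the ~1% figure claimed above

datacenter_twh = global_generation_twh * datacenter_share
print(f"Data centres: ~{datacenter_twh:.0f} TWh/year")  # ~300 TWh/year
```

Under those assumptions, that is on the order of the annual electricity consumption of a mid-sized industrial country, which is why the same number can read as "a very small fraction" or "huge" depending on the baseline.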
4887d30omd8 · 5 months ago
That's funny because to me 1% of all electricity seems like a huge number.
bux93 · 5 months ago
"We view AI as a tool that we can and should remain in control of, and we argue that this goal does not require drastic policy interventions"

If you read the EU AI act, you'll see it's not really about AI at all, but about quality assurance of business processes that are scaled. (Look at pharma, where GMP rules about QA apply equally to people pipetting and making single-patient doses as it does to mass production of ibuprofen - those rules are eerily similar to the quality system prescribed by the AI act.)

Will a think piece like this be used to argue that regulation is bad, no matter how beneficial to the citizenry, because the regulation has 'AI' in the name, because the policy impedes someone who shouts 'AI' as a buzzword, or just because it was introduced in the present, in which AI exists? Yes.

randomwalker · 5 months ago
I appreciate the concern, but we have a whole section on policy where we are very concrete about our recommendations, and we explicitly disavow any broadly anti-regulatory argument or agenda.

The "drastic" policy interventions that that sentence refers to are ideas like banning open-source or open-weight AI — those explicitly motivated by perceived superintelligence risks.

evrythingisfine · 5 months ago
The assumption of status quo or equilibrium with technology that is already growing faster than we can keep up with seems irrational to me.

Or, put another way:

https://youtu.be/0oBx7Jg4m-o

lubujackson · 5 months ago
I like these "worldview adjustment" takes. I'm reminded of Jeff Bezos' TED Talk (from 18 years ago). I was curious what someone who started Amazon would choose to highlight in his talk and the topic alone was the most impactful thing for me - the adoption of electricity: https://www.ted.com/talks/jeff_bezos_the_electricity_metapho...

He discussed the structural and cultural changes, the weird and dangerous period when things moved fast and broke badly, and drew the obvious parallels between "electricity is new" and "internet is new" as a core paradigm shift for humanity. AI certainly feels like another similar potential shift.

xpe · 5 months ago
> One important caveat: We explicitly exclude military AI from our analysis, as it involves classified capabilities and unique dynamics that require a deeper analysis, which is beyond the scope of this essay.

Important is an understatement. Recursively self-improving AI with military applications does not mesh with the claim that "Arms races are an old problem".

> Again, our message is that this is not a new problem. The tradeoff between innovation and regulation is a recurring dilemma for the regulatory state.

I take the point, but the above statement is scoped to a _state_, not an international dynamic. The AI arms race is international in nature. There are relatively few examples of similar international agreements. The classic examples are bans on chemical weapons and genetic engineering.

kjkjadksj · 5 months ago
The US military probably already has Mendicant Bias in alpha build.
sandspar · 5 months ago
Interesting ideas but terribly overwritten.

"The normal technology frame is about the relationship between technology and society. It rejects technological determinism, especially the notion of AI itself as an agent in determining its future. It is guided by lessons from past technological revolutions, such as the slow and uncertain nature of technology adoption and diffusion. It also emphasizes continuity between the past and the future trajectory of AI in terms of societal impact and the role of institutions in shaping this trajectory."

Why write it so overblown like this? You can say the same thing much more cleanly like, "AI doesn’t shape the future on its own. Society and institutions do, slowly, as with past technologies."

bilsbie · 5 months ago
AI having the same impact as the internet. Changes everything and changes nothing at the same time.
tempodox · 5 months ago
I wouldn't call putting everything into overdrive “nothing”.
bilsbie · 5 months ago
We still pay bills and have mortgages. Still drink coffee. Same dogs and cats. Same roof technology.