Or, put another way:
Part II of the paper describes one vision of what a world with advanced AI might look like, and it is quite different from the current world.
We also say in the introduction:
"The world we describe in Part II is one in which AI is far more advanced than it is today. We are not claiming that AI progress—or human progress—will stop at that point. What comes after it? We do not know. Consider this analogy: At the dawn of the first Industrial Revolution, it would have been useful to try to think about what an industrial world would look like and how to prepare for it, but it would have been futile to try to predict electricity or computers. Our exercise here is similar. Since we reject “fast takeoff” scenarios, we do not see it as necessary or useful to envision a world further ahead than we have attempted to. If and when the scenario we describe in Part II materializes, we will be able to better anticipate and prepare for whatever comes next."
If you read the EU AI Act, you'll see it's not really about AI at all, but about quality assurance of business processes that operate at scale. (Look at pharma, where GMP rules about QA apply equally to people pipetting single-patient doses as they do to mass production of ibuprofen - those rules are eerily similar to the quality system prescribed by the AI Act.)
Will a think piece like this be used to argue that regulation is bad, no matter how beneficial to the citizenry, because the regulation has 'AI' in the name, because the policy impedes someone who shouts 'AI' as a buzzword, or just because it was introduced in the present era in which AI exists? Yes.
The "drastic" policy interventions that that sentence refers to are ideas like banning open-source or open-weight AI — those explicitly motivated by perceived superintelligence risks.
A question for the author(s), at least one of whom is participating in the discussion (thanks!): Why try to lump together description, prediction, and prescription under the "normal" adjective?
Discussing AI is fraught. My claim: conflating those three under the "normal" label is likely to backfire and lead to unnecessary confusion. Why not keep them separate instead?
My main objection is this: it locks in a narrative that tries to neatly fuse description, prediction, and prescription. I recoil at that; it feels like an unnecessary coupling. Better to stay fluid and not commit to a single narrative. The field is changing so fast that description alone is very challenging, and predictions should update on new information, including how we frame the problem and our evolving values.
A little bit about my POV in case it gives useful context: I've found the authors (Narayanan and Kapoor) to be quite level-headed and sane in AI discussions, unlike many others. Gary Marcus is one example of the latter; I find it hard to pin him down on the actual form of his arguments or on concrete predictions. His pieces often read like rants without a clear underlying logical backbone (at least over the year or so I've been reading his work).
We do try to admit it when we get things wrong. One example is our past view (that we have since repudiated) that worrying about superintelligence distracts from more immediate harms.