The AFR piece that underlies this article [1] [2] has more detail on Ng's argument:
> [Ng] said that the “bad idea that AI could make us go extinct” was merging with the “bad idea that a good way to make AI safer is to impose burdensome licensing requirements” on the AI industry.
> “There’s a standard regulatory capture playbook that has played out in other industries, and I would hate to see that executed successfully in AI.”
> “Just to be clear, AI has caused harm. Self-driving cars have killed people. In 2010, an automated trading algorithm crashed the stock market. Regulation has a role. But just because regulation could be helpful doesn’t mean we want bad regulation.”
[1]: https://www.afr.com/technology/google-brain-founder-says-big...
[2]: https://web.archive.org/web/20231030062420/https://www.afr.c...
> “There’s a standard regulatory capture playbook that has played out in other industries…”
But imagine all the money bigco can make by crippling small startups from innovating and competing with them! It's for your own safety. Move along citizen.
The answer is c) sell that energy and use your resulting funds to deeply root yourself in all other systems and prevent or destroy alternative forms of energy production, thus achieving total market dominance
This non-hypothetical got us global warming already
This analogy of course is close to nuclear energy. I think most people would say that regulation is still broadly aligned with the public interest there, even though the forces of regulatory capture are in play.
I read that book. No, you deny your gift to the world and become a recluse while the world slowly spins apart.
Technically: a solar panel is just such a machine. You'll have to wait a long, long time, but the degradation is slow enough that you can probably use a panel for several human lifetimes at ever-decreasing output. You will probably find it more economical to replace the panel at some point because of the amount of space it occupies and the fact that newer generations of solar panels will do that much better in the same space. But there isn't any hard technical reason why you should discard one after 10, 30 or 100 years. Of course 'infinite' would require the panel to be 'infinitely durable', and at some point it will likely suffer mechanical damage. But that's not a feature of the panel itself.
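To put rough numbers on that (assuming, purely for illustration, a constant ~0.5% per year degradation rate, which is my assumption rather than any particular panel's spec):

```python
# Rough sketch: fraction of a panel's rated output remaining after N years,
# assuming a constant fractional degradation rate. The 0.5%/year figure is an
# illustrative assumption, not a datasheet value.

def remaining_output(years: float, annual_degradation: float = 0.005) -> float:
    """Fraction of original rated output left after `years`."""
    return (1 - annual_degradation) ** years

for years in (10, 30, 100, 300):
    print(f"after {years:>3} years: {remaining_output(years):.0%} of rated output")
# after  10 years: 95% of rated output
# after  30 years: 86% of rated output
# after 100 years: 61% of rated output
# after 300 years: 22% of rated output
```

So even on century timescales the panel still produces something; the practical limit is space and mechanical damage, not the physics of degradation.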
And I strongly agree with pointing out that a low-hanging fruit for "good" regulation is strict, clear attribution laws requiring any AI-generated content to be labeled with its source. That's a sooner-the-better, easy-win no-brainer.
Why would we do this? And how would this conceivably even be enforced? I can't see this being useful or even well-defined past cartoonishly simple special cases of generation like "artist signatures for modalities where pixels are created."
Requiring attribution categorically across the vast domain of generative AI...can you please elaborate?
Where is the line drawn? My phone uses math to post-process images. Do those need to be labeled? What about filters placed on photos that do the same thing? What about changing the hue of a color with photoshop to make it pop?
Please define "AI generated content" in a clear and legally enforceable manner. Because I suspect you don't understand basic US constitutional law including the vagueness doctrine and limits on compelled speech.
There are two dominant narratives I see when AI X-Risk stuff is brought up:
- it's actually to get regulatory capture
- it's hubris, they're trying to seem more important and powerful than they are
Both of these explanations strike me as too clever by half. I think the parsimonious explanation is that people are actually concerned about the dangers of AI. Maybe they're wrong, but I don't think this kind of incredulous conspiratorial reaction is a useful thing to engage in.
When in doubt take people at their word. Maybe the CEOs of these companies have some sneaky 5D chess plan, but many, many AI researchers (such as Yoshua Bengio and Geoffrey Hinton) who don't stand to gain monetarily have expressed these same concerns. They're worth taking seriously.
> Both of these explanations strike me as too clever by half. I think the parsimonious explanation is that people are actually concerned about the dangers of AI
This rings hollow when these companies don't practice what they preach and set an example themselves - they don't halt research or cut the funding for developing their own AIs in-house.
If you believe that there’s X-Risk of AI research, there’s no reason to think it wouldn’t come from your own firm’s labs developing these AIs too.
Continuing development while telling others they need to pause seems to make “I want you to be paused while I blaze ahead” far more parsimonious than “these companies are actually scared about humanity’s future” - they won’t put their money where their mouth is to prove it.
It's a race dynamic. Can you truly imagine any one of them stopping without the others agreeing? How would they tell that the others really have stopped? I think they do believe that what they're doing is dangerous, but they would rather be the ones to build it than let somebody else get there first, because who knows what they'll do.
It's all a matter of incentives and people can easily act recklessly given the right ones. They keep going because they just can't stop.
This is not mutually exclusive with it being either hubris or regulatory capture. People see the world colored by their own interests, emotions, background, and values. It's quite possible that the person making the statement sincerely believes there's a danger to humanity, but it's actually a danger to their monopoly that their self-image will not let them label as such.
It's never regulatory capture when you're the one doing it. It's always "The public needs to be protected from the consequences that will happen if any non-expert could hang up a shingle." Oftentimes the dangers are real, but the incumbent is unable to also perceive the benefits of other people competing with them (if they could, competition wouldn't be dangerous, they'd just implement those benefits themselves).
When I see comments like these, it's clear that the commenter is probably an individual contributor who has never seen how upper management or politics actually works. Regulatory capture is probably one of the biggest wealth-generating techniques out there. It's very real.
If some rando anonymous posters could think it up, it doesn't require a CEO to play 5D chess to think it up. And many of us have witnessed these techniques being used by companies directly. Microsoft was famous for doing this sort of thing, and in a much more roundabout fashion, for instance with the SCO debacle.
It's standard business practice, not conspiracy 5D chess or whatever moniker you want to give it to be dismissive.
>it's hubris, they're trying to seem more important and powerful than they are
>Both of these explanations strike me as too clever by half
This is a good point. You have to be clever to hop on a soapbox and make a ruckus about doomsday to get attention. Only savvy actors playing 5D chess can aptly deploy the nuanced and difficult pattern of “make grandiose claims for clicks”
You can go back 30 years and read passages from textbooks about how dangerous an underspecified AI could be, but those were problems for the future. I'm sure there's some degree of x-risk promotion in the industry serving the purpose of hyping up businesses, but it's naive to act like this is a new or fictitious concern. We're just hearing more of it because capabilities are rapidly increasing.
1. While their contributions to AI tech are unmistakable, what do Bengio and Hinton really know about the human dangers of AI? Being an expert in one thing does not make one an expert in everything. It is unlikely that they understand the human dangers any more than any other random kook on Reddit. Why take them more seriously than the other kooks?
2. Hinton's big concern is that AI will make it easy to steal identities. Even if we assume that is true, it is already not that hard to steal identities. It is a danger that already exists even without AI and, realistically, already needs to be addressed. What's the takeaway if we are to take the message seriously? That AI will make the problems we already have more noticeable, and because of that we will finally have to get off our lazy asses and do something about those problems that we've tried to sweep under the rug? That seems like a good thing.
Getting the government to regulate your competition isn't 5d chess, it's barely even chess. If you study the birth of any technology in the last 200 years -- rail, electricity, radio, integrated circuits, etc -- you will see the same playbook put to this use. Any good tech executive must be aware of this history.
None of this requires every doomer to be disingenuous or even ill-informed, or even for specific leaders to be lying about their beliefs. It's just that those beliefs that benefit highly capitalized companies get amplified, and the alternatives not so much.
> many, many AI researchers (such as Yoshua Bengio and Geoffrey Hinton) who don't stand to gain monetarily have expressed these same concerns
I respect these researchers, but I believe they are doing it to build their own brand, whether consciously or subconsciously. There's no doubt it's working. I'm not in the sub-field, but I have been following neural nets for a long time, and I hadn't heard of either Bengio or Hinton before they started talking to the press about this.
As someone who has been following deep learning for quite some time as well, Bengio and Hinton would be some of the first people I think of in this field. Just search Google for "godfathers of ai" if you don't believe me.
It's a reference to the more apt name for Occam's razor. I happen to disagree with GP because governments always want to expand their power. When they do something that results in what they want, it's actually the parsimonious explanation to say that they did it because they wanted that result.
It's unfortunate that "AI" is still framed and discussed as some type of highly autonomous system that's separate from us.
Bad acting humans with AI systems are the threat, not the AI systems themselves. The discussion is still SO focused on the AI systems, not the actors and how we as societies align on what AI uses are okay and which ones aren't.
> Bad acting humans with AI systems are the threat, not the AI systems themselves.
I wish more people grasped this extremely important point. AI is a tool. There will be humans who misuse any tool. That doesn't mean we blame the tool. The problem to be solved here is not how to control AI, but how to minimize the damage that bad acting humans can do.
Right now, the "bad acting human" is, for example, Sam Altman, who frequently cries "Wolf!" about AI. He is trying to eliminate the competition, manipulate public opinion, and present himself as a good Samaritan. He has been so successful in this endeavor - even without AI - that you now must report to the US government on how you created and tested your model.
This is true, but it skirts around a bit of the black box problem. It's hard to put guardrails on an amoral tool whose failure modes are hard to fully understand. And it doesn't even require "bad acting humans" to do damage; it can just be good-intending-but-naïve humans.
A big problem with discourse on AI is people talking past each other because they're not being clear enough on their definitions.
An AI doomer isn't talking about any current system, but hypothetical future ones which can do planning and have autonomous feedback loops. These are best thought of as agents rather than tools.
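For concreteness, the tool/agent distinction is roughly the difference between a single call and a self-driving loop. This is a hypothetical sketch with stand-in functions, not a description of any real system:

```python
# Hypothetical sketch contrasting a "tool" with an "agent". Every function here
# is a stand-in (no real model, no real environment); the point is who owns the
# control flow.

def model(prompt: str) -> str:
    """Stand-in for an LLM call."""
    return f"next action for: {prompt}"

def act(action: str) -> str:
    """Stand-in for executing an action against the outside world."""
    return f"result of: {action}"

def run_tool(prompt: str) -> str:
    # Tool use: one human-initiated call, one result, no feedback loop.
    return model(prompt)

def run_agent(goal: str, max_steps: int = 5) -> list:
    # Agent use: the system plans, acts, observes the result, and re-plans
    # on its own, step after step, without a human in the loop.
    history = []
    for _ in range(max_steps):
        action = model(f"goal={goal}, history={history}")  # model picks the next step
        result = act(action)                               # the step affects the world
        history.append((action, result))
    return history

print(run_tool("summarize this document"))
print(len(run_agent("book a flight")), "autonomous steps taken")
```

The doom arguments are about the second pattern scaled up, not about the first.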
If people understood this, they would have to live with the unsatisfying reality that not all violators can be punished. Framing it this way instead, and painting the technology itself as potentially criminal, lets them get revenge on corporations, which is what mostly artist types want.
If you apply this thinking to nuclear weapons it becomes nonsensical, which tells us that a tool that can only be oriented to do harm will only be used to do harm. The question then is whether LLMs, or AI more broadly, will even potentially help the general public, and there is no reason to think so. The goal of these tools is to be able to continue running the economy while employing far fewer people. These tools are oriented by their very nature to replace human labor, which in the context of our economic system has a direct and unbreakable relationship to a reduction in the well-being of the humans it replaces.
But usually there’s a one-way flow of intent from the human to the tool. With a lot of AI the feedback loop gets closed, and people are using it to help them make decisions, and might be taken far from the good outcome they were seeking.
You can already see this on today’s internet. I’m sure the pizzagate people genuinely believed they were doing a good thing.
This isn’t the same as an amoral tool like a knife, where a human decides between cutting vegetables or stabbing people.
AI “systems” are provided some level of agency by their very nature. That is, for example, you cannot predict the outcomes of certain learning models.
We necessarily provide agency to AI because that’s the whole point! As we develop more advanced AI, it will have more agency. It is an extension of the just world fallacy, IMO, to say that AI is “just a tool” - we lend agency and allow the tool to train on real world (flawed) data.
Hallucinations are a great example of this in an LLM. We want the machine to have agency to cite its sources… but we also create potential for absolute nonsense citations, which can be harmful in and of themselves, though the human on the using side may have perfectly positive intent.
AI can become a highly autonomous system that's separate from us. Current technological limits just make it a hard sell for now.
LLMs, viewed as general purpose simulators/predictors, don't necessarily have any agency or goals by themselves. There is nothing to say that they cannot be made to simulate an agent with its own goals, by humans - and possibly either by malice or by mistake. Model capabilities are the limiting factor right now, but with the rise of more capable uncensored models, it isn't difficult to imagine a model attaining some degree of autonomy, or at least doing a lot of damage before imploding in on itself.
> Bad acting humans with AI systems are the threat, not the AI systems themselves.
It's worth noting this is exactly the same argument used by pro-gun advocates as it pertains to gun rights. It's identical to: guns don't harm/kill people, people harm/kill people (the gun isn't doing anything until the bad actor aims and pulls the trigger; bad acting humans with guns are the real problem; etc).
It isn't an effective argument and is very widely mocked by the political left. I doubt it will work to shield the AI sector from aggressive regulation.
It is an effective argument though, and the left is widely mocked by the right for simultaneously believing that only government should have the necessary tools for violence, and also ACAB.
Assuming ML systems are dangerous and powerful, would you rather they be restricted to a small group of power-holders who will definitely use them to your detriment/to control you (they already do) or democratize that power and take a chance that someone may use them against you?
This argument pertains to every tool: guns, kitchen knives, cars, the anarchist cookbook, etc. You aren't against the argument. You're against how it's used. (Hmm...)
The disturbing thing to consider is that it might be bad acting AI with human systems. I can easily see a situation where a bad acting algorithm alone wouldn't have nearly so negative an effect, if it weren't tuned precisely and persuasively to get more humans to do the work of increasing the global suffering of others for temporary individual gain.
To be clear, I'm not sure LLMs and their near term derivatives are so incredibly clever, but I have confidence that many humans have a propensity for easily manipulated irrational destructive stupidity, if the algorithm feeds them what they want to hear.
Some dogs get bad reputations, but humans are an integral part of the picture. For example, German Shepherds are objectively dangerous, but have a good reputation because they are trained and cared for by responsible people, such as the police.
Most of the things people are worried about AI doing are the things corporations are already allowed to do - snoop on everybody, influence governments, oppress workers, lie. AI just makes some of that cheaper.
Turning something that we're already able to do into something we're able to do very easily can be extremely significant. It's the difference between "public records" and "all public records about you being instantly viewable online." It's also one of the subjects of the excellent sci fi novel "A Deepness in the Sky," which is still great despite making some likely bad guesses about AI.
And just like in politics, the strategy is to redefine the thing you want to achieve - in this case total control of a technology - as something else that's bad, so that people are distracted from what you actually want, which is exactly the thing you've relabeled.
Politicians that point fingers at other politicians being corrupt or incompetent while they themselves are exactly that use the same strategy.
Power and manipulation. Nothing new under the sun. What's new, though, is that we can see in plain sight how corporations control politics. Literally, this can be documented with git-commit-history accuracy: thousands upon thousands of people repeating the exact same phrases defending OpenAI and the "revolutionary" product, fear mongering, political lobbying, manufactured threats, and of course a cure that only they can provide, and so on. I would not let people who use such tactics near an email account, let alone AI policy making.
Nukes are not cheap. It is cheaper to firebomb. I would love it if the reason nukes are not used were empathy or humanitarian concern.
It is strictly about money, optics, psychology, and practicality.
You don't want your troops to have to deal with the results of a nuked area. You want to use the psychological terror to dissuade someone from invading you, while you are invading them or others. See Russia's take.
Or you are a regime and want to stay in power. Having them keeps you in power; using them, or crossing the line of suggesting you'll use them, will cause international retaliation and your removal. (See Iraq.)
The ironic thing is that many individuals now clamoring for more regulation have long claimed to be free-market libertarians who think regulation is "always" bad.
Evidently they think regulation is bad only when it puts their profits at risk. As I wrote elsewhere, the tech glitterati asking for regulation of AI remind me of the very important Fortune 500 CEO Mr. Burroughs in the movie "Class:"
Mr. Burroughs: "Government control, Jonathan, is anathema to the free-enterprise system. Any intelligent person knows you cannot interfere with the laws of supply and demand."
Jonathan: "I see your point, sir. That's the reason why I'm not for tariffs."
Mr. Burroughs: "Right. No, wrong! You gotta have tariffs, son. How you gonna compete with the damn foreigners? Gotta have tariffs."
---
Source: https://www.youtube.com/watch?v=nM0h6QXTpHQ
Absolutely. Those folks arguing for AI regulation aren't arguing for safety – they're asking the government to build a moat around the market segment propping up their VC-funded scams.
Their motivations may be selfish, but that doesn't mean that regulation of AI is wrong. I'd prefer there be a few heavily-regulated and/or publicly-owned bodies in the public eye that can use and develop these technologies, rather than literally anyone with a powerful enough computer. Yeah, it's anti-competitive, but competition isn't always a good thing.
I feel like Andrew Ng has more name recognition than Google Brain itself.
Also Business Insider isn't great, the original Australian Financial Review article has a lot more substance: https://archive.ph/yidIa
I've never been convinced by the arguments of OpenAI/Anthropic and the like on the existential risks of AI. Maybe I'm jaded by the ridiculousness of "thought experiments" like Roko's basilisk and the lines of reasoning followed by EA adherents, where the risks are comically infinite and alignment feels a lot more like hermeneutics.
I am probably just a bit less cynical than Ng is here on the motivations[^1]. But regardless of whether or not the AGI doomsday claim is justification for a moat, Ng is right that it's taking a lot of the oxygen out of the room for more concrete discussion of the legitimate harms of generative AI -- like silently proliferating social biases present in the training data, or making accountability a legal and social nightmare.
[^1]: I don't doubt, for instance, that there's in part some legitimate paranoia -- Sam Altman is a known doomsday prepper.
> Ng is right that it's taking a lot of the oxygen out of the room for more concrete discussion of the legitimate harms of generative AI -- like silently proliferating social biases present in the training data, or making accountability a legal and social nightmare.
And this is the important bit. All these people like Altman and Musk who go on rambling about the existential risk of AI distract from the real AI-harm discussions we should be having, and thereby directly harm people.
I'm always unsure what people like you actually believe regarding existential AI risk.
Do you think it's just impossible to make something intelligent that runs in a computer? That intelligence will automatically mean it will share our values? That it's not possible to get anything smarter than a smart human?
Or do you simply believe that's a very long way away (centuries) and there's no point in thinking about it yet?
Why would Roko's basilisk play a big part in your reasoning?
In my experience, it's basically never been a part of serious discussions in EA/LW/AI Safety. Mostly, it comes up when people are joking around or when speaking to critics who raise it themselves.
Even in the original post, the possibility of this argument was actually more of a sidenote on the way to the main point (admittedly, his main point involved an equally wacky thought experiment!).
I didn't intend to portray it as a large part of my reasoning. It's not really any part of my reasoning at all, except to illustrate the sort of absurd argumentation that led to the regulations Ng is criticizing[^1]. The proponents of these lines of reasoning basically _begin_ with an all-mighty AI and derive harms, then step back and debate/design methods for preventing the all-mighty AI. From a strict utilitarian framework this works, because infinite harm times non-zero probability is still infinite. From a practical standpoint this is a waste of time and, as Ng argues, is likely to stifle innovations with a far greater chance of benefiting society than causing AI doomsday.
The absurdity of this line of reasoning also supports the cynical interpretation that this is all just moat building, with the true believers propped up as useful idiots. I'm no Gary Marcus, but prepping for AGI doomsday seems a bit premature.
>In my experience, it's basically never been a part of serious discussions in EA/LW/AI Safety. Mostly, it comes up when people are joking around or when speaking to critics who raise it themselves.
>Even in the original post, the possibility of this argument was actually more of a sidenote on the way to the main point (admittedly, his main point involved an equally wacky thought experiment!).
This is fair, it was a cheap shot. While I will note that EY seems to take the possibility seriously, I admittedly have no idea how seriously people take EY these days. But, for some reason, 80,000 Hours lists AI as the #1 threat to humanity, so it reads to me more like flat earthers vs geocentrists.
[^1]: As in, while I understand that Roko is sincerely shitposting about something else, and merely coming across the repugnant conclusion that an AGI could be motivated to accelerate its own development by retroactive punishment, the absurd part is in concluding that AGI is a credible threat. Everything else just adds to that absurdity.
Amen. This whole scare tactic thing is ridiculous. Just make the public scared of it so you can rope it in yourself. Then you've got people like my mom commenting that "AI scares her because Musk and (some other corporate rep) said that AI is very dangerous. And I don't know why there'd be so many people saying it if it's not true." Because you're gullible, mom.
"<noun> scares her because <authoritative source> said that <noun> is very dangerous. And I don't know why there'd be so many people saying it if it's not true."
The truly frustrating part is how many see this ubiquitous pattern in some places, but are blind to it elsewhere.
That "pattern" actually indicates that something is true most of the time (after all, a lot of dangerous things really exist). So "noticing" this pattern seems to rely on being all-knowing?
I'm not sure if this is commentary on me somehow or not, lol, but I agree with you. She is the same person who will point out issues with things my brother brings up, but yeah, is unable to recognize it when she does it herself. I'm sure I'm guilty of the same, but, naturally, I don't notice my own instances.
Meh, I don't think this extrapolates to a general principle very well. While no authoritative source is perfectly reliable, some are more reliable than others. And Elon Musk is just full of crap.
Is Mom scared because Musk told her to be scared, or because she thought about the matter herself and concluded that it's scary? Why do you assume that people scared of AI must be under the influence of rich people/corps today, rather than this fear being informed by their own consideration of the problem or by decades of media that has been warning about the dangers of AI?
Maybe Mom worries about any radical new technology because she lived through nuclear attack drills in school. Or because she's already seen computers and robots take people's jobs. Or because she watched Terminator or read Neuromancer. Or because she reads LessWrong. Why assume it's because she's fallen under the influence of Musk?
Because most sociologists suggest that most people don’t take time to critically think like this. Emotional brain wins out usually over the rational one.
Then you have this idea of the sources of information most people have access to being fundamentally biased and incentivized towards reporting certain things in certain manners and not others.
You basically have low odds of thinking rationally, low odds of finding good information that isn't slanted in some way, and far lower odds still when you take the product of those probabilities: the chance that you both act rationally and somehow have access to the ground truth. That's to say nothing of the expertise required to place all of this truth into the correct context. And if you further require the mother to be an AI expert, the odds of all of this working out successfully get lower still.
Obviously, I don't know that person's mom, but I know mine and other moms, and I don't think it's a milquetoast conclusion that it's a combination of both. However, the former (as both a proxy and Musk himself) probably carries more weight. Most non-technical people's thoughts on AI aren't particularly nuanced or original.
Musk certainly doesn't help with anything. In my experience, a lot of people of my mom's generation are still sucking the Musk lollipop and are completely oblivious to Musk's history of lying to investors, failing to keep promises, taking credit for things he and his companies didn't invent, promoting an actual Ponzi scheme, claiming to be autistic, suggesting he knows more than anyone else, and so on. Even upon being informed, none of it ends up mattering because "he landed a rocket rightside up!!!"
So yeah, if Musk hawks some lame opinion on a thing like AI, tons of people will take that as an authoritative stance.
First, I don't assume, I know my mom and her knowledge about topics. Second, the quoted text was a quote. She literally said that. (replacing the word "her" with "me")
I'm not sure what you're getting at otherwise. It's not like she and I haven't spoken outside of her saying that phrase. She clearly has no idea what AI/ML is or how it works and is prone to fear-mongering messages on social media telling her how to think and to be scared of things. She has a strong history of it.
AGI is scary, I think we can all agree on that. What the current hype does is increase the estimated probability of AGI actually happening in the near future.
yes, just like "our nuclear bombs are so powerful, they could wipe out civilisation", which led to strict regulation around them and lack of open-source nuclear bombs
Maybe an odd take, but I'm not sure what people actually mean when they say "AI terrifies them". Terrified is a strong word. Are people unable to sleep? Biting their nails constantly? Is this the same terror as watching a horror movie? Being chased by a mountain lion?
I have a suspicion that it's sort of a default response. Socially expected? Then you poll people: Are you worried about AI doing XYZ? People just say yes, because they want to seem informed, and the kind of person that considers things carefully.
Honestly not sure what is going on. I'm concerned about AI, but I don't feel any actual emotion about it. Arguably I must have some emotion to generate an opinion, but it's below conscious threshold obviously.
And that's exactly the goal - make mom and dad scared so they vote for those who provide "protection" from the manufactured fear. And resorting to this type of tactic to make your product viable just proves how weak your position is.
I think more people should speak out left and right about what’s going on to educate mom and dad.
Here we have all these free-market-libertarian tech execs asking for more regulation! They say they believe regulation is "always" terrible -- unless it's good for their profits. In that case, they think it's actually important and necessary. They remind me of Mr. Burroughs in the movie "Class:"
Mr. Burroughs: "Government control, Jonathan, is anathema to the free-enterprise system. Any intelligent person knows you cannot interfere with the laws of supply and demand."
Jonathan: "I see your point, sir. That's the reason why I'm not for tariffs."
Mr. Burroughs: "Right. No, wrong! You gotta have tariffs, son. How you gonna compete with the damn foreigners? Gotta have tariffs."
I mean if they were lying about that, what else might they be lying about? Maybe giving huge tax breaks to the 0.1% isn't going to result in me getting more income? Maybe it is in fact possible to acquire a CEO just as good or better than your current one that doesn't need half a billion dollar compensation package and an enormous golden parachute to do their job? I'm starting to wonder if billionaires are trustworthy at all.
An alternative idea to the regulatory moat thesis is that it serves Big Tech’s interests to have people think it is dangerous because then surely it must also be incredibly valuable (and hence lead to high Big Tech valuations).
I think it was Cory Doctorow who first pointed this out.
You don’t even need fear; hype alone would do that, and did just that over the past year, with AI stocks exploding exponentially like some shilled shitcoin before dramatic cliff-like falls. Mention AI in your earnings call and your stock might move 5%.
Exactly like "fentanyl is so dangerous, a few milligrams can kill you", which only led to massive fentanyl demand because everybody wants the drug branded the most powerful.
A few milligrams CAN kill you. This was the headline after many thousands of overdoses, it didn't invigorate the marketplace. Junkies knew of Fent decades ago, it's only prevalent in the marketplace because of effective laws regarding the production of other illicit opiates, which is probably the real lesson here.
It's all a big balloon - squeezing one side just makes another side bigger.
Any source for this? I thought the demand was based on its low cost and high potency so it's easier to distribute. Is anyone really seeking out fentanyl specifically because the overdose danger is higher?
Yup, this is it. Anyone who has worked at all closely with "AI" can immediately smell the BS of the existential crisis. Elon Musk started this whole trend due to his love of sci-fi, and Sam Altman ran with that idea heavily because it adds to the novelty of OpenAI.
I don't think they are such capable actors as to do it on purpose.
I think they really believe what they are saying, because people in such positions tend to be strong believers in something, and that something happens to be the "it" thing of the moment and thus propels them from rags to riches (or, in Musk's case, further propels him toward even more riches).
Let's be honest here, what's Sam Altman without AI? What's Fauci without COVID, what's Trump without the collective paranoia that got him elected?
I think there are actual existential and “semi-existential” risks, especially with going after an actual AGI.
Separately, I think Ng is right - big corp AI has a massive incentive to promote doom narratives to cement themselves as the only safe caretakers of the technology.
I haven’t yet succeeded in squaring these two into a course of action that clearly favors human freedom and flourishing.
Submitters: "Please submit the original source. If a post reports on something found on another site, submit the latter." - https://news.ycombinator.com/newsguidelines.html
Imagine someone invents a machine that can give infinite energy.
Do you a) sell that energy, or b) give the technology to build the machine to everyone?
Clearly b is better for society, a is locking up profits.
not the most convincing of arguments
I am always in awe at how easily people craft unfalsifiable worldviews in service to their preconceived opinions.
AlexNet, the paper that arguably started it all, came out of Hinton's lab.
https://papers.nips.cc/paper_files/paper/2012/hash/c399862d3...
I really don't think they need to build any more of a brand.
Hanlon's razor works great when applied to your personal relationships, but it falls apart when billions/trillions of dollars are at stake.
AI is a tool today; tomorrow AI will be calling the shots in many domains. It's worth planning for tomorrow.
Does this mean "humans with bad motives" or does it extend to "humans who deploy AI without an understanding of the risk"?
I would say the latter warrants a discussion on the AI systems, if they make it hard to understand the risk due to opaqueness.
Are we going to ban and regulate Photoshop and GIMP because bad people use them to create false imagery for propaganda?
Actually, back that up for a second.
Are we going to ban and regulate computers (enterprise and personal) because bad people use them for bad things?
Are we going to ban and regulate speech because bad people say bad things?
Are we going to ban and regulate hands because bad people use them to do bad things?
The buck always starts and stops at the person doing the act. A tool is just a tool, blaming the tool is nothing but an act of scapegoating.
If it's been trained by bad actors, that's really not a good thing.
A snowball probably isn't harmful unless you do something really dumb.
A snow drift isn't harmful unless you're not cautious.
An avalanche, well that gets harmful pretty damned quick.
These things are all snow, but suddenly at some point scale starts to matter.
You know, sometimes shit is just dangerous.
However, Elon Musk has openly worried about AI for a number of years. He even got a girlfriend out of it: https://www.vice.com/en/article/evkgvz/what-is-rokos-basilis...