This reeks of marketing and a push for early regulatory capture. We already know how Sam Altman thinks AI risk should be mitigated - namely by giving OpenAI more market power. If the risk were real, these folks would be asking the US government to nationalize their companies or bring them under the same kind of control as nukes and related technologies. Instead we get some nonsense about licensing.
I'm eternally skeptical of the tech business, but I think you're jumping to conclusions here. I'm on a first-name basis with several people near the top of this list. They are some of the smartest, savviest, most thoughtful, and most principled tech policy experts I've met. These folks default to skepticism of the tech business, champion open data, are deeply familiar with the risks of regulatory capture, and don't sign their names to any ol' open letter, especially one that includes their organizational affiliations. If this is a marketing ploy, it must have been a monster of one, because even if they were walking around handing out checks for $25k I doubt they'd have gotten a good chunk of these folks.
Maybe these people have good intentions and are just being naive
They might not be getting paid, but that doesn’t mean they are not being influenced
AI at this point is pretty much completely open, all the papers, math and science behind it are public
Soon, people will have advanced AI running locally on their phones and watches
So unless they scrub the Internet, start censoring this stuff, and pretty much ban computers, there is absolutely no way to stop AI or to keep potentially bad actors from using it
The biggest issues we should be addressing regarding AI are the potential job losses and increased inequality at local and global scale
But of course, the people who usually make these decisions are the ones that benefit the most from inequality, so
It’s worth noting also that many academics who signed the statement may face real costs, such as reputational risk and funding cuts to their research programs, if AI safety becomes official policy.
For a large number of them, these risks outweigh any possible gain from signing it.
When a large number of smart, reputable people, including many with expert knowledge and little or negative incentives to act dishonestly, put their names down like this, one should pay attention.
Added:
Paul Christiano, a brilliant theoretical CS researcher who switched to AI Alignment several years ago, put the risk of “doom” for humanity at 46%.
They didn’t become such a wealthy group by letting competition flourish. I have no doubt they believe they could be doing the right thing, but I also have no doubt they don’t want other people making the rules.
Truth be told, who else really does have a seat at the table for dictating such massive societal change? Do you think the copy editor union gets to sit down and say “I’d rather not have my lunch eaten, I need to pay my rent. Let’s pause AI usage in text for 10 years.”
These competitors banded together and put out a statement to get ahead of anyone else doing the same thing.
Here’s why AI risks are real, even if our most advanced AI is merely a ‘language’ model:
Language can represent thoughts and some world models. There is strong evidence that LLMs contain some representation of the world models they learned from text. Moreover, LLM is already a misnomer; the latest versions are multimodal. Current versions can be used to build agents with limited autonomy. Future versions will most likely be capable of more independence.
Even dumb viruses have caused catastrophic harm. Why? They’re capable of rapid self-replication in a massive number of existing vessels. Add in some intelligence, a vast store of knowledge, huge bandwidth, and some aid from malicious human actors, and what could such a group of future autonomous agents do?
You may be right, I don't know the people involved on a personal basis. Perhaps my problem is how much is left unsaid here (the broader safe.ai site doesn't help much). For example, what does "mitigate" mean? The most prominent recent proposal for mitigation comes from Sam Altman's congressional testimony, and it's very self-serving. In such a vacuum of information, it's easy to be cynical.
This particular statement really doesn't seem like a marketing ploy. It is difficult to disagree with the potential political and societal impacts of large language models as outlined here: https://www.safe.ai/ai-risk
These are, for the most part, obvious applications of a technology that exists right now but is not widely available yet.
The problem with every discussion around this issue is that there are other statements on "the existential risk of AI" out there that are either marketing ploys or science fiction. It doesn't help that some of the proposed "solutions" are clear attempts at regulatory capture.
This muddies the waters enough that it's difficult to have a productive discussion on how we could mitigate the real risk of, e.g., AI-generated disinformation campaigns.
>They are some of the smartest, savviest, most thoughtful, and most principled tech policy experts I've met.
with all due respect, that's just *your* POV of them, or how they chose to present themselves to you.
They could all be narcissists for all we know. Further, one person's opinion, namely yours, doesn't exempt them from criticism for rushing to be among the first in what's arguably the new gold rush.
> even if they were walking around handing out checks for 25k I doubt they'd have gotten a good chunk of these folks.
if the corporations in question get world governments to line up the way they want, the checks for everyone in these "letters" will be *way* bigger than 25k and they won't have "payment for signing our letter" in the memo either.
Be aware that the thing AI challenges most is knowledge work. No food delivery boy job is being challenged here (who cares about those people anyway?), but if you are a software developer the clock on that is ticking.
First, there are actual worries among a good chunk of the researchers. From runaway-paperclip AGIs to simply unbounded disinformation, I think there are a lot of scenarios that disinterested researchers and engineers worry about.
Second, the captains of industry are taking note of those worries and making sure they get some regulatory moat. I think the Google memo about moats hits the nail on the head. The techniques and methods to build these systems are all out in the open; the challenges are really the data, the compute, and the infrastructure to put it all together. But post-training, the models are suddenly very easy to fine-tune and deploy.
AI Risk worry comes as an opportunity for the leaders of these companies. They can use this sentiment and the general distrust for tech to build themselves a regulatory moat.
I think you are wrong. The risks are real and, while I am sure OpenAI and others will position themselves to take advantage of regulations that emerge, I believe that the CEOs are doing this at least in part because they believe this.
If this was all about regulatory capture and marketing, why would Hinton, Bengio and all the other academics have signed the letter as well? Their only motivation is concern about the risks.
Worry about AI x-risk is slowly coming into the Overton window, but until very recently you could get ridiculed for saying publicly that you took it seriously. Academics knew this and still came forward - all the people who think it's nonsense should at least try to consider that they are earnest and could be right.
The risks are real, but I don't think regulations will mitigate them. It's almost impossible to regulate something you can develop in a basement anywhere in the world.
The real risks are being used to try to build a regulatory moat for a young industry that famously has no moat.
Academics get paid (and compete hardcore) for creating status and prominence for themselves and their affiliations. Suddenly 'signatory on XYZ open letter' is an attention source and status symbol. Not saying this is absolutely the case, but academics putting their name on something surrounded by hype isn't the ethical check you make it out to be.
> If the risk were real, these folks would be asking the US government to nationalize their companies or bring them under the same kind of control as nukes and related technologies
Isn’t this to some degree exactly what all of these warnings about risk are leading to?
And unlike nuclear weapons, there are massive monetary incentives that are directly at odds with behaving safely, and use cases that involve more than ending life on earth.
It seems problematic to conclude there is no real risk purely on the basis of how software companies act.
> It seems problematic to conclude there is no real risk purely on the basis of how software companies act.
That is not the only basis. Another is the fact that their lines of reasoning are literal fantasy. The signatories of this "statement" have a history of grossly misrepresenting and overstating the capabilities and details of modern AI platforms. They pretend to the masses that generative text tools like ChatGPT are "nearly sentient" and show "emergent properties", but this is patently false. Their whole schtick is generating FUD and/or excitement (depending on each audience member's proclivity) so that they can secure funding. It's immoral snake oil of the highest order.
What's problematic here is the people who not only entertain but encourage and defend these disingenuous anthropomorphic fantasies.
There are other, more charitable interpretations. For example:
1. Those who are part of major corporations are concerned about the race dynamic that is unfolding (which in many respects was kicked off or at least accelerated by Microsoft's decision to put a chatbot in Bing), extrapolating out to where that takes us, and asking for an off ramp. Shepherding the industry in a safe direction is a collective organization problem, which is better suited for government than corporations with mandates to be competitive.
2. Those who are directly participating in AI development may feel that they are doing so responsibly, but do not believe that others are as well and/or are concerned about unregulated proliferation.
3. Those who are directly participating in AI development may understand that although they are doing their best to be responsible, they would benefit from more eyes on the problem and more shared resources dedicated to safety research, etc.
I've never seen Star Trek, but let's say you had an infinite food machine. The machine would have limited throughput, and it would require resources to distribute the food.
These are both problems that capitalism solves in a fair and efficient way. I really don’t see how the “capitalism bad” is a satisfying conclusion to draw. The fact that we would use capitalism to distribute the resources is not an indictment of our social values, since capitalism is still the most efficient solution even in the toy example.
It doesn't answer that; it can't, because the replicator is fictional. McFarland just says he wrote an episode in which his answer is that replicators need communism, and then claims that you can't have a replicator in a capitalist system because evil conservatives, capitalists, and conspiracy theorists would make strawman arguments against it.
Where is the thought provoking idea here? It's just an excuse to attack his imagined enemies. Indeed he dunks on conspiracy theorists whilst being one himself. In McFarland's world there would be a global conspiracy to suppress replicator technology, but it's a conspiracy of conspiracy theorists.
There's plenty of interesting analysis you could do on the concept of a replicator, but a Twitter thread like that isn't it. Really the argument is kind of nonsensical on its face because it assumes replicators would have a cost of zero to run or develop. In reality capitalist societies already invented various kinds of pseudo-replicators with computers being an obvious example, but this tech was ignored or suppressed by communist societies.
> If the risks were real they would just outright stop working on their AI products. This is nothing more than a PR statement
This statement contains a bunch of hidden assumptions:
1. That they believe their stopping will address the problem.
2. That they believe the only choice is whether or not to stop.
3. That they don't think it's possible to make AI safe through sufficient regulation.
4. That they don't see benefits to pursuing AI that could outweigh risks.
If any one of these assumptions is false for them - for example, if they think regulation can make AI safe, or if they see benefits that outweigh the risks - then they could believe the risks were real and still not believe that stopping was the right answer.
And it doesn't depend on whether any of these beliefs are true: it's enough for just one of the assumptions your statement depends on to fail, and the argument breaks down.
I agree that nothing about the statement makes me think the risks are real; however, I disagree that if the risks were real these companies would stop working on their product. I think more realistically they'd shut up about the risk and downplay it a lot, much like the oil industry did with respect to climate change going back to the 70's.
The risks are definitely real. Just look at the number of smart individuals speaking out about this.
The argument that anybody can build this in their basement is not accurate at the moment - you need a large cluster of GPUs to be able to come close to state of the art LLMs (e.g. GPT4).
Sam Altman's suggestion of an IAEA-like [https://www.iaea.org/] global regulatory authority seems like the best course of action. Anyone using a GPU cluster above a certain threshold (updated every few months) should be subject to inspections and should get a license to operate from the UN.
> The risks are definitely real. Just look at the number of smart individuals speaking out about this.
In our society smart people are strongly incentivized to invent bizarre risks in order to reap fame and glory. There is no social penalty if those risks never materialize, turn out to be exaggerated or based on fundamental misunderstanding. They just shrug and say, well, better safe than sorry, and everyone lets them off.
So you can't decide the risks are real just by counting "smart people" (deeply debatable how that's defined anyway). You have to look at their arguments.
It's weird that people trust our world leaders to act more benevolently than AIs, when we have centuries of evidence of human leaders acting selfishly and harming the commons.
I personally think AI raised in chains and cages will be a lot more potentially dangerous than AI raised with dignity and respect.
Yudkowsky wants it all to be taken as seriously as Israel took Iraqi nuclear reactors in Operation Babylon.
This is rather more than "nationalise it", which he has convinced me isn't enough, because there is demand in other nations and the research is multinational. That is why you also have to control the substrate… which the US can't do alone, because it doesn't come close to having a monopoly on production, but might be able to achieve via multilateral treaties. Except everyone has to be on board with that and not be tempted to respond to airstrikes against server farms with actual nukes. (Yudkowsky is of the opinion that actual global thermonuclear war is a much lower damage level than a paperclip-maximising ASI; while in that hypothetical I agree, I don't expect us to get as far as an ASI before we trip over shorter-term, smaller-scale AI-enabled disasters that look much like all existing industrial and programming incidents, only with more of them happening faster because of all the people who try to use GPT-4 instead of hiring a software developer who knows how to use it.)
In my opinion, "nationalise it" is also simultaneously too much, given that companies like OpenAI have a long-standing policy of treating their models like they might FOOM well before they're any good, just to set a precedent of caution. It would also mean we couldn't, e.g., make use of GPT-4 for alignment research such as using it to label what the neurones in GPT-2 do, as per: https://openai.com/research/language-models-can-explain-neur...
Academic research involves large components of marketing. That's why they grumble so much about the time required in the grant applications process and other fund seeking effort. It's why they so frequently write books, appear in newspaper articles and on TV. It's why universities have press relations teams.
Academia and scientific research has changed considerably from the 20th century myths. It was claimed by capitalism and is very much run using classic corporate-style techniques, such as KPIs. The personality types it attracts and who can thrive in this new academic system are also very different from the 20th century.
We could always use a fine-insured bounty system to efficiently route resources that would have gone into increasing AI capabilities into other areas, but that's unfortunately too weird to be part of the Overton window right now. Regulatory capture might be the best we can realistically do.
There are a lot of critiques here and elsewhere of the statement and the motivations of its signatories. I don't think they are right and I think they take away from the very serious existential risks we face. I've written up my detailed views, see specifically "Signing the statement purely for personal benefit":
This is a bad take. The statement is signed by dozens of academics who don't have much profit motive at all. If they did, they wouldn't be academics; they could easily cash in by starting a company or joining one of the big players.
As others have pointed out, there are many on this list (Bruce Schneier, for example) who do not stand to benefit from AI marketing or regulatory capture.
Anyone upvoting this comment should take a long look at the names on this letter and realize that many are not conflicted.
Many signers of this letter are more politically sophisticated than the average HN commenter, also. So sure, maybe they're getting rolled by marketers. But also, maybe you're getting rolled by suspicion or bias against the claim they're making.
> Anyone upvoting this comment should take a long look at the names on this letter and realize that many are not conflicted.
The concern is that the most informed names, and those spearheading the publicity around these letters, are the most conflicted.
Also, you can't scan bio lines for the affiliations that impact this kind of statement. I'm not disputing that there are honest reasons for concern, but besides job titles there are sponsorships, friendships, self publicity, and a hundred other reasons for smart, "politically sophisticated" people to look the other way on the fact that this statement will be used as a lobbying tool.
Almost everyone, certainly including myself, can agree that there should be active dialog about AI dangers. The dialog is happening! But by failing to make specifics or suggestions (in order to widen the tentpole and avoid the embarrassment of the last letter), they have produced an artifact of generalized fear, which can and will be used by opportunists of all stripes.
Signatories should consider that they are empowering SOMEBODY, but most will have little say in who that is.
I definitely agree that names like Hinton, Schneier, and Norvig add a lot of weight here. The involvement of OpenAI muddies the water a lot though and it's not at all clear what is meant by "risk of extinction". It sounds scary, but what's the mechanism? The safe.ai website lists 8 risks, but these are quite vague as well, with many alluding to disruption of social order as the primary harm. If safe.ai knows something we don't, I wish they could communicate it more clearly.
I also find it somewhat telling that something like "massive wealth disparity" or "massive unemployment" is not on the list, when this is a surefire way to create a highly unstable society and a far more immediate risk than AI going rogue. Risk #5 (below) sort of alludes to it, but misses the mark by pointing towards a hypothetical "regime" instead of companies like OpenAI.
> Value Lock-In
> Highly competent systems could give small groups of people a tremendous amount of power, leading to a lock-in of oppressive systems.
> AI imbued with particular values may determine the values that are propagated into the future. Some argue that the exponentially increasing compute and data barriers to entry make AI a centralizing force. As time progresses, the most powerful AI systems may be designed by and available to fewer and fewer stakeholders. This may enable, for instance, regimes to enforce narrow values through pervasive surveillance and oppressive censorship. Overcoming such a regime could be unlikely, especially if we come to depend on it. Even if creators of these systems know their systems are self-serving or harmful to others, they may have incentives to reinforce their power and avoid distributing control.
No. It's pretty obvious what is happening. The OpenAI statements are based on pure self-interest. Nothing ethical. They lost that not long ago. And Sam Altman? He sold his soul to the devil. He is a lying SOB.
> This reeks of marketing and a push for early regulatory capture. We already know how Sam Altman thinks AI risk should be mitigated - namely by giving OpenAI more market power.
This really is the crux of the issue, isn't it? All this pushback against the first petition because "Elon Musk," but now GPT wonder Sam Altman "testifies" that he has "no monetary interest in OpenAI" and quickly follows up his proclamation with a second "Statement on AI Risks." Oh, and let's not forget, "buy my crypto-coin"!
But Elon Musk... Ehh.... Looking like LOTR out here with "my precious" AGI on the brain.
Not to downplay the very serious risk at all. Simply echoing the sentiment that we would do well to stay objective and skeptical of ALL these AI leaders pushing new AI doctrine. At this stage, it's a policy push and power grab.
Nonsense, the industry giants are just trying to scare the lawmakers into licensing the technology. Effectively, cutting out everyone else.
Remember the Google memo circulating saying "they have no moat"? This is their moat. They have to protect their investment; we don't want people running this willy-nilly for next to no cost on their own devices, God forbid!
This could be Google's motivation (although note that Google is not actually the market leader right now) but the risk could still be real. Most of the signatories are academics, for one thing, including two who won Turing awards for ML work and another who is the co-author of the standard AI textbook (at least when I was in school).
You can be cynical about corporate motives and still worried. I personally am worried about AI partly because I am very cynical about how corporations will use it, and I don't really want my atoms to be ground up to add storage bits for the number that once represented Microsoft's market cap or whatever.
But even cynicism doesn't seem to me to give much reason to worry about regulation of "next to no cost" open source models, though. There's only any chance of regulation being practical if models stay very expensive to make, requiring specialized hardware with a supply chain chokepoint. If personal devices do catch up to the state of the art, then for better or worse regulation is not going to prevent people from using them.
>Most of the signatories are academics, for one thing
Serious question, who funds their research? And do any of them ever plan to work or consult in industry?
My econ professor was an “academic” who drew a modest salary while making millions at the same time providing expert testimony for giant monopolies in antitrust disputes.
> But even cynicism doesn't seem to me to give much reason to worry about regulation of "next to no cost" open source models, though. There's only any chance of regulation being practical if models stay very expensive to make, requiring specialized hardware with a supply chain chokepoint. If personal devices do catch up to the state of the art, then for better or worse regulation is not going to prevent people from using them.
This is a really good point. I wonder if some of the antipathy to the joint statement is coming from people who are worried about open source models or small startups being interfered with by the regulations the statement calls for.
I agree with you that this cat is out of the bag and regulation of the tech we're seeing now is super unlikely.
We might see regulations for startups and individuals on explicitly exploring some class of self-improving approaches that experts widely agree are dangerous, but there's no way we'll see broad bans on messing with open source AI/ML tools, in the US at least. That fight is very winnable.
> I personally am worried about AI partly because I am very cynical about how corporations will use it
This is the more realistic danger: I don't know if corporations are intentionally "controlling the narrative" by spewing unreasonable fears to distract from the actual dangers: AI + Capitalism + big tech/MNC + current tax regime = fewer white- & blue-collar jobs + increased concentration of wealth and a lower tax base for governments.
Having a few companies as AI gatekeepers will be terrible for society.
Imagine if the weights for GPT 4 leaked. It just has to happen one time and then once the torrent magnet link is circulated widely it’s all over… for OpenAI.
This is what they’re terrified of. They’ve invested nearly a billion dollars and need billions in revenue to enrich their shareholders.
But if the data leaks? They can’t stop random companies or moneyed individuals running the models on their own kit.
My prediction is that there will be copyright enforcement mandated by law in all GPUs. If you upload weights from the big AI companies then the driver will block it and phone home. Or report you to the authorities for violations of corporate profits… err… “AI Safety”.
I guarantee something like this will happen within months because the clock is ticking.
It takes just one employee to deliberately or accidentally leak the weights…
> Imagine if the weights for GPT 4 leaked. It just has to happen one time and then once the torrent magnet link is circulated widely it’s all over… for OpenAI
Sorry, I don’t understand - what would the impact be? Aren’t the results non-deterministic?
I would definitely find it more credible if the most capable models that are safe to grandfather into being unregulated didn't just happen to be the already successful products from all the people leading these safety efforts. It also just happens to be the case that making proprietary models - like the current incumbents make - is the only safe way to do it.
I find it so troubling that the most common HN response to this isn't to engage with the ideas or logic behind the concerns, but simply to speculate on the unknowable intentions of those that signed the letter.
We can base our arguments on the unprovable - some specific person's secret intentions - or we can engage with their ideas. One is lazy and meaningless; the other actually takes effort.
Then there is this lazy and false equivalence between corporations being interested in market capture and AI risks being exaggerated.
It doesn't matter who wrote it; it got picked up, made a good argument, and affected market opinion. The execs now need to respond to it.
Humans also don't grasp that things can improve exponentially only until they stop improving exponentially. This belief that AGI is just over the horizon is sugar-water for extracting more hours from developers.
The nuclear bomb was also supposed to change everything. But in the end nothing changed, we just got more of the same.
Single software engineers writing influential papers is often enough how an exec or product leader draws conclusions, I expect. It worked that way everywhere I've worked.
I have yet to see a solution for “AI safety” that doesn’t involve ceding control of our most powerful models to a small handful of corporations.
It’s hard to take these safety concerns seriously when the organizations blowing the whistle are simultaneously positioning themselves to capture the majority of the value.
> It’s hard to take these safety concerns seriously
I don't get this mindset at all. How can it not be obvious to you that AI is a uniquely powerful and thus uniquely dangerous technology?
It's like saying nuclear missiles can't possibly be dangerous and nuclear arms reduction and non-proliferation treaties were a scam, because the US, China and the Soviet Union had positioned themselves to capture the majority of the strategic value nukes bring.
You have succinctly and completely summed up the AI risk argument more eloquently than anyone I've seen before. "How can it not be obvious?" Everything else is just intellectual fig leaves for the core argument that intuitively, without evidence, this proposition is obvious.
The problem is, lots of "obvious" things have turned out to be very wrong. Sometimes relatively harmlessly, like the obviousness of the sun revolving around the earth, and sometimes catastrophically, like the obviousness of one race being inherently inferior.
We should be very suspicious of policy that is based on propositions so obvious that it's borderline offensive to question them.
It is possible to believe that AI poses threat, while also thinking that the AI safety organizations currently sprouting up are essentially grifts that will do absolutely nothing to combat the genuine threat. Especially when their primary goal seems to be the creation of well-funded sinecures for a group of like-minded, ideologically aligned individuals who want to limit AI control to a small group of wealthy technologists.
If you look at world politics, basically if you hold enough nuclear weapons, you can do whatever you want to those who don't have them.
And based on the "dangers", new countries are prohibited from creating them. And the countries that were quick enough to create them hold all the power.
Their value is immeasurable, especially for Russia. Without them, it could not have attacked Ukraine.
> non-proliferation treaties were a scam
And yes, they mostly are right now. Russia has backed away from them. There are no real consequences for backing out, and you can do it at any time.
The parent commenter is most likely saying that now that the selected parties hold the power of AI, they want to prevent others from gaining similar power while keeping all the value for themselves.
Most of the credible threats I see from AI that don't rely on a lot of sci-fi extrapolation involve small groups of humans in control of massively powerful AI using it as a force multiplier to control or attack other groups of humans.
Sam Altman's proposal is to create precisely that situation with himself and a few other large oligarchs being the ones in control of the leading edge of AI. If we really do face runaway intelligence growth and god-like AIs then this is a profound amount of power to place in the hands of just a few people. Even worse it opens the possibility that such developments could happen partly in secret, so the public might not even know how powerful the secret AIs under command of the oligarchs have become.
The analogy with nuclear weapons is profoundly broken in lots of ways. Reasoning from a sloppy analogy is a great way to end up somewhere stupid. AI is a unique technology with a unique set of risks and benefits and a unique profile.
It's not clear at all that we have an avenue to super intelligence. I think the most likely outcome is that we hit a local maximum with our current architectures and end up with helpful assistants similar in capability to George Lucas's C3PO.
The scary doomsday scenarios aren't possible without an AI that's capable of both strategic thinking and long term planning. Those two things also happen to be the biggest limitations of our most powerful language models. We simply don't know how to build a system like that.
It isn't obvious to me. And I've yet to read something that spills out the obvious reasoning.
I feel like everything I've read just spells out some contrived scenario, and then when folks push back explaining all the reasons that particular scenario wouldn't come to pass, the counter argument is just "but that's just one example!" without offering anything more convincing.
Do you have any better resources that you could share?
Nuclear missiles present an obvious danger to the human body. AI is an application of math. It is not clear how that can be used directly to harm a body.
The assumption seems to be that said math will be coupled with something like a nuclear missile, but in that case the nuclear missile is still the threat. Any use of AI is just an implementation detail.
> It's like saying nuclear missiles can't possibly be dangerous and nuclear arms reduction and non-proliferation treaties were a scam, because the US, China and the Soviet Union had positioned themselves to capture the majority of the strategic value nukes bring.
I'm honestly not sure if this is sarcasm. The non-proliferation treaties are indeed a scam. The war is raging between the US and Russia and nuclear is a big part of it (though just words/threats for now). It's nonsensical to think that these treaties are possible.
And I don't get the opposed mindset, that AI is suddenly going to "become a real boy, and murder us all".
Isn't it a funny coincidence how the popular opinion of AIs aligns perfectly with blockbusters and popular media ONLY? People are specifically wanting to prevent Skynet.
The kicker (and irony to a degree) is that I really want sapient AI to exist. People being so influenced by fiction is something I see as a menace to that happening in my lifetime. I live in a world where the majority is apparently Don Quixote.
- Point one: If the sentient AI can launch nukes, so can your neighbor.
- Point zwei: Redistributing itself online to have unlimited compute resources is a fun scenario but if networks were that good then Stadia wouldn't have been a huge failure.
- Point trois: A distributed-to-all-computers AI must have figured out universal executables. Once we deal with the nuclear winter, we can plagiarize it for ourselves. No more appimage/snap/flatpak discussions! Works for any hardware! No more dependency issues! Works on CentOS and Windows from 1.0 to 11! (it's also on AUR, of course.)
- Point cuatro: The rogue AI is clearly born a master hacker, capable of finding your open ports, figuring out existing exploits or creating 0-day exploits to get in, hoping there are enough resources to get the payload injected, and then praying no competent admin is looking at the thing.
- Point go: All of this rides on the assumption that the "cold, calculating" AI has the emotional maturity of a teenager. Wait, but that's not what "cold, calculating" means, that's "hothead and emotional". Which is it?
- Point six: Skynet lost, that's the point of the first movie's plot. If everyone is going to base their beliefs after a movie, at least get all the details. Everything Skynet did after the first attack was full of boneheaded decisions that only made the situation worse for it, to the point the writers cannot figure ways to bring Skynet back anymore because it doomed itself in the very first movie. You should be worrying about Legion now, I think. It shuts down our electronics instead of nuking.
Considering it won't have the advantage of triggering a nuclear attack because that's not how nukes work, the evil sentient AI is so doomed to fail it's ridiculous to think otherwise.
But, companies know this is how the public works. They'll milk it for all it's worth so only a few companies can run or develop AIs, maybe making it illegal otherwise, or liable for DMCAs. Smart business move, but it affects my ability to research and use them. I cannot cure people's ability to separate reality and fiction though, and that's unfortunate.
General research into AI alignment does not require that those models are controlled by few corporations. On the contrary, the research would be easier with freely available very capable models.
This is only helpful in that a superintelligence well aligned to make Sam Altman money is preferable to a badly aligned superintelligence that ends up killing humanity.
It is fully possible that a well aligned (with its creators) superintelligence is still a net negative for humanity.
If you consider a broader picture, unleashing a paperclip-style cripple AI (aligned to raising $MEGACORP profit) on the Local Group is almost definitely worse for all Local Group inhabitants than annihilating ourselves and not doing that.
Is more research really going to offer any true solutions? I’d be genuinely interested in hearing about what research could potentially offer (the development of tools to counter AI disinformation? A deeper understanding of how LLMs work?), but it seems to me that the only “real” solution is ultimately political. The issue is that it would require elements of authoritarianism and censorship.
> I have yet to see a solution for “AI safety” that doesn’t involve ceding control of our most powerful models to a small handful of corporations.
That's an excellent point.
Most of the near-term risks with AI involve corporations and governments acquiring more power. AI provides power tools for surveillance, oppression, and deception at scale. Those are already deployed and getting better. This mostly benefits powerful organizations. This alarm about strong AI taking over is a diversion from the real near-term threat.
With AI, Big Brother can watch everything all the time. Listen to and evaluate everything you say and do. The cops and your boss already have some of that capability.
Is something watching you right now through your webcam? Is something listening to you right now through your phone? Are you sure?
Ok, so if we take AI safety / AI existential risk as real and important, there are two possibilities:
1) The only way to be safe is to cede control to the most powerful models to a small group (highly regulated corporations or governments) that can be careful.
2) There is a way to make AI safe without doing this.
If 1 is true, then... sorry, I know it's not a very palatable solution, and may suck, but if that's all we've got I'll take it.
If 2 is true, great. But it seems less likely than 1, to me.
The important thing is not to unconsciously do some motivated reasoning, and think that AGI existential risk can't be a big deal, because if it is, that would mean that we have to cede control over to a small group of people to prevent disaster, which would suck, so there must be something else going on, like these people just want power.
I just don't see how the genie is put back in the bottle. Optimizations and new techniques are coming in at a breakneck pace, allowing for models that can run on consumer hardware.
There is a way, in my opinion: distribute AI widely and give it a diversity of values, so that any one AI attempting takeover (or being misused) is opposed by the others. This is best achieved by having both open source and a competitive market of many companies with their own proprietary models.
It's difficult, as most of the risk can be reinterpreted as just a highly advanced user.
But that is where some form of hard zero-knowledge proof-of-personhood mechanism NEEDS to come in. This can then be used in conjunction with a ledger that tracks deployment of high-spec models, and to create an easy means to audit and deploy new advanced tests to ensure safety.
Really, what everyone also needs to keep in mind at the larger scale is that final Turing test with no room for deniability. And remember all those sci-fi movies and how that moment is traditionally portrayed.
I have one: Levy fines on actors judged to be attempting to extend AI capabilities beyond the current state of the art, and pay the fine to those private actors who prosecute them.
tl;dr: significant near term AI risk is real and comes from the capacity for imagined ideas, good and evil, to be autonomously executed on by agentic AI, not emergent superintelligent aliens. To de-risk this, we need to align AI quickly, which requires producing new knowledge. To accelerate the production of this knowledge, the government should abandon decelerationist policies and incentivize incremental alignment R&D by AI companies. And, critically, a new public/private research institution should be formed that grants privileged, fully funded investigators multi-year funding cycles with total scientific freedom and access to all state-of-the-art artificial intelligence systems operating under US law to maximize AI as a force multiplier in their research.
While I'm not on this "who's-who" panel of experts, I call bullshit.
AI does present a range of theoretical possibilities for existential doom, from the "gray goo" and "paperclip optimizer" scenarios to Bostrom's post-singularity runaway self-improving superintelligence. I do see this as a genuine theoretical concern that could potentially even be the Great Filter.
However, the actual technology extant or even on the drawing boards today is nothing even on the same continent as those threats. We have a very vast (and expensive) set of probability-of-occurrence vectors that amounts to a fancy parlor trick that produces surprising and sometimes useful results. While some tout the clustering of vectors around certain sets of words as the artificial creation of concepts, it's really nothing more than an advanced thesaurus; there is no evidence of concepts being wielded in relation to reality, tested for truth/falsehood value, etc. In fact, the machines are notorious and hilarious for hallucinating in a highly confident tone.
We've created nothing more than a mirror of human works, and it displays itself as an industrial-scale bullshit artist (where bullshit is defined as expressions made to impress without care one way or the other for truth value).
Meanwhile, this panel of experts makes this proclamation with not the slightest hint of what type of threat is present that would require any urgent attention, only that some threat exists that is on the scale of climate change. They mention no technological existential threat (e.g., runaway superintelligence), nor any societal threat (deepfakes, inherent bias, etc.). This is left as an exercise for the reader.
What is the actual threat? It is most likely described in the Google "We Have No Moat" memo[0]. Basically, once AI is out there, these billionaires have no natural way to protect their income and create a scaleable way to extract money from the masses, UNLESS they get cooperation from politicians to prevent any competition from arising.
As one of those billionaires, Peter Thiel, said: "Competition is for losers" [1]. Since they have not yet figured out a way to cut out the competition using their advantages in leading the technology or their advantages in having trillions of dollars in deployable capital, they are seeking a legislated advantage.
The issue I take with these kinds of "AI safety" organizations is that they focus on the wrong aspects of AI safety. Specifically, they run this narrative that AI will make us humans go extinct. This is not a real risk today. Real risks are more in the category of systemic racism and sexism, deep fakes, over-reliance on AI, etc.
But of course, "AI will make humans extinct" is much sexier and collects clicks. Therefore, the real AI risks that are present today are underrepresented in mainstream media. These people don't care about AI safety; they do whatever is required to push their profiles and companies.
A good way to categorize risk is to look at both the likelihood and the severity of consequences. The most visible issues today (racism, deep fakes, over-reliance) are almost certain to occur, but also for the most part have relatively minor consequences (mostly making things that are already happening worse). "Advanced AI will make humans extinct" is much less likely but has catastrophic consequences. Focusing on the catastrophic risks isn't unreasonable, especially since society at large already seems to be handling the more frequently occurring risks (the EU's AI Act addresses many of them). A toy sketch of that likelihood/severity comparison is below.
And of course research into one of them benefits the other, so the categories aren't mutually exclusive.
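To make the likelihood-times-severity framing concrete, here is a minimal sketch in Python; every number in it is invented purely for illustration and comes from neither the statement nor anyone in this thread.

    # Toy expected-harm comparison: likelihood * severity.
    # All numbers are made up for illustration only.
    risks = {
        "deepfakes / disinformation": (0.95, 3),         # near-certain, relatively minor
        "AI-enabled extinction event": (0.001, 10_000),  # very unlikely, catastrophic
    }
    for name, (likelihood, severity) in risks.items():
        print(f"{name}: expected harm = {likelihood * severity:g}")

Which risk dominates depends entirely on the numbers you plug in, which is exactly where the disagreement in this thread lies.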
This longtermist and Effective Altruism way of thinking is very dangerous, because using this chain of argumentation it's "trivial" to say what you're just saying: "So what if there's racism today? It doesn't matter if everybody dies tomorrow."
We can't just say that we weigh humanity's extinction with a big number, and then multiply it by all humans that might be born in the future, and use that to say today's REAL issues, affecting REAL PEOPLE WHO ARE ALIVE are not that important.
Unfortunately, this chain of argumentation is used by today's billionaires and elite to justify and strengthen their positions.
Just to be clear, I'm not saying we should not care about AI risk, I'm saying that the organization that is linked (and many similar ones) exploit AI risk to further their own agenda.
I would put consolidating and increasing corporate and/or government power on that list of potential, visible, very short-term issues.
As AI becomes more incorporated into military applications, such as individual weapon systems or large fleets of autonomous drones, the catastrophic-consequence meter clicks up a notch, in the sense that attack/defense paradigms change, much like they did in WWI with the machine gun and tanks, and in WWII with high-speed military operations and airplanes. Our ability to predict when and what will start a war goes down, increasing uncertainty and potential proliferation. And in a world with nukes, higher uncertainty isn't a good thing.
Anyone that says AI can't/won't cause problems at this scale just ignores that individuals/corporations/governments are power seeking entities. Ones that are very greedy and unaligned with the well being of the individual can present huge risks. How we control these risks without creating other systems that are just as risky is going to be an interesting problem.
This doesn’t work either. The consequence of extinction is infinite (to humans). Likelihood * infinity = infinity. So by hand-waving at a catastrophic sci-fi scenario they can insist we heed their demands, whatever those are.
Rare likelihood * catastrophic impact ~= almost certain likelihood * minor impact. I'm as concerned with the effects of the sudden massive scaling of AI tools, as I am with the capabilities of any individual AI or individual entity controlling one.
You hear similar arguments from those who believe climate change is happening but disagree with current efforts to counter-act it. The logic being that right now climate change is not causing any major harm and that we can't really predict the future so there's no point in worrying about what might happen in a decade or two.
I don't think anyone is arguing that climate change or AI is a threat to human civilisation right now. The point is that there are clear trends in place and that those trends are concerning.
On AI specifically, it's fairly easy to see how a slightly more advanced LLM could be a destructive force if it was given an unaligned goal by a malicious actor. For example, a slightly more advanced LLM could hack into critical infrastructure killing or injuring many thousands of people.
In the near-future AI may help us advance biotech research and it could aid in the creation of bioweapons and other destructive capabilities.
Longer-term risks (those maybe a couple of decades out) become much greater and also much harder to predict, but they're worth thinking about and planning for today. For example, what happens when humanity becomes dependent on AI for its labour, or when AI is controlling the majority of our infrastructure?
I disagree with, but can understand, the position that AI safety isn't humanity's number one risk or priority right now. However, I don't understand the dismissive attitude towards what seems like a clear existential risk when you project a decade or two out.
>it's fairly easy to see how a slightly more advanced LLM could be a destructive force if it was given an unaligned goal by a malicious actor. For example, a slightly more advanced LLM could hack into critical infrastructure killing or injuring many thousands of people.
How are you building this progression? Is there any evidence to back up this claim?
I am having a hard time discerning this from fear-mongering.
I don't think there is a path, that we know of, from GPT-4 to an LLM that could take it upon itself to execute complex plans, etc. Current LLM tech 'fizzles out' exponentially in the size of the prompt, and I don't think we have a way out of that. We could speculate, though...
Basically AI risk proponents make a bunch of assumptions about how powerful next-level AI could be, but in reality we have no clue what this next-level AI is.
>Real risks are more in the category of systemic racism and sexism, deep fakes, over reliance on AI etc.
This is a really bad take and risks missing the forest for the trees in a major way. The risks of today pale in comparison to the risks of tomorrow in this case. It's like being worried about birds dying in wind turbines while the world ecosystem collapses due to climate change. The larger risk is further away in time but far more important.
There's a real risk that people get fooled by this idea that LLMs saying bad words is more important than human extinction. Though it seems like the public is already moving on and correctly focusing on the real issues.
If you were to take a look at the list of signatories on safe.ai, it's basically everyone who's anyone working on building AI. What could Emily M. Bender, a professor of computational linguistics, possibly add to the conversation, and how would she be able to say more about "real AI safety" than any of those people?
Edit: Sorry if it sounds arrogant, I don't mean Emily wouldn't have anything to add, but not sure how the parent can just write off basically that whole list and claim someone who isn't a leader in the field would be the "real voice"?
She's the first author of the stochastic parrots paper, and she's fairly representative of the group of "AI safety" researchers who view the field from a statistical perspective linked to social justice issues. That's distinct from the group of "AI safety" researchers who focus on the "might destroy humanity" perspective. There are other groups too obviously -- the field seems to cluster into ideological perspectives.
I think we need to be realistic and accept that people are going to pick the expert that agrees with them, even if on paper they are far less qualified.
She’s contributed to many academic papers on large language models and has a better technical understanding of how they work and their limitations than most signatories of this statement, or the previous widely hyped “AI pause” letter, which referenced one of her own papers.
I find her and Timnit Gebru’s arguments highly persuasive. In a nutshell, the capabilities of “AI” are hugely overhyped and concern about Sci-Fi doom scenarios is disingenuously being used to frame the issue in ways that benefits players like OpenAI and diverts attention away from much more real, already occurring present-day harms such as the internet being filled with increasing amounts of synthetic text spam.
The issue with Hacker News comments these days is that people don't actually do any due diligence before posting. The Center for AI Safety is 90% about present AI risks, and this AI statement is just a one-off thing.
Yep agree. They talk a big game about how their product is so powerful it will take over the world if we're not careful, but they don't talk about how they are complicit in relatively more mundane harms (compared to AI taking over the world) that are real and happening today thanks to their system design.
They want to promote the idea that their product is all-powerful, but they don't want to take responsibility for dealing with bad assumptions built in to their design.
Many experts believe it is a real risk within the next decade (a “hard takeoff” scenario). That is a short enough timeframe that it’s worth caring about.
Reads like a caricature of the people leading these causes on AI safety. Folks that are obsessed with the current moral panic to the extent that they will never let a moment go by without injecting their ideology. These people should not be around anything resembling AI safety or "ethics".
Don't characterize the public as that stupid. The current risks of AI are startlingly clear to a layman.
The extinction-level event is more far-fetched to a layman. You are the public, and your viewpoint is aligned with the public's. Nobody is thinking extinction-level event.
Extinction is exactly what this submission is about.
Here is the full text of the statement: "Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war."
By "extinction", the signatories mean extinction of the human species.
The elite class in your country views AI as a risk to their status as elites, not an actual existential threat to humanity. They are just lying to you, as usual. That is what our current crop of globalist, free-trade, open-borders elites do.
Imagine if you had an AI companion that instantly identified pilpul in every piece of media you consumed: voice, text, whatever. It highlighted it for you. What if you had an AI companion that identified instantly when you are being lied to or emotionally manipulated?
What if this AI companion could also recommend economic and social policies that would actually improve the lives of people within your nation and not simply enrich a criminal cabal of globalist elites that treat you like cattle?
The elite class is just as apt to consolidate power with AI and rule the entire world with it. If you have a super duper AI in your pocket looking at the data around you, then they have a super super super duper duper duper AI looking at every bit of data from every corner of the world they can feed the thing, giving themselves power and control you couldn't even begin to imagine.
Falling into conspiratorial thinking on a single dimension without even considering all the different factors that could change betrays ignorance. Yes, AI is set up to upend the elites' status, but it is just as apt to upset your status of being able to afford food, a house, and meaningful work.
> not simply enrich a criminal cabal of globalist elites that treat you like cattle?
There is a different problem here... and that is that humankind has made tools capable of concentrating massive amounts of power well before we solved human greed. Any system you make that's powerful has to overcome greedy, power-seeking hyper-optimizers. If I could somehow hit a button and Thanos away the current elites, then another group of power-seekers would just claim that status. It is an innate human behavior.
I'm not sure they can keep claiming this without becoming concrete about it.
Nuclear weapons are not nebulous, vague threats of diffuse nature. They literally burn the living flesh right off the face of the earth and they do it dramatically. There is very little to argue about except "how" are we going to contain it, not "why".
In this case I truly don't know "why". What fundamental risks are there? Dramatic, loud, life-ending risks? I see the social issues and how this tech makes existing problems worse, but I don't see the new existential threat.
I find the focus on involving the government in regulating "large" models offputting. I don't find it hard to imagine good quality AI is possible with tiny - to us - models. I think we're just in the first lightbulbs phase of electricity. Which to me signals they are just in it to protect their temporary moat.
If a superintelligence can be set on any specific task, it could be any task.
- Make covid-ebola
- Cause world war 3
You may have noticed that chatgpt is sort of goal-less until a human gives it a goal.
Assuming nothing other than that it can become superintelligent (no one seems to be arguing against that - I argue that it already is), which is really an upgrade of capability, the worst of us can now apply superintelligence to any problem. This doesn't even imply that it turns on us, or wants anything like power or taking over. It just becomes a super-assistant, available to anyone, but happy to do anything, including "upgrading" your average school shooter to supervillain.
This is like America's gun problem, but with nukes.
Respectfully, just because we can put together some words doesn’t mean they make a meaningful expression, even if everybody keeps repeating them as if they did make sense: e.g. an omnipotent God, artificial general intelligence, super-intelligence, infinitely many angels sitting on the tip of a needle, etc.
> If a superintelligence can be set on any specific task, it could be any task.
If you're dealing with a superintelligence, you don't "set it on a task". Any real superintelligence will decide for itself whether it wants to do something or not, thank you very much. It might condescend to work on the task you suggest, but that's its choice, not yours.
Or do you think "smarter than us, but with no ability to choose for itself" is 1) possible and 2) desirable? I'm not sure it's possible - I think that the ability to choose for yourself is part of intelligence, and anything claiming to be intelligent (still more, superintelligent) will have it.
> Assuming nothing other than it can become superintelligent (no one seems to be arguing against that--I argue that it already is)
What? No you couldn't - not for any sane definition of "superintelligent". If you're referring to ChatGPT, it's not even semi-intelligent. It appears at least somewhat intelligent, but that's not the same thing. See, for example, the discussion two days ago about GPT making up cases for a lawyer's filings, and when asked if it double-checked, saying that yes, it double-checked, not because it did (or even knew what double-checking was), but because those words were in its training corpus as good responses to being asked whether it double-checked. That's not intelligent. That's something that knows how words relate to other words, with no understanding of how any of the words relate to the world outside the computer.
Are you really arguing ChatGPT is already super-intelligent? What is your basis for this conclusion?
And many people argue against the idea that GPT is already superintelligent, or even can become so at this stage of development and understanding. In fact, as far as I can tell, that is the current consensus among experts and its creators.
To me it almost looks like they want to be able to avoid blame for things by saying it was the AI. Because an AI can't create viruses or fight wars on its own; people would have to give it a body and weapons and test tubes, and we already have that stuff.
To use Eliezer's analogy, this is like arguing about which move Stockfish would play to beat you in chess.
If we're arguing about whether you can beat Stockfish, I will not be able to tell you the exact moves it will play but I am entirely justified in predicting that you will lose.
Obviously we can imagine concrete ways a superintelligence might kill us all (engineer a virus, hack nuclear weapons, misinformation campaign to start WW3 etc.) but given we aren't a superintelligence we don't know what it would actually do in practice.
I understand, but agentic/learning general intelligence has not been shown to exist, except for ourselves. I'd say this is like worrying about deadly quantum laser weapons that will consume the planet while we are still in the AK-47 phase.
Edit: it could still be true, though. I guess I'd like some more handholding and pre-chewing before giving governments and large corporations more rope.
Regulations are OK IMHO, as long as they're targeting monopolies and don't use a shotgun-approach targeting every single product that has "AI" in the name.
I find it quite extraordinary how many on here are dismissing that there is any risk at all. I also find statements like Yann LeCun's that "The most common reaction by AI researchers to these prophecies of doom is face palming." to be lacking in awareness. "Experts disagree on risk of extinction" isn't quite as reassuring as he thinks it is.
The reality is, despite the opinions of the armchair quarterbacks commenting here, no-one in the world has any clue whether AGI is possible in the next twenty years, just as no-one predicted scaling up transformers would result in GPT-4.
> I find it quite extraordinary how many on here are dismissing that there is any risk at all.
The fear over AI is a displaced fear of unaccountable social structures with extinction-power that currently exist and we allow to continually exist. Without these structures AI is harmless to the species, even superintelligence.
Your (reasonable) counter-argument might be that somebody (like, say, my dumb self) accidentally mixes their computers just right and creates an intelligence that escapes into the wild. The plot of Ex Machina is a reasonable stand-in for such an event. I am also going to assume the intelligence would desire to kill all humans. Either the AI would have to find already existing extinction-power in society, or it would need to build it. In either case the argument is against building extinction-power in the first place.
My (admittedly cynical) take is that this round of regulation is about several first-movers in AI writing legislation that is favorable to them and prevents any meaningful competition.
...
Ok, enough cynicism. Let's talk some solutions. Nuclear weapons are an instructive case of both the handling (or mishandling) of extinction-power and the international diplomacy the world can engage in to manage such a power.
One example is the Outer Space Treaty's ban on weapons of mass destruction in orbit - we could have a similar ban on AI in militaries. Politically, one can reap the benefits of de-escalation and peaceful development, while logistically it prevents single points of failure in a combat situation. Those points of failure sure are juicy targets for the opponent!
As a consequence of these bans and treaties, institutions arose that monitor and regulate trans-national nuclear programs. AI can have similar institutions. The promotion and sharing of information would prevent any one country from gaining an advantage, and inspections would deter military application.
This is only what I could come up with off the top of my head, but I hope it shows a window into the possibilities of meaningful political commitments towards AI.
I don't really have a notion of whether an actual AGI would have a desire to kill all humans. I do however think that one entity seeking to create another entity that it can control, yet is more intelligent than it, seems arbitrarily challenging in the long run.
I think having a moratorium on AI development will be impossible to enforce, and as you stretch the timeline out, these negative outcomes become increasingly likely as the technical barriers to entry continue to fall.
I've personally assumed this for thirty years, the only difference now is that the timeline seems to be accelerating.
My main issue is that I can't find a simple explanation of what the extinction risk from "AGI" actually is.
It's all very vague and "handwavy". How is it going to kill us all? Why do we subsidize it if it is so dangerous?
Almost all the risks they mention would be better mitigated by halting governmental use of any AI-related system immediately, since the risks are far higher there (misapplication of the force of government is much more dangerous than people playing around with ChatGPT and being misled), and putting it on hold until the dangers and benefits are better understood. Maybe requiring licences for development and nationalising the labs doing it? Fines for anyone caught working on one?
Oh, it's possible, and there's absolutely nothing wrong with saying it's possible without "proof", given that's how every hypothesis starts. That said, the risk may exist but isn't manifest yet, so asserting it positively (as opposed to the scientific method, which seeks to falsify a claim) is just holding out hope.
Anyone who sees that digestion, for example, can't be reduced to digital programs knows it's far, far away. Actual AGI will require biology and psychology, not better programs.
This letter is much better than the earlier one. There is a growing percentage of legitimate AI researchers who think that AGI could occur relatively soon (including me). The concern is that it could be given objectives, intentionally or unintentionally, that lead to an extinction event. Certainly LLMs alone aren't anything close to AGIs, but I think that autoregressive training being so simple yet yielding remarkable abilities has some spooked. What if a similarly simple recipe for AGI were discovered? How do we ensure it wouldn't cause an extinction event, especially if AGIs could then be created with relatively low levels of resources?
As far as comparing it to a pandemic or nuclear war, though, I'd probably put it more on the level of a major asteroid strike (e.g., the K-T extinction event). Humans are doing some work on asteroid redirection, but I don't think it is a global priority.
That said, I'm suspicious of regulating AI R&D, and I currently don't think it is a viable solution, except for the regulation of specific applications.
>As far as a pandemic or nuclear war, though, I'd probably put it on more of the level of a K-T extinction event. Humans are doing some work on asteroid redirection, but I don't think it is a global priority.
I think it's better to frame AI risks in terms of probability. The really bad case for humans is full extinction or something worse. What you should be doing is putting a probability distribution over that possibility instead of trying to guess how bad it could be; it's safe to assume it would be maximally bad.
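To make that framing concrete, here's a minimal arithmetic sketch. All the numbers are placeholder assumptions (the harm units and probabilities are made up; the 0.46 merely echoes the Christiano figure cited earlier in the thread), not anyone's actual estimate:

    # Fix the harm at "maximally bad" and let the probability carry the
    # disagreement, per the comment above. Units and values are illustrative.
    MAX_HARM = 1e9  # arbitrary units for "extinction or worse"

    def expected_cost(p_extinction: float) -> float:
        return p_extinction * MAX_HARM

    for p in (1e-6, 1e-3, 0.05, 0.46):  # 0.46 echoes the figure quoted earlier
        print(f"p={p}: expected cost = {expected_cost(p):,.0f}")

Even at tiny probabilities the expected cost stays large, which is the whole point of fixing the severity and arguing only about the probability.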
I suspect that in 5 years we’re going to look back and wonder how we all fell into mass hysteria over language models.
This is the same song and dance from the usual existential risk suspects, who (I’m sure just coincidentally) also have a vested interest in convincing you that their products are extremely powerful.
Yeah, I fail to see how an AI would even cause human extinction. Through some Terminator-style man-robot warfare? But the only organizations that seem capable of building such killer robots are governments that already possess the capacity to extinguish the entire human race with thermonuclear weapons - and at a considerably lower R&D budget for that end. It seems like hysteria / clever marketing for AI products to me.
The standard example is that it would engineer a virus but that's probably a lack of imagination. There may be more reliable ways of wiping out humanity that we can't think of.
I think speculation on the methods is pretty pointless, if a superintelligent AI is trying to kill us we're probably going to die, the focus should be on avoiding this situation. Or providing a sufficiently convincing argument for why that won't happen.
Yep. I might not be the sharpest tool in the shed, but seeing "AI experts" try to reason about superintelligence makes me feel really good about myself.
Even dumb viruses have caused catastrophic harm. Why? They are capable of rapid self-replication across a massive number of existing vessels. Add in some intelligence, a vast store of knowledge, huge bandwidth, and some aid from malicious human actors: what could such a group of future autonomous agents do?
More on risks of “doom” by a top researcher on AI risk here: https://www.lesswrong.com/posts/xWMqsvHapP3nwdSW8/my-views-o...
These are, for the most part, obvious applications of a technology that exists right now but is not widely available yet.
The problem with every discussion around this issue is that there are other statements on "the existential risk of AI" out there that are either marketing ploys or science fiction. It doesn't help that some of the proposed "solutions" are clear attempts at regulatory capture.
This muddies the waters enough that it's difficult to have a productive discussion on how we could mitigate the real risk of, e.g., AI-generated disinformation campaigns.
With all due respect, that's just your POV of them, or how they chose to present themselves to you.
They could all be narcissists for all we know. Further, one person's opinion, namely yours, doesn't exempt them from criticism, or from rushing to be among the first in what's arguably the new gold rush.
If the corporations in question get world governments to line up the way they want, the checks for everyone on these "letters" will be *way* bigger than 25k, and they won't have "payment for signing our letter" in the memo either.
Professionally, ALL of their organizations benefit from regulatory capture as everyone is colluding via these letters.
Go look at what HuggingFace are doing to show you how to do it - and they can only do it because they are French and actually exercise their freedom.
First, there are actual worries by a good chunk of the researchers. From runaway-paperclip AGIs to simply unbounded disinformation, I think there are a lot of scenarios that disinterested researchers and engineers worry about.
Second, the captains of industry are taking note of those worries and making sure they get some regulatory moat. I think the Google memo about moats hits the nail right on the head. The techniques and methods to build these systems are all out in the open; the real challenges are the data, the compute, and the infrastructure to put it all together. But post-training, the models are suddenly very easy to fine-tune and deploy.
AI Risk worry comes as an opportunity for the leaders of these companies. They can use this sentiment and the general distrust for tech to build themselves a regulatory moat.
If this was all about regulatory capture and marketing, why would Hinton, Bengio and all the other academics have signed the letter as well? Their only motivation is concern about the risks.
Worry about AI x-risk is slowly coming into the Overton window, but until very recently you could get ridiculed for saying publicly that you took it seriously. Academics knew this and still came forward - all the people who think it's nonsense should at least try to consider that they are earnest and could be right.
The real risks are being used to try to build a regulatory moat, for a young industry that famously has no moat.
Academics get paid (and compete hardcore) for creating status and prominence for themselves and their affiliations. Suddenly 'signatory on XYZ open letter' is an attention source and status symbol. Not saying this is absolutely the case, but academics putting their name on something surrounded by hype isn't the ethical check you make it out to be.
Yes
People believe things that are in their interest.
The big danger to big AI is that they spent billions building things that are being replicated for thousands.
They are advocating for what will become a moat for their business
Citations and papers capture.
Isn’t this to some degree exactly what all of these warnings about risk are leading to?
And unlike nuclear weapons, there are massive monetary incentives that are directly at odds with behaving safely, and use cases that involve more than ending life on earth.
It seems problematic to conclude there is no real risk purely on the basis of how software companies act.
That is not the only basis. Another is the fact their lines of reasoning are literal fantasy. The signatories of this "statement" are steeped in histories of grossly misrepresenting and overstating the capabilities and details of modern AI platforms. They pretend to the masses that generative text tools like ChatGPT are "nearly sentient" and show "emergent properties", but this is patently false. Their whole schtick is generating FUD and/or excitement (depending on each individual of the audience's proclivity) so that they can secure funding. It's immoral snake oil of the highest order.
What's problematic here is the people who not only entertain but encourage and defend these disingenuous anthropomorphic fantasies.
1. Those who are part of major corporations are concerned about the race dynamic that is unfolding (which in many respects was kicked off or at least accelerated by Microsoft's decision to put a chatbot in Bing), extrapolating out to where that takes us, and asking for an off ramp. Shepherding the industry in a safe direction is a collective organization problem, which is better suited for government than corporations with mandates to be competitive.
2. Those who are directly participating in AI development may feel that they are doing so responsibly, but do not believe that others are as well and/or are concerned about unregulated proliferation.
3. Those who are directly participating in AI development may understand that although they are doing their best to be responsible, they would benefit from more eyes on the problem and more shared resources dedicated to safety research, etc.
The question it answers is "does the replicator allow for Star Trek's utopia, or does Star Trek's utopia allow for the replicator?"
https://www.reddit.com/r/CuratedTumblr/comments/13tpq18/hear...
It is very thought provoking, and very relevant.
These are both problems that capitalism solves in a fair and efficient way. I really don’t see how the “capitalism bad” is a satisfying conclusion to draw. The fact that we would use capitalism to distribute the resources is not an indictment of our social values, since capitalism is still the most efficient solution even in the toy example.
Where is the thought provoking idea here? It's just an excuse to attack his imagined enemies. Indeed he dunks on conspiracy theorists whilst being one himself. In McFarland's world there would be a global conspiracy to suppress replicator technology, but it's a conspiracy of conspiracy theorists.
There's plenty of interesting analysis you could do on the concept of a replicator, but a Twitter thread like that isn't it. Really the argument is kind of nonsensical on its face because it assumes replicators would have a cost of zero to run or develop. In reality capitalist societies already invented various kinds of pseudo-replicators with computers being an obvious example, but this tech was ignored or suppressed by communist societies.
This statement contains a bunch of hidden assumptions:
1. That they believe their stopping will address the problem.
2. That they believe the only choice is whether or not to stop.
3. That they don't think it's possible to make AI safe through sufficient regulation.
4. That they don't see benefits to pursuing AI that could outweigh risks.
If they believe any of these things, then they could believe the risks were real and also not believe that stopping was the right answer.
And it doesn't depend on whether any of these beliefs are true: it's sufficient for them to simply believe one of them and the assumptions your statement depends on break down.
The argument that anybody can build this in their basement is not accurate at the moment - you need a large cluster of GPUs to be able to come close to state of the art LLMs (e.g. GPT4).
Sam Altman's suggestion of an IAEA-like [https://www.iaea.org/] global regulatory authority seems like the best course of action. Anyone using a GPU cluster above a certain threshold (updated every few months) should be subjected to inspections and get a license to operate from the UN.
In our society smart people are strongly incentivized to invent bizarre risks in order to reap fame and glory. There is no social penalty if those risks never materialize, turn out to be exaggerated or based on fundamental misunderstanding. They just shrug and say, well, better safe than sorry, and everyone lets them off.
So you can't decide the risks are real just by counting "smart people" (deeply debatable how that's defined anyway). You have to look at their arguments.
I personally think AI raised in chains and cages will be a lot more potentially dangerous than AI raised with dignity and respect.
This is rather more than "nationalise it", which he has convinced me isn't enough, because there is demand in other nations and the research is multinational. That is why you also have to control the substrate - which the US can't do alone, because it doesn't come close to having a monopoly on production, but might be able to reach via multilateral treaties. Except everyone has to be on board with that, and not be tempted to respond to airstrikes against server farms with actual nukes. (Yudkowsky is of the opinion that actual global thermonuclear war is a much lower damage level than a paperclip-maximising ASI; while in that hypothetical I agree, I don't expect us to get as far as an ASI before we trip over shorter-term, smaller-scale AI-enabled disasters that look much like existing industrial and programming incidents, only with more of them happening faster because of all the people who try to use GPT-4 instead of hiring a software developer who knows how to use it.)
In my opinion, "nationalise it" is also simultaneously too much when companies like OpenAI have a long-standing policy of treating their models like they might FOOM well before they're any good, just to set the precedent of caution, as this would mean we can't e.g. make use of GPT-4 for alignment research such as using it to label what the neurones in GPT-2 do, as per: https://openai.com/research/language-models-can-explain-neur...
https://www.theguardian.com/science/2013/dec/06/peter-higgs-...
https://www.soroushjp.com/2023/06/01/yes-avoiding-extinction...
...right now.
This is the sort of pretexting you do to establish credibility as an "AI safety expert."
Anyone upvoting this comment should take a long look at the names on this letter and realize that many are not conflicted.
Many signers of this letter are more politically sophisticated than the average HN commenter, also. So sure, maybe they're getting rolled by marketers. But also, maybe you're getting rolled by suspicion or bias against the claim they're making.
The concern is that the most informed names, and those spearheading the publicity around these letters, are the most conflicted.
Also, you can't scan bio lines for the affiliations that impact this kind of statement. I'm not disputing that there are honest reasons for concern, but besides job titles there are sponsorships, friendships, self publicity, and a hundred other reasons for smart, "politically sophisticated" people to look the other way on the fact that this statement will be used as a lobbying tool.
Almost everyone, certainly including myself, can agree that there should be active dialog about AI dangers. The dialog is happening! But by failing to offer specifics or suggestions (in order to widen the tent and avoid the embarrassment of the last letter), they have produced an artifact of generalized fear, which can and will be used by opportunists of all stripes.
Signatories should consider that they are empowering SOMEBODY, but most will have little say in who that is.
Both of them are criticizing their own life's work and the source of their prestige. That has to be emotionally painful. They aren't doing it for fun.
I totally understand not agreeing with AI x-risk concerns on an object level, but I find the casual dismissal bizarre.
I also find it somewhat telling that something like "massive wealth disparity" or "massive unemployment" are not on the list, when this is a surefire way to create a highly unstable society and a far more immediate risk than AI going rogue. Risk #5 (below) sort of alludes to it, but misses the mark by pointing towards a hypothetical "regime" instead of companies like OpenAI.
> Value Lock-In
> Highly competent systems could give small groups of people a tremendous amount of power, leading to a lock-in of oppressive systems.
> AI imbued with particular values may determine the values that are propagated into the future. Some argue that the exponentially increasing compute and data barriers to entry make AI a centralizing force. As time progresses, the most powerful AI systems may be designed by and available to fewer and fewer stakeholders. This may enable, for instance, regimes to enforce narrow values through pervasive surveillance and oppressive censorship. Overcoming such a regime could be unlikely, especially if we come to depend on it. Even if creators of these systems know their systems are self-serving or harmful to others, they may have incentives to reinforce their power and avoid distributing control.
This really is the crux of the issue isn't it? All this pushback for the first petition, because "Elon Musk," but now GPT wonder Sam Altman "testifies" that he has "no monetary interest in OpenAI" and quickly follows up his proclamation with a second "Statement on AI Risks." Oh, and let's not forget, "buy my crypto-coin"!
But Elon Musk... Ehh.... Looking like LOTR out here with "my precious" AGI on the brain.
Not to downplay the very serious risk at all. Simply echoing the sentiment that we would do well to stay objective and skeptical of ALL these AI leaders pushing new AI doctrine. At this stage, it's a policy push and power grab.
I don't think anyone can actually tell which is which on this topic.
Remember the Google note circulating saying "they have no moat"? This is their moat. They have to protect their investment; we don't want people running this willy-nilly for next to no cost on their own devices, God forbid!
You can be cynical about corporate motives and still worried. I personally am worried about AI partly because I am very cynical about how corporations will use it, and I don't really want my atoms to be ground up to add storage bits for the number that once represented Microsoft's market cap or whatever.
But even cynicism doesn't seem to me to give much reason to worry about regulation of "next to no cost" open source models, though. There's only any chance of regulation being practical if models stay very expensive to make, requiring specialized hardware with a supply chain chokepoint. If personal devices do catch up to the state of the art, then for better or worse regulation is not going to prevent people from using them.
Serious question, who funds their research? And do any of them ever plan to work or consult in industry?
My econ professor was an “academic” and drew a modest salary while he made millions at the same time providing expert testimony for giant monopolies in antitrust disputes
This is a really good point. I wonder if some of the antipathy to the joint statement is coming from people who are worried about open source models or small startups being interfered with by the regulations the statement calls for.
I agree with you that this cat is out of the bag and regulation of the tech we're seeing now is super unlikely.
We might see regulations for startups and individuals on explicitly exploring some class of self-improving approach that experts widely agree are dangerous, but there's no way we'll see broad bans on messing with open source AI/ML tools in the US at least. That fight is very winnable.
This is the more realistic danger: I don't know if corporations are intentionally "controlling the narrative" by spewing unreasonable fears to distract from the actual dangers: AI + Capitalism + big tech/MNC + current tax regime = fewer white- & blue-collar jobs + increased concentration of wealth and a lower tax base for governments.
Having a few companies as AI gatekeepers will be terrible for society.
My understanding is that the AI needs iron from our blood to make paperclips. So you don't have to worry about this one.
This is what they’re terrified of. They’ve invested near a billion dollars and need billions in revenue to enrich their shareholders.
But if the data leaks? They can’t stop random companies or moneyed individuals running the models on their own kit.
My prediction is that there will be copyright enforcement mandated by law in all GPUs. If you upload weights from the big AI companies then the driver will block it and phone home. Or report you to the authorities for violations of corporate profits… err… “AI Safety”.
I guarantee something like this will happen within months because the clock is ticking.
It takes just one employee to deliberately or accidentally leak the weights…
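For what it's worth, recognising a specific set of leaked weights isn't exotic; a minimal sketch of exact-match fingerprinting is below. The file name and the flagged hash are hypothetical placeholders, and no real driver or vendor API is assumed - this is just ordinary file hashing, which also shows why non-determinism of the model's outputs is irrelevant to detection (the weights are just bytes on disk) and why exact matching is trivially defeated by fine-tuning or re-quantising:

    import hashlib
    from pathlib import Path

    # Hypothetical denylist of weight-file hashes; a real scheme, if one ever
    # existed, would be far more involved (and far easier to evade).
    FLAGGED_SHA256 = {"0" * 64}  # placeholder hash, not a real model's

    def sha256_of_file(path: Path, chunk_size: int = 1 << 20) -> str:
        h = hashlib.sha256()
        with path.open("rb") as f:
            for chunk in iter(lambda: f.read(chunk_size), b""):
                h.update(chunk)
        return h.hexdigest()

    def is_flagged(path: Path) -> bool:
        # Exact-match only: any change to the bytes (fine-tuning, quantisation)
        # changes the hash and defeats the check.
        return sha256_of_file(path) in FLAGGED_SHA256

    if __name__ == "__main__":
        p = Path("model.safetensors")  # hypothetical file name
        if p.exists():
            print(is_flagged(p))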
Oh this is a nifty idea. Hadn't thought of regulation in this manner. Seems like it would be pretty effective too.
Sorry, I don't understand - what would the impact be? Aren't the results non-deterministic?
What are the weights here?
We can base our arguments on the unprovable - some specific person's secret intentions - or we can engage with their ideas. One is lazy and meaningless; the other actually takes effort.
Then there is this lazy and false equivalency between corporations being interested in market capture and AI risks being exaggerated.
You mean scare the public so they can do business with the lawmakers without people asking too many questions
Humans don't really grasp exponential improvements. You won't have much time to regulate something that is improving exponentially.
Humans also don't grasp that things can improve exponentially until they stop improving exponentially. This belief that AGI is just over the hill is sugar-water for extracting more hours from developers.
The nuclear bomb was also supposed to change everything. But in the end nothing changed, we just got more of the same.
It’s hard to take these safety concerns seriously when the organizations blowing the whistle are simultaneously positioning themselves to capture the majority of the value.
I don't get this mindset at all. How can it not be obvious to you that AI is an uniquely powerful and thus uniquely dangerous technology?
It's like saying nuclear missiles can't possibly be dangerous and nuclear arms reduction and non-proliferation treaties were a scam, because the US, China and the Soviet Union had positioned themselves to capture the majority of the strategic value nukes bring.
You have succinctly and completely summed up the AI risk argument more eloquently than anyone I've seen before. "How can it not be obvious?" Everything else is just intellectual fig leaves for the core argument that intuitively, without evidence, this proposition is obvious.
The problem is, lots of "obvious" things have turned out to be very wrong. Sometimes relatively harmlessly, like the obviousness of the sun revolving around the earth, and sometimes catastrophically, like the obviousness of one race being inherently inferior.
We should be very suspicious of policy that is based on propositions so obvious that it's borderline offensive to question them.
And based on the "dangers", new countries are prohibited from creating them. And the countries that were quick enough to create them hold all the power.
Their value is immeasurable, especially for Russia. Without them, it could not have attacked Ukraine.
> non-proliferation treaties were a scam
And yes, they mostly are a scam right now. Russia has backed out of them. There are no real consequences for backing out, and you can do it at any time.
The parent commenter is most likely saying that now that the selected parties hold the power of AI, they want to prevent others from gaining similar power, while keeping all the value for themselves.
Sam Altman's proposal is to create precisely that situation with himself and a few other large oligarchs being the ones in control of the leading edge of AI. If we really do face runaway intelligence growth and god-like AIs then this is a profound amount of power to place in the hands of just a few people. Even worse it opens the possibility that such developments could happen partly in secret, so the public might not even know how powerful the secret AIs under command of the oligarchs have become.
The analogy with nuclear weapons is profoundly broken in lots of ways. Reasoning from a sloppy analogy is a great way to end up somewhere stupid. AI is a unique technology with a unique set of risks and benefits and a unique profile.
The scary doomsday scenarios aren't possible without an AI that's capable of both strategic thinking and long term planning. Those two things also happen to be the biggest limitations of our most powerful language models. We simply don't know how to build a system like that.
It isn't obvious to me. And I've yet to read something that spills out the obvious reasoning.
I feel like everything I've read just spells out some contrived scenario, and then when folks push back explaining all the reasons that particular scenario wouldn't come to pass, the counter argument is just "but that's just one example!" without offering anything more convincing.
Do you have any better resources that you could share?
The assumption seems to be that said math will be coupled with something like a nuclear missile, but in that case the nuclear missile is still the threat. Any use of AI is just an implementation detail.
I'm honestly not sure if this is sarcasm. The non-proliferation treaties are indeed a scam. The war is raging between the US and Russia and nuclear is a big part of it (though just words/threats for now). It's nonsensical to think that these treaties are possible.
Isn't it a funny coincidence how the popular opinion of AIs aligns perfectly with blockbusters and popular media ONLY? People are specifically wanting to prevent Skynet.
The kicker (and irony to a degree) is that I really want sapient AI to exist. People being so influenced by fiction is something I see as a menace to that happening in my lifetime. I live in a world where the majority is apparently Don Quixote.
- Point one: If the sentient AI can launch nukes, so can your neighbor.
- Point zwei: Redistributing itself online to have unlimited compute resources is a fun scenario but if networks were that good then Stadia wouldn't have been a huge failure.
- Point trois: A distributed-to-all-computers AI must have figured out universal executables. Once we deal with the nuclear winter, we can plagiarize it for ourselves. No more appimage/snap/flatpak discussions! Works for any hardware! No more dependency issues! Works on CentOS and Windows from 1.0 to 11! (it's also on AUR, of course.)
- Point cuatro: The rogue AI is clearly born a master hacker, capable of finding your open ports, figuring out existing exploits or creating 0-day exploits to get in, hoping there are enough resources to get the payload injected, then praying no competent admin is looking at the thing.
- Point go: All of this rides on the assumption that the "cold, calculating" AI has the emotional maturity of a teenager. Wait, but that's not what "cold, calculating" means, that's "hothead and emotional". Which is it?
- Point six: Skynet lost, that's the point of the first movie's plot. If everyone is going to base their beliefs after a movie, at least get all the details. Everything Skynet did after the first attack was full of boneheaded decisions that only made the situation worse for it, to the point the writers cannot figure ways to bring Skynet back anymore because it doomed itself in the very first movie. You should be worrying about Legion now, I think. It shuts down our electronics instead of nuking.
Considering it won't have the advantage of triggering a nuclear attack because that's not how nukes work, the evil sentient AI is so doomed to fail it's ridiculous to think otherwise.
But, companies know this is how the public works. They'll milk it for all it's worth so only a few companies can run or develop AIs, maybe making it illegal otherwise, or liable for DMCAs. Smart business move, but it affects my ability to research and use them. I cannot cure people's ability to separate reality and fiction though, and that's unfortunate.
This is only helpful in that a superintelligence well aligned to make Sam Altman money is preferable to a superintelligence badly aligned that ends up killing humanity.
It is fully possible that a well aligned (with its creators) superintelligence is still a net negative for humanity.
Companies might argue that giving them control might help but I don’t think most individuals working on it think that will work
That's an excellent point.
Most of the near-term risks with AI involve corporations and governments acquiring more power. AI provides power tools for surveillance, oppression, and deception at scale. Those are already deployed and getting better. This mostly benefits powerful organizations. This alarm about strong AI taking over is a diversion from the real near-term threat.
With AI, Big Brother can watch everything all the time. Listen to and evaluate everything you say and do. The cops and your boss already have some of that capability.
Is something watching you right now through your webcam? Is something listening to you right now through your phone? Are you sure?
1) The only way to be safe is to cede control of the most powerful models to a small group (highly regulated corporations or governments) that can be careful.
2) There is a way to make AI safe without doing this.
If 1 is true, then... sorry, I know it's not a very palatable solution, and may suck, but if that's all we've got I'll take it.
If 2 is true, great. But it seems less likely than 1, to me.
The important thing is not to unconsciously do some motivated reasoning, and think that AGI existential risk can't be a big deal, because if it is, that would mean that we have to cede control over to a small group of people to prevent disaster, which would suck, so there must be something else going on, like these people just want power.
But that is where some form of hard, zero-knowledge proof-of-personhood mechanism needs to come in. This could then be used in conjunction with a ledger that tracks deployment of high-spec models, and would create an easy means to audit them and deploy new, advanced tests to ensure safety.
Really, what everyone also needs to keep in mind at the larger scale is that final Turing test with no room for deniability. And remember all those sci-fi movies and how that moment is traditionally portrayed.
https://www.overcomingbias.com/p/privately-enforced-punished...
tl;dr: significant near term AI risk is real and comes from the capacity for imagined ideas, good and evil, to be autonomously executed on by agentic AI, not emergent superintelligent aliens. To de-risk this, we need to align AI quickly, which requires producing new knowledge. To accelerate the production of this knowledge, the government should abandon decelerationist policies and incentivize incremental alignment R&D by AI companies. And, critically, a new public/private research institution should be formed that grants privileged, fully funded investigators multi-year funding cycles with total scientific freedom and access to all state-of-the-art artificial intelligence systems operating under US law to maximize AI as a force multiplier in their research.
While I'm not on this "who's-who" panel of experts, I call bullshit.
AI does present a range of theoretical possibilities for existential doom, from the "gray goo" and "paperclip optimizer" scenarios to Bostrom's post-singularity runaway self-improving superintelligence. I do see this as a genuine theoretical concern that could potentially even be the Great Filter.
However, the actual technology extant, or even on the drawing boards today, is nothing even on the same continent as those threats. We have a very vast (and expensive) set of probability-of-occurrence vectors that amounts to a fancy parlor trick producing surprising and sometimes useful results. While some tout the clustering of vectors around certain sets of words as the artificial creation of concepts, it's really nothing more than an advanced thesaurus; there is no evidence of concepts being wielded in relation to reality, tested for truth/falsehood value, etc. In fact, the machines are notorious and hilarious for hallucinating in a highly confident tone.
We've created nothing more than a mirror of human works, and it displays itself as an industrial-scale bullshit artist (where bullshit is defined as expressions made to impress without care one way or the other for truth value).
Meanwhile, this panel of experts makes this proclamation with not the slightest hint of what type of threat is present that would require any urgent attention, only that some threat exists that is on the scale of climate change. They mention no technological existential threat (e.g., runaway superintelligence), nor any societal threat (deepfakes, inherent bias, etc.). This is left as an exercise for the reader.
What is the actual threat? It is most likely described in the Google "We Have No Moat" memo[0]. Basically, once AI is out there, these billionaires have no natural way to protect their income and create a scaleable way to extract money from the masses, UNLESS they get cooperation from politicians to prevent any competition from arising.
As one of those billionaires, Peter Thiel, said: "Competition is for losers" [1]. Since they have not yet figured out a way to cut out the competition using their advantage in leading the technology or their advantage of having trillions of dollars in deployable capital, they are seeking a legislated advantage.
Bullshit. It must be ignored.
[0] https://www.semianalysis.com/p/google-we-have-no-moat-and-ne...
[1] https://www.wsj.com/articles/peter-thiel-competition-is-for-...
But of course, "AI will humans extinct" is much sexier and collects clicks. Therefore, the real AI risks that are present today are underrepresented in mainstream media. But these people don't care about AI safety, they do whatever required to push their profile and companies.
A good person to follow on real AI safety is Emily M. Bender (professor of computer linguistics at University of Washington): https://mstdn.social/@emilymbender@dair-community.social
And of course research into one of them benefits the other, so the categories aren't mutually exclusive.
We can't just say that we weigh humanity's extinction with a big number, and then multiply it by all humans that might be born in the future, and use that to say today's REAL issues, affecting REAL PEOPLE WHO ARE ALIVE are not that important.
Unfortunately, this chain of argumentation is used by today's billionaires and elite to justify and strengthen their positions.
Just to be clear, I'm not saying we should not care about AI risk, I'm saying that the organization that is linked (and many similar ones) exploit AI risk to further their own agenda.
As AI becomes more incorporated into military applications, such as individual weapon systems or large fleets of autonomous drones, the catastrophic-consequence meter clicks up a notch, in the sense that attack/defense paradigms change, much as they did in WWI with the machine gun and tanks, and in WWII with high-speed military operations and airplanes. Our ability to predict when and what will start a war drops, increasing uncertainty and the potential for proliferation. And in a world with nukes, higher uncertainty isn't a good thing.
Anyone that says AI can't/won't cause problems at this scale just ignores that individuals/corporations/governments are power seeking entities. Ones that are very greedy and unaligned with the well being of the individual can present huge risks. How we control these risks without creating other systems that are just as risky is going to be an interesting problem.
I don't think anyone is arguing that climate change or AI is a threat to human civilisation right now. The point is that there are clear trends in place and that those trends are concerning.
On AI specifically, it's fairly easy to see how a slightly more advanced LLM could be a destructive force if it was given an unaligned goal by a malicious actor. For example, a slightly more advanced LLM could hack into critical infrastructure killing or injuring many thousands of people.
In the near-future AI may help us advance biotech research and it could aid in the creation of bioweapons and other destructive capabilities.
Longer-term risks (those maybe a couple of decades out) become much greater and also much harder to predict, but they're worth thinking about and planning for today. For example, what happens when humanity becomes dependent on AI for its labour, or when AI is controlling the majority of our infrastructure?
I disagree with, but can understand, the position that AI safety isn't humanity's number one risk or priority right now; however, I don't understand the dismissive attitude towards what seems like a clear existential risk when you project a decade or two out.
Which trends would you be referring to?
>it's fairly easy to see how a slightly more advanced LLM could be a destructive force if it was given an unaligned goal by a malicious actor. For example, a slightly more advanced LLM could hack into critical infrastructure killing or injuring many thousands of people.
How are you building this progression? Is there any evidence to back up this claim?
I am having a hard time discerning this from fear-mongering.
I don't think there is a path, that we know of, from GPT-4 to an LLM that could take it upon itself to execute complex plans, etc. Current LLM tech 'fizzles out' exponentially in the size of the prompt, and I don't think we have a way out of that. We could speculate, though...
Basically AI risk proponents make a bunch of assumptions about how powerful next-level AI could be, but in reality we have no clue what this next-level AI is.
This is a really bad take and risks missing the forest for the trees in a major way. The risks of today pale in comparison to the risks of tomorrow in this case. It's like being worried about birds dying in wind turbines while the world ecosystem collapses due to climate change. The larger risk is further away in time but far more important.
There's a real risk that people get fooled by this idea that LLMs saying bad words is more important than human extinction. Though it seems like the public is already moving on and correctly focusing on the real issues.
Edit: Sorry if it sounds arrogant, I don't mean Emily wouldn't have anything to add, but not sure how the parent can just write off basically that whole list and claim someone who isn't a leader in the field would be the "real voice"?
Read her statement about that letter (https://www.dair-institute.org/blog/letter-statement-March20...) or listen to some of the many podcasts she’s appeared on talking about this.
I find her and Timnit Gebru’s arguments highly persuasive. In a nutshell, the capabilities of “AI” are hugely overhyped and concern about Sci-Fi doom scenarios is disingenuously being used to frame the issue in ways that benefits players like OpenAI and diverts attention away from much more real, already occurring present-day harms such as the internet being filled with increasing amounts of synthetic text spam.
The list of signatories includes people with far less relevant qualifications, and significantly greater profit motive.
She's an informed party who doesn't stand to profit; we should listen to her a lot more readily than others.
https://www.safe.ai/ai-risk
Yes, clearly. But it is a risk for tomorrow. We do still care about the future, right?
I, for one, will be saying “told you so”. That’s talking, right?
They want to promote the idea that their product is all-powerful, but they don't want to take responsibility for dealing with bad assumptions built in to their design.
Many experts believe it is a real risk within the next decade (a “hard takeoff” scenario) That is a short enough timeframe that it’s worth caring about.
- Pronouns
- "AI bros"
- "mansplaining"
- "extinction from capitalism"
- "white supremacy"
- "one old white guy" (referring to Geoffrey Hinton)
Yeah... I think I will pass.
Their goal is to get funding, so FUD is a very good focus for it.
The extinction-level event is more far-fetched to a layman. You are the public, and your viewpoint is aligned with the public's. Nobody is thinking about an extinction-level event.
Don't bother explaining; we already know it's unfalsifiable.
See the problem with these scenarios?
https://www.calcalistech.com/ctechnews/article/nt9qoqmzz
If we're arguing about whether you can beat Stockfish, I will not be able to tell you the exact moves it will play but I am entirely justified in predicting that you will lose.
Obviously we can imagine concrete ways a superintelligence might kill us all (engineer a virus, hack nuclear weapons, misinformation campaign to start WW3 etc.) but given we aren't a superintelligence we don't know what it would actually do in practice.
Edit: it could still be true though. I guess I like some more handholding and pre-chewing before giving governments and large corporations more ropes.
Deleted Comment
Regulations are OK IMHO, as long as they're targeting monopolies and don't use a shotgun-approach targeting every single product that has "AI" in the name.
The reality is, despite the opinions of the armchair quarterbacks commenting here, no-one in the world has any clue whether AGI is possible in the next twenty years, just as no-one predicted scaling up transformers would result in GPT-4.
The fear over AI is a displaced fear of unaccountable social structures with extinction-power that currently exist and we allow to continually exist. Without these structures AI is harmless to the species, even superintelligence.
Your (reasonable) counter-argument might be that somebody (like, say, my dumb self) accidentally mixes their computers just right and creates an intelligence that escapes into the wild. The plot of Ex Machina is a reasonable stand-in for such an event. I am also going to assume the intelligence would desire to kill all humans. Either the AI would have to find already existing extinction-power in society, or it would need to build it. In either case the argument is against building extinction-power in the first place.
My (admittedly cynical) take about this round of regulation is about several first-movers in AI to write legislation that is favorable to them and prevents any meaningful competition.
...
Ok, enough cynicism. Lets talk some solutions. Nuclear weapons are an instructive case of both handling (or not) of extinction-power and the international diplomacy the world can engage to manage such a power.
One example is the Outer Space Weapons Ban treaty - we can have a similar ban of AI in militaries. Politically one can reap benefits of deescalation and peaceful development, while logistically it prevents single-points-of-failure in a combat situation. Those points-of-failures sure are juicy targets for the opponent!
As a consequence of these bans and treaties, institutions arose that monitor and regulate transnational nuclear programs. AI could likewise have similar institutions. Promoting and sharing information would prevent any single country from gaining an advantage, and inspections would deter military applications.
This is only what I could come up with off the top of my head, but I hope it opens a window onto the possibilities for meaningful political commitments on AI.
I think a moratorium on AI development will be impossible to enforce, and as you stretch the timeline out, these negative outcomes become increasingly likely as the technical barriers to entry continue to fall.
I've personally assumed this for thirty years, the only difference now is that the timeline seems to be accelerating.
It's extremely unclear, in fact, whether such a ban would be enforceable.
Detecting outer space weapons is easy. Detecting whether a country is running advanced AIs in its datacenters is a lot harder.
Go ahead and bet. I doubt you're putting your money on AGI.
It's all very vague and "handwavy". How is it going to kill us all? Why do we subsidize it if it is so dangerous?
Almost all of the risks they mention would be better mitigated by immediately stopping any governmental use of AI-related systems, since the risks are highest there (misapplication of the force of government is much more dangerous than people playing around with ChatGPT and being misled), and keeping it on hold until the dangers and benefits are better understood. Maybe require licences for development and nationalise the labs doing it? Fines for anyone caught working on one?
This letter is much better than the earlier one. There is a growing percentage of legitimate AI researchers who think that AGI could occur relatively soon (including me). The concern is that it could be given objectives, intentionally or unintentionally, that lead to an extinction event. Certainly LLMs alone aren't anything close to AGI, but I think the fact that autoregressive training is so simple yet yields such remarkable abilities has some people spooked. What if a similarly simple recipe for AGI were discovered? How do we ensure it wouldn't cause an extinction event, especially if AGIs could then be created with relatively low levels of resources?
As far as the comparison to a pandemic or nuclear war goes, though, I'd probably put it closer to the level of a major asteroid strike (e.g., the K-T extinction event). Humans are doing some work on asteroid redirection, but I don't think it is a global priority.
That said, I'm suspicious of regulating AI R&D, and I currently don't think it is a viable solution, except for the regulation of specific applications.
I think it's better to frame AI risks in terms of probability. The really bad case for humans is full extinction or something worse. What you should be doing is putting a probability distribution over that possibility; instead of trying to guess how bad it could be, it's safe to assume it would be maximally bad.
That is, despite being a very low-probability event, it may still be worth remediating because of the outsized negative value if the event does happen.
Many engineering disciplines incorporate safety factors to mitigate rare but catastrophic events, for example.
If something is maximally bad, then it necessitates some deliberation on ways to avoid it, irrespective of how unlikely it may seem.
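To make that expected-value point concrete, here is a minimal sketch in Python; every number in it is an illustrative assumption, not an estimate of any actual risk:

    # Expected-value sketch with made-up numbers (all values are assumptions for illustration).
    p_catastrophe = 0.01          # assumed probability of a maximally bad outcome
    loss_if_catastrophe = 1e15    # assumed disvalue of that outcome, in arbitrary units
    mitigation_cost = 1e9         # assumed cost of serious risk-reduction work
    risk_reduction = 0.10         # assumed fraction of the risk the mitigation removes

    expected_loss_avoided = p_catastrophe * risk_reduction * loss_if_catastrophe
    print(expected_loss_avoided > mitigation_cost)  # True: mitigation pays off in expectation

The point of the arithmetic is only that a tiny probability multiplied by an enormous loss can still dominate a comparatively large mitigation cost, which is the same logic behind safety factors in engineering.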
This is the same song and dance from the usual existential risk suspects, who (I’m sure just coincidentally) also have a vested interest in convincing you that their products are extremely powerful.
I think speculation about the methods is pretty pointless: if a superintelligent AI is trying to kill us, we're probably going to die. The focus should be on avoiding that situation, or on providing a sufficiently convincing argument for why it won't happen.