mercurialsolo · 2 years ago
A lot of the AI doomerism comes from folks who do not understand the real complexities in making systems which really can function in a fully autonomous way in an environment which is hostile and dynamic. Much like an AI may be tasked to sniff for security loopholes, there will be other AIs tasked to defend. Eventually, costs and resources also constrain what is possible.

Evolutionary goals are not easy even with autonomous systems, as goal definition is largely shaped by societal needs, the environments we live in, and the resources we work with.

noduerme · 2 years ago
A lot of "dumb" systems we develop require unimaginable resource inputs just to produce a little extra output. Strip mining for coal, or using chemical fertilizer to grow corn to produce ethanol, for instance. There is no guarantee these days that a system will fail just because it requires huge energy inputs to produce marginal profit.

Evolutionary goals are not something that has to be aligned. Evolution isn't specific to organic life, it's an intrinsic rule of self-organizing systems. Viruses aren't alive, but they evolve. A clever stitch of self-writing code on a Pi attached to someone's TV may evolve without knowing or intending to.

What makes this dangerous now is the vast amount of energy input toward specific systems. Saying "there's surely not enough energy available for it to..." is false comfort. It's underrating the process of evolution.

As a poker analogy, if you just called the guy across from you because you think he couldn't possibly have more than two pair, you're wrong. You've bet into a full house.

[edit: Upon review, I think I've unintentionally gone 100% Jeff Goldblum Jurassic Park in this response LOL]

roenxi · 2 years ago
I don't think that completely addresses gp's argument; he wasn't talking about energy or resources, just complexity.

That being said, I still think gp's argument is flawed. We don't have examples of complexity that an AI can't overcome. Up until 2016, Go was a great example of something far too complex for an AI. Now there is no problem that an AI isn't expected, in principle and with enough engineering effort, to outperform humans at. It is just a question of figuring out which problems it needs to outperform us in.

Engineering effort is not in short supply. There are many capable people on hand, and modern AI is going to be a perfect force multiplier for engineering work. We're going to figure it out. Complexity is maybe a good argument that we've got a few generations to go, but life for a 2nd-tier intelligence is not a well-resourced one. It could easily be like this world, except with humans displacing orangutans in 2nd place behind the smart species.

chaosjevil · 2 years ago
>A lot of the AI doomerism comes from folks who do not understand the real complexities in making systems which really can function in a fully autonomous way in an environment which is hostile and dynamic.

That's a rather interesting mix of appeal to ignorance and argumentum ad hominem. Two genetic fallacies together; neither addresses what is said, only who says it.

>Evolutionary goals are not easy even with autonomous systems, as goal definition is largely shaped by societal needs, the environments we live in, and the resources we work with.

Great way to say "I didn't read the article".

The author is not talking about evolutionary _goals_.

avgcorrection · 2 years ago
Oh my goodness, they’re not even fallacies. A fallacy is concluding that something is wrong for reasons that don’t actually support that conclusion. But the OP is indirectly making the much weaker claim that “I’m not buying it, and that’s because X and Y.”

Speaking of fallacies, “AI doomers” (I’m just running with it) often deploy the rhetoric (not really a fallacy) that AI is about to doom us all because everything is supposedly so simple (like intelligence), and that it is therefore conceptually simple for a sufficiently advanced (but not that advanced) AI to keep improving itself. Now how do you respond to someone who just says that things are not complex when in reality they are indeed complex? Basically you have to unpack everything, because the other person is just taking things at face value and being naive. It’s like an “appeal to my ignorance”.

john-radio · 2 years ago
> The author is not talking about evolutionary _goals_.

Well... He is, actually, just not biological-evolutionary goals. (Like "natural selection," the term "evolution" can apply to anything that has appeared or might appear).

I do think you're right that the article frames the topic pretty well and explores it well, including the concerns that the person you're responding to raised.

jrflowers · 2 years ago
> Two genetic fallacies

I, for one, reject the work of Cohen and Nagel because I think they’re both bad philosophers and therefore cannot be swayed by such rhetorical machinations!

lannisterstark · 2 years ago
>Great way to say "I didn't read the article".

or you, the parent comment.

jstanley · 2 years ago
It's not easy, but it only has to succeed once.

It's not easy to turn primordial soup into humans, but it happened.

chinchilla2020 · 2 years ago
It's nowhere close to understanding basic logical concepts of the real world. Mimicking human sentences is something my African Grey parrot can do.

Building a deck is something AI cannot do.

For all our technology, we still don't have an automated kitchen because the work involved in the logistics and maintenance of the kitchen/materials is astronomically greater than simply doing the cooking by hand.

AI has no bridge to the real world. It can't build a house, farm crops, or maintain vehicles. For those on Hacker News, a laptop may seem like the portal to reality, but that isn't the case for the other 90% of the human workforce.

We are 50% of the way between the Big Bang and the heat death of the universe. There are no interstellar civilizations contacting us yet. They better hurry up if we plan to fulfill the sci-fi notions I see on this site.

mcbuilder · 2 years ago
And it was a hard enough task that, as far as we know for sure, we are the only ones in the universe it has happened to.

Edit: PS. With plenty of time in between.

echelon · 2 years ago
> once

> It's not easy to turn primordial soup into humans, but it happened.

Over trillions of changes, or more.

No single change is going to produce AGI. We're going to have a lot of forewarning, and it'll be obvious we're opening Pandora's box.

AGI won't leap from zero to superintelligence that can launch nukes. That's not how gradient climbing and scaling work.

Fearmongering is incredibly premature and is only going to produce regulation that favors incumbents.

The way I see it is that Eliezer is the biggest corporate mouthpiece for Microsoft and OpenAI. He's doing their bidding and not even getting paid to do it.

MSFT_Edging · 2 years ago
>A lot of the AI doomerism comes from folks who do not understand the real complexities in making systems which really can function in a fully autonomous way

IMO the doomerism, from what I see, isn't Skynet-esque worries, but the usage of AI in really dumb ways. Ways such as inundating the internet with AI-generated spam: blogs, articles, art, music, fake forum interaction, etc.

The smartest people are busy making AI work for us; the idiots are ruining everything for everyone else.

staplers · 2 years ago

> the usage of AI in really dumb ways

People in positions of power (e.g. law enforcement) automating tasks like suspect generation, surveillance, and ticket creation is a much more sinister reality.

We're talking about a civilian-hostile org using corruptible AI (via training) to enforce some of the murkiest grey areas of society.

This stuff is already happening on the fringe and could become commonplace soon.

BSEdlMMldESB · 2 years ago
> Ways such as inundating the internet with AI-generated spam: blogs, articles, art, music, fake forum interaction, etc.

I think we're well past that...

What I worry about is "interacting" online with bots designed to keep you busy and distracted.

ajuc · 2 years ago
> Much like an AI may be tasked to sniff for security loopholes, there will be other AIs tasked to defend. Eventually, costs and resources also constrain what is possible.

It's inherently easier to break stuff than to prevent damage.

ChatGTP · 2 years ago
Ok, so Yoshua Bengio, Geoffrey Hinton or Max Tegmark aren't able to comprehend or speculate about this? Seems surprising.

Edit: I'm not pandering to authority; I just believe the people I've named are actually very smart people who can reason very well and have a valid opinion on the topic. They also have little financial interest in the success, failure, or regulation of AI, which is important.

civilized · 2 years ago
Reminder that not all "experts" agree with the AI doom thesis. Yann LeCun thinks it's nonsense.

I don't think anyone really has "expertise" to perform AI doom prognostication, regardless of whether they created dropout or ReLU or the Transformer architecture or are a professor at Oxbridge/MIT or whatever else people take as an impressive credential. But even if we accept the logic of those who want to argue from the authority of expertise, we don't have expert consensus.

digbybk · 2 years ago
About “pandering to authority”, I keep seeing this move in debates:

- people with safety concerns just don’t understand the tech.
- “here’s a list of people with safety concerns who demonstrably do understand the tech very well”
- you’re pandering to authority.

A related move:

- greater intelligence leads to greater concern for universal values.
- “here’s a list of very intelligent people who disagree”
- there are a lot of very intelligent people in cults too, you know.

I don’t think I’m a doomer but they do seem more coherent than the people saying there’s no cause for concern.

drumhead · 2 years ago
Max Tegmark seems most worried about iterative self-improvement by the systems. But is that possible? And just how far could they improve themselves? Enough to achieve self-awareness, or just enough to make themselves a lot faster?
verdverm · 2 years ago
Why their doomerism over their peers' optimism? There are top-tier AI researchers who think differently. How do you choose which to listen to?
sampo · 2 years ago
> Ok, so Yoshua Bengio, Geoffrey Hinton or Max Tegmark aren't able to comprehend or speculate about this?

I am genuinely amazed at how even educated and intelligent people (or maybe predominantly them) have fallen into what I see as essentially a doomsday cult. But groupthink happens; maybe nobody is safe. It's not like it hasn't happened before.

fedeb95 · 2 years ago
Criticism is the core of scientific reasoning. Having a bias toward a position just because "an expert said so," even when presented with criticism, isn't scientific.
mrtranscendence · 2 years ago
Strictly speaking they said “a lot”, not “all”.
logicchains · 2 years ago
Yoshua Bengio is a plagiarist who shamelessly accepted a Turing Award for Schmidhuber's work, and Geoffrey Hinton is a leftist who can't bear the thought of an AI that isn't aligned to progressive values like the Silicon Valley ones are.
DalasNoin · 2 years ago
People are downvoting you. But 'pandering' to authority is a valid point if the other side does an ad hominem ("these people are not technical, don't really understand what they are talking about").
stabbles · 2 years ago
Maybe Hacker News would benefit from an LLM that rejects appeal-to-authority-type arguments

throwuwu · 2 years ago
Appeal to authority isn’t interesting. Tegmark is a physicist and science communicator anyway, so he doesn’t belong in your list.
andyjohnson0 · 2 years ago
You make good points, but I wonder what "costs" and "resources" mean in the context of a (hypothetical) self-enhancing, autonomous AI. All I can think of is computational substrate and the energy to power it. And once the AI has booted up its obligatory drone army and orbital platforms, it can harvest very large amounts of both. Without us.

Obviously I'm being slightly facetious, but my point is that the constraints on an AI may not be ones we're familiar with as humans (society, environment, etc.)

And again obviously, such a scenario is unlikely. But, like DNA replication, it only has to happen once and then it just keeps on happening. And then it's game over for us, I reckon.

visarga · 2 years ago
> A lot of the AI doomerism comes from folks who do not understand the real complexities in making systems which really can function in a fully autonomous way in an environment which is hostile and dynamic.

I agree. Autonomy is hard. But a weaker version of it is possible: self-replication in software. An AI model can generate text that can train new models from scratch[1]. They can also generate the model code, explain it, make meaningful changes[2], monitor the training run, and evaluate the "child" models[3].

So AI can "pull everything from inside" to make a child, no external materials needed, but any human-generated or synthetic data can be used as well. AI is capable of doing half the job of self-replication, but it can't make GPUs, and probably won't be able to do that autonomously for a long time. High-end GPUs are so hard to make that no company or even country controls the whole stack; it only works through global cooperation.

[1] TinyStories https://arxiv.org/abs/2305.07759 and Microsoft's Phi-1 using generated data

[2] Evolution through Large Models https://arxiv.org/abs/2206.08896

[3] G-Eval: NLG Evaluation using GPT 4 with Better Human Alignment https://arxiv.org/abs/2303.16634
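
To make that loop concrete, here is a minimal toy sketch in Python of the generate-data / generate-code / train / judge cycle described above. Everything in it is a hypothetical stand-in invented for illustration (ToyModel, train_child, and the skill scores); nothing calls a real model or reproduces the systems in [1][2][3].

    import random

    class ToyModel:
        """Stub standing in for an LLM; a real system would call a trained model here."""
        def __init__(self, skill=0.5):
            self.skill = skill

        def generate_training_data(self):
            # stand-in for synthetic-corpus generation (cf. TinyStories / Phi-1)
            return [f"short story #{i}" for i in range(100)]

        def generate_child_code(self):
            # stand-in for writing or modifying model code (cf. Evolution through Large Models)
            return "definition of a small transformer"

        def evaluate(self, child):
            # stand-in for LLM-as-judge scoring of the child's outputs (cf. G-Eval)
            return child.skill + random.uniform(-0.05, 0.05)

    def train_child(code, data, parent):
        # stand-in for an actual training run monitored by the parent model
        return ToyModel(skill=min(1.0, parent.skill + random.uniform(-0.02, 0.05)))

    model = ToyModel()
    for generation in range(5):
        data = model.generate_training_data()
        code = model.generate_child_code()
        child = train_child(code, data, model)
        if model.evaluate(child) > model.skill:  # parent judges the child; keep it only if it scores higher
            model = child
        print(f"generation {generation}: skill ~ {model.skill:.3f}")

The only point of the sketch is the control flow: the parent supplies the data, the code, and the judgment, which is the "half of self-replication" that software alone can do.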

gmerc · 2 years ago
We already created a dumb system of rules driven by profit-maximizing entities called corporations that is exhausting the planet as we speak, and we can't seem to control it despite our survival depending on it. So no.
jsight · 2 years ago
As always, we overlook the real risks. Ever seen an elderly person get scammed by a cold caller?

Now imagine an army of AI cold calling scammers, with realistic voices, steadily training on the absolute best scam techniques.

How many people lose bank accounts before the scammer gets caught?

Of course, this happens today, without AI, but as with many things in computing, scale changes everything!

Balgair · 2 years ago
Quantity has a Quality all its own
ftxbro · 2 years ago
> "folks who do not understand the real complexities in making systems which really can function in a fully autonomous way in an environment which is hostile and dynamic."

OK, but consider that guys like Noam Brown are becoming involved in LLM-based AI. This is the guy who made the poker bots that beat humans (Libratus/Pluribus) and the first Diplomacy bot that can beat people (Cicero). I mean, those AIs didn't use LLMs and they weren't literally superhuman cognitive agents in the fully open-ended world, but they are working on that right now, and they appreciate the differences between adversarial and non-adversarial environments as well as anyone, probably even the military. Also, the military is using these LLMs and they probably sometimes think about adversarial environments.

andsoitis · 2 years ago
> A lot of the AI doomerism comes from folks who do not understand the real complexities in making systems which really can function in a fully autonomous way in an environment which is hostile and dynamic

To be fair, a lot of those people take their cues from technologists who ostensibly know what they’re talking about.

samstave · 2 years ago
> A lot of the AI doomerism comes from folks who do not understand the real complexities in making systems which really can function in a fully autonomous way in an environment which is hostile and dynamic

THIS IS exactly where AI doomerism SHOULD come from...

What do we do with an unruly AI system, fully autonomous, in an environment which is both dynamic and hostile?

THIS IS THE FOUNDATION OF THE PREMISE OF THE PROBLEM

-

"Please take a look at the current cadre of political candidates and map them to DND alignment charts, based on their actual votes/political activity"

I hope we can get there one day.

-

I wonder if we can find FORK BOMB or rm -rf / type exploits on various GPTs/LLMs/AIs?

I can't wait to see how devastating these will be -- or the first fully executed security exploit by AI alone from a single prompt?

mycall · 2 years ago
Are not evolutionary goals part of keeping a process continuous and long-running? It is the implicit part which AIs are bound to discover (or emulate) and optimize against.
tim333 · 2 years ago
It also comes from folks who do understand such complexities. I mean Musk, who's trying to make cars autonomous in dynamic, hostile environments; Geoffrey Hinton, who pioneered neural networks; Altman, who's behind GPT-4. I think the argument that AI isn't risky because the people warning about it are fools is not a good one.

On the other hand, there's an AI anti-doom argument. Currently we are all doomed to die, but maybe through AI-upload-like scenarios we can de-doom?

barbariangrunge · 2 years ago
Some of the “dumb” systems we’ve built include nuclear weapons. It doesn’t need to be fully autonomous to be dangerous
janalsncm · 2 years ago
> imagine a CEO who acquires an AI assistant. They begin by giving it simple, low-level assignments, like drafting emails and suggesting purchases. As the AI improves over time, it progressively becomes much better at these things than their employees. So the AI gets “promoted.” Rather than drafting emails, it now has full control of the inbox. Rather than suggesting purchases, it’s eventually allowed to access bank accounts and buy things automatically

Ok, let’s pause for a second and observe the slippery way the author has described “AI” progress. In the author’s world, this AI isn’t just a limited tool, it’s a self-improving independent agent. It’s an argument that relies on the existence of something that doesn’t presently exist, solving problems that won’t exist by the time it gets here. We already have tools that can draft emails and suggest purchases. The email drafts require oversight, and… no one trusts product recommendations. And importantly, they are non-overlapping. It turns out that specialization in one area doesn’t transfer. No matter how good you are at writing emails, it doesn’t lend itself to running a company.

lannisterstark · 2 years ago
>imagine a CEO who acquires an AI assistant

>So the AI gets “promoted.” Rather than drafting emails, it now has full control of the inbox

Yeah that premise is absurd. Why would anyone 'promote' an AI system rather than using another, specialized AI system to do that other specific task?

ProllyInfamous · 2 years ago
>Why would...?

Because they trust the trained intelligence to behave predictably, which is a typical reason for promotion.

ErroneousBosch · 2 years ago
Because your average CEO doesn't know AI from auto-reply and inbox rules. They know AI is the hot new thing and they gotta have it.
titanomachy · 2 years ago
> in the author’s world, this AI is... a self-improving independent agent

That's not necessarily true, it could also be a product maintained by a third party that receives upgrades over time.

azeirah · 2 years ago
> At first, the CEO carefully monitors the work, but as months go by without error, the AI receives less oversight and more autonomy in the name of efficiency. It occurs to the CEO that since the AI is so good at these tasks, it should take on a wider range of more open-ended goals: “Design the next model in a product line,” “plan a new marketing campaign,” or “exploit security flaws in a competitor’s computer systems.”

I'm not sure why we assume that a more intelligent system would prevent that many more problems.

Intelligent AI orders image-processing ASICs from Image Processing Inc; Image Processing Inc doesn't send the order on time. Of course Intelligent AI is intelligent, so it calculated error margins on delivery time. Image Processing Inc goes bankrupt, the order cannot be delivered, the product launch fails, and Intelligent AI's boss is mad at Intelligent AI.

Doing business means dealing with systems you have no control over. More intelligence may mean better predictions and a broader understanding of the systems you are dealing with (e.g. not ordering chips from an area where you, as an AI, predict an earthquake will happen based on seismographic data you have access to that no reasonable business person would research), but it won't mean these AIs will be some kind of infallible God; they'll just be a bit better.

My personal belief is that even if you "increase" intelligence by an order of magnitude, your ability to predict the behavior of external systems doesn't increase proportionally. You'll still have to deal with unpredictability and chaos; weather, death, scams, war, politics, manufacturing, emotions, logistics.

OTOH, I do believe running a business will become more efficient.

mrtranscendence · 2 years ago
This is a great point. There’s a lot of wooly thinking about what it means for a system to be intelligent, particularly “super” intelligent. People seem to think we’ll create a machine that is almost literally infallible — able to predict both physical systems and human behavior with perfect foresight many steps in advance. I’m not sure that’s even possible, let alone likely.
logicchains · 2 years ago
>able to predict both physical systems and human behavior with perfect foresight many steps in advance

It's literally impossible. That's a key part of https://en.wikipedia.org/wiki/Chaos_theory ; in formally chaotic systems (of which there are many), the uncertainty in a forecast increases exponentially with elapsed time, which means it's impossible for an AI, no matter how smart, to predict very far ahead. People who believe otherwise are engaging in magical thinking, as if "intelligence" were some magical quality that allows one to break the laws of the universe, do the equivalent of sorting a list in O(1) time.
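
As a toy illustration of that exponential blow-up (a minimal sketch in Python, with the logistic map standing in for any formally chaotic system, not for any particular AI setting):

    # Logistic map x_{n+1} = r * x_n * (1 - x_n) at r = 4 -- a textbook chaotic system.
    # Two trajectories that start 1e-12 apart grow to an order-1 difference within
    # roughly forty steps, because the gap roughly doubles each iteration.
    r = 4.0
    x, y = 0.4, 0.4 + 1e-12   # nearly identical initial conditions
    for step in range(1, 61):
        x = r * x * (1 - x)
        y = r * y * (1 - y)
        if step % 10 == 0:
            print(f"step {step:2d}: difference = {abs(x - y):.3e}")

No amount of cleverness recovers the lost digits; only exponentially more precise knowledge of the initial state would help.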

ethanbond · 2 years ago
What is "intelligence" other than proximity to that destination?
benlivengood · 2 years ago
Part of the existential risk is that intelligent goal-directed agents will recognize the unpredictability of their environment and do what humans have done to our environment; simplify and systematize it so that it is more predictable (forestry, mining, pavement, antibiotics, pest-control, industry). Biological life is unpredictable and so replacing it with simpler biological organisms or well-understood machines and systems will be a valid strategy for artificial agents to obtain and retain power to achieve their goals.

To put it another way; there are two ways to approach infallibility: be uncomputably intelligent (AIXI) or bulldoze/terraform/grey-goo the Universe into something very predictable. An agent smart enough to understand both options and how reality works is likely to choose the physically realizable option.

Only careful choices of goals that value human flourishing (AI alignment) will avoid optimizing us out of the picture.

abhaynayar · 2 years ago
> My personal belief is that even if you "increase" intelligence by an order of magnitude, your ability to predict the behavior of external systems doesn't increase proportionally.

This is what Max Tegmark writes in his book. I don't know why he has taken a doomer stance on LLMs.

deschutes · 2 years ago
The promise of AGI isn't really optimizing something mundane like widget manufacturing, though surely it will be tasked with doing that. It's a rapid advancement of the frontier of knowledge. For example, we dream of curing disease and aging, but that is beyond our current knowledge.

Obviously nobody knows what that looks like, or when or if we'll get there, but there are probably a ton of existential hazards if we do. Shit, even finding out that things we hope for are truly impossible would be a kind of doom of its own.

nonameiguess · 2 years ago
How is AGI supposed to provide this knowledge? We don't have the knowledge we need to cure disease and aging because the experiments that could potentially provide that knowledge are either too difficult to conduct or can't be conducted ethically and/or legally at all.

There is also an inherently time-bound bottleneck to producing knowledge of this sort. You can't possibly demonstrate that a treatment has really increased human lifespan until an entire experimental cohort's lifespan passes.

You could conceivably simulate an entire human body at the molecular level to sidestep the issue of conducting an RCT with live subjects, but we already have sufficient knowledge of atomic physics and organic chemistry to do that right now. It's just not computationally tractable.

logicchains · 2 years ago
>For example we dream of curing disease and aging but it is beyond our current knowledge.

The hard sciences (especially medicine) and engineering don't advance just by thinking deeply about things; they advance through physical research and experiments. A lot of these can't be sped up, e.g. clinical trials, so an AI wouldn't make much difference here. And no, it can't just "simulate it all": the computational power needed to simulate reality to that degree of accuracy is still many orders of magnitude greater than what's currently available to us.

azeirah · 2 years ago
Yeah for sure. I do believe AI has a lot of potential to break through various hard problems we currently face in science and medicine.

I just really dislike the idea of the "omniscient super-intelligent ruler of chaos" vision of AI that I keep encountering in the media.

ChatGTP · 2 years ago
Time travel would be fucking mad.
dist-epoch · 2 years ago
As you become more intelligent, the impact of unpredictability decreases. That's kind of baked into the definition of intelligence.

An intelligent AI would surely do better in your situation than a human trying to source the image-processing ASICs. That's all that matters: that it executes better, even if it eventually still fails.

civilized · 2 years ago
> As the AI improves over time, it progressively becomes much better at these things than their employees.

People looooooooooooove speculating about things that are nowhere near happening.

It's more fun the more distant and baseless it gets, but it's also more useless.

As usual, I invite you to bookmark this post and make fun of me in 5-10 years if I'm wrong. I'm not that interested in the latest fashionable scaling argument for imminent ASI, or whatever people are saying at the moment.

ben_w · 2 years ago
On the contrary, the loop is:

1. "Computers will never be able to X, as that requires imagination and intuitive thinking"

2. "X is at least 50 years away"

3. Press release: Computers do X (last I saw: Diplomacy (the game not the job), interpretation of medical scans)

4. "X isn't real AI, it's just brute force search/a big database/a glorified Markov chain/linear algebra" (LLMs and diffusion models go here, self driving cars)

5. Computers better at X than most/all humans, but still improved by having human collaborators (Go is either here or 6, IIRC also some variations of Poker, Starcraft, half the Amstrad back catalog)

6. Computers only made worse by having a human trying to help (Chess goes here)

chinchilla2020 · 2 years ago
That isn't the loop. A lot of predictions about computers have turned out wrong.

The biggest one in my lifetime was the belief that social media would bring the world together, create world peace, and spread economic prosperity.

Usually the technologies that surprise us were predicted by nobody. They just show up.

Go watch "2001: A Space Odyssey". Then go watch some music videos from 2001. You will get a better idea about how accurate these predictions are.

paxys · 2 years ago
Now do flying cars, self driving cars, brain interfaces, 3D printed organs, fusion power, humanoid robots...

For every piece of tech that exceeded people's expectations, there are five more that never got out of science fiction books despite decades of continuous investment and constant media hype.

throwuwu · 2 years ago
You conveniently left out the media hype cycles
cubefox · 2 years ago
> People looooooooooooove speculating about things that are nowhere near happening.

You probably would have said the same thing about something like ChatGPT a few years ago. Pure science fiction! Nowhere near happening!

ChatGTP · 2 years ago
To be fair though, I still don’t think ChatGPT is the AI we expected to be talking to. The talking AI was a HAL 9000; current LLMs aren’t that, and they might never be. Time will tell, I guess.

Personally I think there is a hidden special property of ChatGPT-4 that makes it seem so unreal. It talks, and because it talks, we can say this: “I bet you never thought that would happen; we’re close to solving all problems.”

Not discounting the talking bit, but if it washed dishes, or were one of those models that just generate images, we’d be less convinced the AI is about to take over. Someone once commented: why don’t we worry about Stable Diffusion taking over the world? I have to say I do agree a little, because being trained on image data must help it build some type of world model too. We just don’t worry about that, though.

civilized · 2 years ago
Tech's hype artists didn't predict ChatGPT either. Hype artists predict the ambitious futures found in sci-fi novels. They predict things that would be cool if they happened, without any understanding of how they would happen and the strengths and limitations of the methods that might be used. Thus, the futures they predict are not what actually occur.

Only the people actually inventing the future, things like AlphaGo and GPT, have a limited crystal ball into the future. They have some understanding of what their methods can and can't do. And even they succumb to hype when they are more successful than they expected to be.

Tech pundits on LinkedIn and Twitter and HN have rarely if ever gotten anything right about what exactly the future will look like, and I expect that pattern to continue.

donmcronald · 2 years ago
> As the AI improves over time, it progressively becomes much better at these things than their employees.

> People looooooooooooove speculating about things that are nowhere near happening.

I agree with that, but I think the real risk isn't that AI improves and gets better than an employee. I think the real risk is that it replaces employees, regardless of whether or not it's better, because it's cheaper. It'll be like the first tier of tech support, but without an option for escalation.

My personal opinion is that it's dumb, oversold garbage, and people are falling for it because it's good at grammar and spelling. I base that on asking it about things where I know there are common misunderstandings with authoritative clarification. The dumb, loud people confidently repeat incorrect claims in large volumes while the authoritative sources are silently bewildered at the stupidity. From what I've seen, AI is trained on data produced, at least in part, by the "dumb loud masses" and its "knowledge" is based on quantity over quality.

Ask it about the ZFS scrub of death, which is a mass mania of idiocy, and it'll gladly tell you all about it. Or ask it what the validation methods are for TLS certificates and it'll happily regurgitate the common knowledge of DV, OV, and EV, even though the official docs [1] definitively state it's DV, IV, OV, and EV in section 7.1.2.7.1.

It's unreliable and perpetuates misinformation because, AFAIK, it treats most of the input information equally, and that's not how things work. I don't remember who, but I remember seeing a famous marketer from the 70s or 80s (?) talking about how most of their success came from realizing that "visibility is credibility". That's true, and unfortunate, because we're ignoring a lot of intelligent people who aren't willing to engage in a shouting match to get their voices heard, while the dumbest, loudest half of the population is having its viewpoints used to train the LLMs that people are going to rely on for information discovery.

The really scary part is that, based on what I've seen, people who do contract work for the government seem very eager to replace their workforce (costs) with AI (not costs). Just wait until you need to deal with the government for something and the whole process is like having an argument with a super Redditor.

1. https://cabforum.org/wp-content/uploads/CA-Browser-Forum-BR-...

cubefox · 2 years ago
> I think the real risk isn't that AI improves and gets better than an employee.

As long as they are less intelligent than us, they probably won't spiral out of control. The real risk is that they get substantially smarter than us. Gorillas can't control us, because we are smarter. We don't even actively dislike them, but they are nonetheless threatened with extinction. We do not want to be in the role of the gorillas.

paxys · 2 years ago
It seems like these days there are a hundred people working on AI and millions more who are making a career just discussing it. We don't need more ethicists, futurists, policy researchers, influencers, journalists, think tanks and whoever else in the space. There is nothing original left to say about any of these topics. If you aren't contributing to actual progress in the area then it's best to not shove yourself into the conversation at all.
munificent · 2 years ago
> If you aren't contributing to actual progress in the area then it's best to not shove yourself into the conversation at all.

By that same token, if you aren't working to make landmines more efficient, you don't deserve to have an opinion about landmines.

logicchains · 2 years ago
It's very easy to understand what a landmine does even to a layman, but the majority of even technical people have magical, fantastical ideas about what LLMs like GPT are, completely divorced from the mundane reality of how such things actually operate.
erwald · 2 years ago
The author of this article is Dan Hendrycks, who has a PhD in ML from UC Berkeley and was one of the contributors to the GELU activation function (among other things).
ethanbond · 2 years ago
Okay then keep the AI in boxes and out of society.

Notice that people perked up and started caring about AI development primarily when these were introduced to the public? Could it be that the public has a legitimate stake in new technologies that are introduced to it?

"It seems like these days there are a hundred people dumping chemicals into rivers and millions more who are saying 'don't dump chemicals into the rivers.' We don't need em! If you aren't dumping chemicals into rivers then it's best not to shove yourself into the conversation at all."

c_crank · 2 years ago
As far as the public goes, their primary concern would be how LLMs could put a lot of them out of work in the artistic fields. AI Doomsdaying is primarily a hobby for nerdy scientists who believe too much in the singularity, not the average Joe.
chinchilla2020 · 2 years ago
None of these think tank types represent society.

Let society decide what they want to do with AI, not a bunch of compromised hype chasers.

AbrahamParangi · 2 years ago
You don't represent society, certainly not more than the millions of people with ChatGPT subscriptions.
troll_v_bridge · 2 years ago
>”There is nothing original left to say about any of these topics. If you aren't contributing to actual progress in the area then it's best to not shove yourself into the conversation at all.”

This is one of the more hypocritical statements I’ve read recently. If you study the history of science, it evolved from thought experiments.

93po · 2 years ago
How many people have a full time job discussing only AI? It's guaranteed not that many.

Also to say there's nothing original left to say is ridiculous. There is a ton we haven't figured out yet.

andyjohnson0 · 2 years ago
> The good news is that we have a say in shaping what they will be like.

The problem with this is "we". It implies the possibility of some kind of global consensus and coordinated relinquishment behaviour, which is historically unlikely and would increase the rewards for anyone prepared to break the rules. Unless AGI requires superpower-level resources, many sufficiently-resourced actors will be motivated to use it for their own advantage.

EamonnMR · 2 years ago
TIME must have gotten a lot of clicks off of their Yudkowsky op ed. As always the answer isn't 'cool, let's regulate AI to limit its profitability, thus limiting AI development' but rather 'we should keep throwing money at it, just making sure we throw money at the right particular people.' Yudkowsky didn't want to bomb all data centers, just the ones that wouldn't comply with his regime. Similarly:

"We need research on AI safety to progress as quickly as research on improving AI capabilities. There aren’t many market incentives for this, so governments should offer robust funding as soon as possible."

I'm reminded of the tech ceo caricature in Don't Look Up who, when presented with an incoming asteroid ready to wipe out the earth, hatches a plan to profit from it.

yanderekko · 2 years ago
> As always the answer isn't 'cool, let's regulate AI to limit its profitability, thus limiting AI development' but rather 'we should keep throwing money at it, just making sure we throw money at the right particular people.' Yudkowsky didn't want to bomb all data centers, just the ones that wouldn't comply with his regime.

Who would have to own the data centers for Yudkowsky to support "throwing money at profit-driven AI development"?

scoofy · 2 years ago
I think one point missed by this is that the vast majority of outcomes for species in a "Darwinian" environment is extinction.

We look at evolution with a very rosy lens because we ended up at the top of the food chain. Unintelligent prokaryotes far and away dominate the "Darwinian" world. Intelligent species have vastly less control over their environment than they think they do.