Readit News
latexr · 16 days ago
> “We felt that it wouldn't actually help anyone for us to stop training AI models,”

How magnanimous! They are only thinking of others, you see. They are rejecting their safety pledge for you.

> “We didn't really feel, with the rapid advance of AI, that it made sense for us to make unilateral commitments … if competitors are blazing ahead.”

Oops, said the quiet part out loud that it’s all about money. “I mean, if all of our competitors are kicking puppies in the face, it doesn’t make sense for us to not do it too. Maybe we’ll also kick kittens while we’re at it”.

For all of you who thought Anthropic were “the good guys”, I hope this serves as a wake up call that they were always all the same. None of them care about you, they only care about winning.

isodev · 16 days ago
Indeed, Anthropic can’t afford to be the ones that impose any kind of sense in the market - that’s supposed to be the job of the government by creating policy, regulations and installing watchdogs to monitor things.

But lucky for the AI companies, most of them are based in a place that only has a government on paper, and everyone has forgotten where that paper is.

watwut · 16 days ago
> Oops, said the quiet part out loud that it’s all about money. “I mean, if all of our competitors are kicking puppies in the face, it doesn’t make sense for us to not do it too. Maybe we’ll also kick kittens while we’re at it”.

I mean, yes, that is actually how the world works. That is why we need safety, environmental, and other anti-fraud regulations. Without them, competition ensures that every successful company will defraud, hurt, and harm. Those that won't will be taken over by those that do.

akudha · 15 days ago
> they only care about winning

To be fair, this is true in nearly all industries and for nearly all companies. Almost everyone is chasing money and monopoly. Not that that makes it right; just pointing out it isn't unique to, or even especially interesting about, the AI companies.

nsbk · 16 days ago
Since it is all about money, I just voted with my wallet and cancelled my Max subscription.

jlaternman · 12 days ago
Actually this makes sense to me.

It sounds like they are in a cutthroat market and realised they couldn't afford to stake out that principle. And that it wouldn't matter if they did – it would just guarantee they'd be handicapped in a field where no one else followed suit.

Better hills to die on.

surgical_fire · 16 days ago
> For all of you who thought Anthropic were “the good guys”

Was anyone fooled by this?

I mean, I know this is HN and there is a demographic here that gets all misty eyed about the benevolence of corporations.

It takes a special kind of naivety to believe in those claims.

high_na_euv · 16 days ago
But what is AI safety, really?

Censorship?

shubhamjain · 14 days ago
I was wondering if it was because of heavy-handedness of the administration, but apparently:

> The policy change is separate and unrelated to Anthropic’s discussions with the Pentagon, according to a source familiar with the matter.

Their core argument is that if they have guardrails that others don't, they'll be left behind in controlling the technology, even though they're the "responsible ones." I honestly can't comprehend the timeline we are living in. Every frontier tech company is convinced that the tech it is working towards is as useful to humanity as a cure for cancer, and yet as dangerous as nuclear weapons.

ACCount37 · 14 days ago
That's because it is.

AI is powerful and AI is perilous. Those two aren't mutually exclusive. Those follow directly from the same premise.

If AI tech goes very well, it can be the greatest invention of all human history. If AI tech goes very poorly, it can be the end of human history.

tyre · 14 days ago
“A source familiar with the matter” is almost certainly a company spokesperson.

If they were unrelated, Anthropic wouldn't be doing this this week, because obviously everyone will conflate the two.

Rapzid · 14 days ago
Well before this, Anthropic thought they were God's gift to AI: the chosen ones protecting humanity.

With the latest competing models, they are now realizing they are an also-ran provider.

Sobering up fast with an ice bucket of 5.3-codex, Copilot, and OpenCode dumped on their head.

tenthirtyam · 14 days ago
I always enjoyed the Terminator movie series, but I always struggled to suspend my disbelief that any humans would give an AI such power without having the ability to override or pull the plug at multiple levels. How wrong I was.

N.B. the time travel aspect also required suspension of disbelief, but somehow that was easier :-)

jdross · 14 days ago
Would nuclear energy research be a good analogy, then? It seems like a path we should have kept running down, but stopped because of the weapons. So we got the weapons but not the humanity-saving parts (infinite clean energy).

whywhywhywhy · 14 days ago
> Every frontier tech company is convinced that the tech they are working towards is as humanity-useful as a cure for cancer, and yet as dangerous as nuclear weapons

They're not, really. It's always been a form of PR, both to hype their research and to make sure it's locked away to be monetized.

whatshisface · 14 days ago
Shouldn't we be a little more skeptical about these abstract arguments when a very concrete sale is on the line?

goodmythical · 14 days ago
Isn't curing cancer just as dangerous as a nuclear bomb? Especially considering some of the gene therapies under consideration? Because you can bet that a non-negligible portion of research in this space is being funded by governments and groups interested in applications beyond curing cancer. (Autism? Whiteness? Jewishness? Race in general? Faith in general? Could China finally cure Western greed? Maybe we can slip some extra compliance in there so that the plebia- ah- population is easier to contr- ah- protect.)

Curing all cancers would increase population growth by more than 10% (9.7-10M cancer-related deaths vs. the current 70-80M annual growth), and would raise the average age of the population, since curing cancer would increase general life expectancy and a majority of the lives saved would be older people.

We'd even see a jobs and resources shock (though likely dissimilar in scale) as billions in funding shift away from oncologists, oncology departments, oncology wards, etc. Billions of dollars, millions of hospital beds, and countless specialized professionals all suddenly reassigned, just as with AI.

Honestly, the cancer/nuclear/tech comparison is rather apt. All of them are or could be disruptive, and each is or could be a net negative to society, while also posing the possibility of the greatest revolution we've seen in generations.
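The "more than 10%" figure above is easy to sanity-check against the ranges quoted (9.7-10M annual cancer deaths against 70-80M annual net population growth); a quick back-of-the-envelope in Python, using only the numbers from the comment:

```python
# Rough check of the figures quoted above: annual cancer deaths as a
# share of current annual net population growth.
cancer_deaths = (9.7e6, 10e6)  # annual cancer deaths, low/high estimate
net_growth = (70e6, 80e6)      # annual net population growth, low/high estimate

low = cancer_deaths[0] / net_growth[1]   # most conservative ratio
high = cancer_deaths[1] / net_growth[0]  # most generous ratio

print(f"{low:.1%} to {high:.1%}")  # roughly 12% to 14%, i.e. "more than 10%"
```

So the claim holds across the whole quoted range, assuming all the saved lives translate into net growth.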

mikkupikku · 14 days ago
To paraphrase a deleted comment that I thought was actually making a good point, nuclear medicine and nuclear weapons are both fruit from the same tree.
scottLobster · 14 days ago
> Every frontier tech company is convinced that the tech they are working towards is as humanity-useful as a cure for cancer, and yet as dangerous as nuclear weapons.

Maybe some of the more naive engineers think that. At this point, any big tech business or SV startup saying they're in it to usher in some piece of the Star Trek utopia deserves to be smacked in the face for insulting the rest of us like that. The argument is always "well, the economic incentive structure forces us to do this bad thing, and if we don't, we're screwed!" Oh, so your ideals are so shallow that you aren't willing to risk a tiny fraction of your billions to meet them. Cool.

Every AI company/product in particular is the smarmiest version of this. "We told all the blue collar workers to go white collar for decades, and now we're coming for all the white collar jobs! Not ours though, ours will be fine, just yours. That's progress, what are you going to do? You'll have to renegotiate the entire civilizational social contract. No we aren't going to help. No we aren't going to sacrifice an ounce of profit. This is a you problem, but we're being so nice by warning you! Why do you want to stand in the way of progress? What are you a Luddite? We're just saying we're going to take away your ability to pay your mortgage/rent, deny any kids you have a future, and there's nothing you can do about it, why are you anti-progress?"

Cynicism aside, I use LLMs to the marginal degree that they actually help me be more productive at work. But at best this is Web 3.0. The broader "AI vision" really needs to die.

coffeefirst · 14 days ago
Let's suppose I believe them, that's still a bad idea.

The reason Claude became popular is that it made shit up less often than other models and was better at saying "I can't answer that question." The guardrails are quality control.

I would rather have more reliable models than more powerful models that screw up all the time.

toss1 · 14 days ago
Excellent news. I was seriously worried they would cave when I saw the earlier news that they'd dropped their core safety pledge [0].

It is entirely reasonable to refuse to provide tools for breaking the law through mass surveillance of citizens, and to insist the tool not be used automatically to kill a human without a human in the loop. The demands to drop those safeguards are unreasonable demands by an unreasonable regime.

[0] https://news.ycombinator.com/item?id=47145963

kelnos · 14 days ago
"It's not because of the Pentagon deal", says company that has just greased the wheels for said Pentagon deal to move forward.

Riiiiiight.

francisofascii · 14 days ago
It is a "reasonable" argument for keeping yourself in the game, but it is sad nonetheless. You sacrifice your morals and do bad things so that, if things get way worse, maybe you will be in a position to stop something really bad from happening. Of course, you might just end up participating in the really bad thing.

nextaccountic · 14 days ago
> The policy change is separate and unrelated to Anthropic’s discussions with the Pentagon, according to a source familiar with the matter.

This sounds like a lie. But even if they are telling the truth, the timing is terrible nonetheless.

austinjp · 14 days ago
> Every frontier tech company is convinced that the tech they are working towards is as humanity-useful as a cure for cancer, and yet as dangerous as nuclear weapons.

And they alone are responsible enough to govern it.

cmrdporcupine · 14 days ago
We all made fun of Blake Lemoine and others for spending too many late nights up chatting with (ridiculously primitive by this year's standards) LLM chat bots and deciding they were sentient and trapped.

But frankly I feel like the founders of Anthropic and others are victim of the same hallucination.

LLMs are amazing tools. They play back & generate what we prompt them to play back, and more.

Anybody who mistakes this for SkyNet -- an independent consciousness with instant, permanent learning, adaptation, and self-awareness -- is just huffing the fumes, and just as delusional as Lemoine was four years ago.

Every one of us should spend some time writing an agentic tool and managing context and the agentic conversation loop. These things are still primitive as hell. I still have to "compact my context" every N tokens, and "thinking" is just repeating the same conversational chain over and over and jamming words in.

Turns out this is useful stuff. In some domains.

It ain't SkyNet.

I don't know if Anthropic is truly high on their own supply, or just taking us all for fools so that they can pilfer investor money and push for regulatory capture.

There's also a bad trait among engineers, deeply reinforced by survivorship bias, of assuming that every technological trend follows Moore's law and exponential growth. But that applie[s|d] to transistors, not everything.

I see no evidence that LLMs + exponential growth in parameters + context windows = SkyNet or any other kind of independent consciousness.
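For anyone curious what "managing context and the agentic conversation loop" means concretely, here is a minimal sketch in Python. Everything in it is illustrative: `call_model` is a hypothetical stand-in for a real LLM API, and the token counting and compaction are deliberately naive.

```python
# Minimal sketch of an agentic conversation loop with context compaction.
# `call_model` is a hypothetical stand-in for a real LLM API call.

MAX_CONTEXT_TOKENS = 100  # tiny budget so compaction actually triggers


def count_tokens(messages):
    # Crude proxy: whitespace-separated words stand in for tokens.
    return sum(len(m["content"].split()) for m in messages)


def call_model(messages):
    # Stand-in for the real model call; echoes a canned reply.
    return "ack: " + messages[-1]["content"][:20]


def compact(messages):
    # "Compacting context": collapse everything but the last exchange
    # into a single summary message, as agent tools do every N tokens.
    summary = f"[summary of {len(messages) - 2} earlier messages]"
    return [{"role": "system", "content": summary}] + messages[-2:]


def agent_loop(user_inputs):
    messages = []
    for text in user_inputs:
        messages.append({"role": "user", "content": text})
        if count_tokens(messages) > MAX_CONTEXT_TOKENS:
            messages = compact(messages)
        reply = call_model(messages)
        messages.append({"role": "assistant", "content": reply})
    return messages
```

A real agent would call a provider's API, count actual tokens, and have the model write the summary itself, but the shape of the loop (append, check the budget, compact, call) is the same -- and, as the comment says, there is nothing SkyNet-like hiding in it.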

sonusario · 14 days ago
I wonder if it stems from any of the "AI uprising" stories where humanity is viewed as the cancer to be eradicated.

amelius · 14 days ago
> Their core argument is that if we have guardrails that others don't, they would be left behind in controlling the technology, and they are the "responsible" ones.

Reminds me of:

https://en.wikipedia.org/wiki/Paradox_of_tolerance

which has the same kind of shitty conclusion.

skeptic_ai · 14 days ago
OpenAI never open-sourced anything relevant, or in time. Internal email leaks show they only cared about becoming billionaires.

Claude only talks about safety, but never released anything open source.

All this said, I'm surprised China actually delivered so many open source alternatives. Which are decent.

Why didn't Westerners (who are supposed to be the good guys) release anything open source to help humanity? And why always claim they can't release because of safety, and then give unlimited AI to the military? Just bullshit.

Let's all be honest and just say you only care about the money, and whoever pays, you take.

They are businesses, after all, so their goal is to make money. But please don't claim you want to save the world or help humans. You just want to get rich at others' expense. Which is totally fair: you make a good product and you sell it.

oatmeal1 · 14 days ago
90% of the people cancer kills are over 50. Old people who start believing everything they see on Facebook, but continue voting, with even greater confidence in their opinions. Old people who voted in Trump. Curing cancer would be just about the worst thing AI could do.

heftykoo · 16 days ago
Ah, the classic AI startup lifecycle:

1. We must build a moat to save humanity from AI.

2. Please regulate our open-source competitors for safety.

3. Actually, safety doesn't scale well for our Q3 revenue targets.

lebovic · 16 days ago
I used to work at Anthropic. I fully believe that the folks mentioned in the article, like Jared Kaplan, are well-intentioned and concerned about the relationship between safety research and frontier capabilities – not purely profit.

That said, I'm not thrilled about this. I joined Anthropic with the impression that the responsible scaling policy was a binding pre-commitment for exactly this scenario: they wouldn't set aside building adequate safeguards for training and deployment, regardless of the pressures.

This pledge was one of many signals that Anthropic was the "least likely to do something horrible" of the big labs, and that's why I joined. Over time, the signal from those values has weakened; they've sacrificed a lot to get and keep a seat at the table.

Situations where principled decisions would risk their position at the frontier seem like they'll become even more common. I hope they're willing to risk losing their seat at the table in order to be guided by their values.

drzaiusx11 · 15 days ago
Public benefit corporations in the AI space have become a farce at this point. They're just regular corporations wearing a different hat, driven by the same money dynamics as any other corp. They have no ability to balance their stated "mission" with their drive for profit. When being "evil" is profitable and not-evil is not, guess which road they'll take...

bbatsell · 16 days ago
This headline unfortunately offers more smoke than light. This article has nothing to do with the current tête-à-tête with the Pentagon. It is discussing one specific change to Anthropic's "Responsible Scaling Policy" that the company publicly released today as version "3.0".

sfink · 16 days ago
I guess this is Anthropic's DRM moment. (Mozilla resisted allowing Firefox to play DRM-limited media for a long time, until it finally had to give in to stay relevant.)

I don't know enough to evaluate this or other decisions. I'm just glad someone is trying to care, because the default in today's world is to aggressively reject the larger picture in favor of more more more. I don't know how effective Anthropic's attempts to maintain some level of responsibility can be, but they've at least convinced me that they're trying. In the same way that OpenAI, for example, have largely convinced me that they're not. (Neither of those evaluations is absolute; OpenAI could be much worse than it is.)

Rapzid · 16 days ago
How is this article not going to even mention the recent threats to Anthropic from the Government?!
