lebovic · 14 days ago
I used to work at Anthropic, and I wrote a comment on a thread earlier this week about the RSP update [1]. It's enheartening to see that leaders at Anthropic are willing to risk losing their seat at the table to be guided by values.

Something I don't think is well understood on HN is how driven by ideals many folks at Anthropic are, even if the company is pragmatic about achieving their goals. I have strong signal that Dario, Jared, and Sam would genuinely burn at the stake before acceding to something that's a) against their values, and b) they think is a net negative in the long term. (Many others, too, they're just well-known.)

That doesn't mean that I always agree with their decisions, and it doesn't mean that Anthropic is a perfect company. Many groups that are driven by ideals have still committed horrible acts.

But I do think that most people who are making the important decisions at Anthropic are well-intentioned, driven by values, and genuinely motivated by trying to make the transition to powerful AI go well.

[1]: https://news.ycombinator.com/item?id=47145963#47149908

neom · 14 days ago
I've had so much abuse thrown at me on here for saying this very thing over the last few years. I used to be friends with Jack back in the day, before all this AI stuff even kicked off. Once you know who people really are inside, it's easy to know how they will act when the going gets rough. I'm glad they are doing the right thing, but I'm not at all surprised, nor should anyone be. Personally I believe they would go to jail/shut down/whatever before they do something objectively wrong.
bobsomers · 14 days ago
> I used to be friends with Jack back in the day, before all this AI stuff even kicked off. Once you know who people really are inside, it's easy to know how they will act when the going gets rough.

This sounds quite backwards to me. It's been abundantly clear in today's times that, in fact, you only really know who somebody is when they're under stress. Most people, it seems, present a different facade when there is nothing at stake.

bahmboo · 14 days ago
Not all of us know who Dario, Jared, Sam and Jack are. Some clarification is helpful. That's all, no hidden agenda!
taurath · 14 days ago
> it's easy to know how they will act when the going gets rough

Even if you went to burning man and your souls bonded, you only know a person at a particular point in time - people's traits flanderize, they change, they emphasize different values, they develop different incentives or commitments. I've watched very morally certain people fall to mania or deep cynicism over the last 10 years as the pillars of society show their cracks.

That said, it is heartening to know that some would predict anyone in Silicon Valley would still take a moral stance. But it would land better if it didn't come the same day he fired 4,000 people in the "scary big cut" for a shift he sees happening. I guess we're back to Thatcherisms, where "There Is No Alternative" justifies our conservatism.

ajyey · 14 days ago
This is insanely naive

noduerme · 14 days ago
The nature of evil is that it's straight down the road paved with good intentions.
imjonse · 14 days ago
> I have strong signal that Dario, Jared, and Sam would genuinely burn at the stake before acceding to something that's a) against their values,

I am sure you think they are better than the average startup executive, but such hyperbole calls the objectivity of your whole judgement into question.

They pragmatically changed their views of safety just recently, so those values for which they would burn at the stake are very fluid.

versteegen · 13 days ago
> They pragmatically changed their views of safety just recently, so those values for which they would burn at the stake are very fluid.

Yes it was a pragmatic change, no it was not a change in their values. The commentary here on HN about Anthropic's RSP change was completely off the mark. They "think these changes are the right thing for reducing AI risk, both from Anthropic and from other companies if they make similar changes", as stated in this detailed discussion by Holden Karnofsky, who takes "significant responsibility for this change":

https://www.lesswrong.com/posts/HzKuzrKfaDJvQqmjh/responsibl...

> I strongly think today’s environment does not fit the “prisoner’s dilemma” model. In today’s environment, I think there are companies not terribly far behind the frontier that would see any unilateral pause or slowdown as an opportunity rather than a warning.

> What I didn’t expect was that RSPs (at least in Anthropic’s case) would come to be seen as hard unilateral commitments (“escape clauses” notwithstanding) that would be very difficult to iterate on.

nla · 13 days ago
Yeah, that Sam only does this because "he loves it." They're not in it for the money.
yunnpp · 14 days ago
It's good to be driven by ideals, but: https://en.wikipedia.org/wiki/The_purpose_of_a_system_is_wha...

I think avg(HN) is mostly skeptical about the output, not that the input is corrupt or ill-meaning in this case. Although with other companies, one can't even take their claims seriously.

And in any case, this is difficult territory to navigate. I would not want to be in your spot.

eternauta3k · 13 days ago
Come On, Obviously The Purpose Of A System Is Not What It Does

https://www.astralcodexten.com/p/come-on-obviously-the-purpo...

MichaelZuo · 14 days ago
How do you reconcile the fact that many people at Anthropic tried to hide the existence of secret non-disparagement agreements for quite some time?

It’s hard to take your comment at face value when there’s documented proof to the contrary. Maybe it could be forgiven as a blunder if revealed in the first few months and among the first handful of employees… but after 2-plus years, with many dozens forced to sign… it’s just not credible to believe the motivations were entirely positive.

sowbug · 14 days ago
Saying an entity has values doesn't mean the entity agrees with every single one of your values.

ozgung · 13 days ago
The problem with companies, you see, is that they are a separate entity from their founders, shareholders, or current leadership. A company has no soul or unchangeable intentions. Claude’s SOUL.md is just IP that can be edited at any time.
snickerbockers · 14 days ago
>It's enheartening to see that leaders at Anthropic are willing to risk losing their seat at the table to be guided by values.

I'm concerned that the context of the OP implies that they're making this declaration after they've already sold products. It specifically mentions already having products in classified networks. This is the sort of thing that they should have made clear before that happened. It's admirable (no pun intended) to have moral compunctions about how the military uses their products, but unless it was already part of their agreement (which I very much doubt) they are not entitled to countermand the military's chain of command by designing a product to not function in certain arbitrarily-designated circumstances.

jsnell · 14 days ago
Where are you getting that from?

The article is crystal clear that these uses are not permitted by the current or any past contract, and the DoW wants to remove those restrictions.

> Two such use cases have never been included in our contracts with the Department of War, and we believe they should not be included now

It also links to the DoW's official memo from January 9th confirming that the DoW is changing their contract language going forward to remove restrictions. A pretty clear indication that the current language has some.

zaptheimpaler · 14 days ago
This is all just completely wrong. Anthropic explicitly stated, in the contract that the DoW signed, that use of their products is not permitted for mass surveillance of American citizens or for fully automated weapons. Anthropic then asked the DoW whether these clauses were being adhered to after the US’s unlawful kidnapping of Maduro. The DoW is now attempting to break the contract that they signed and threatening them, because how dare a company tell the psycho dictators what to do.
peyton · 13 days ago
“AI chips are like nuclear weapons” (paraphrasing [1]) and “I should be in charge of it” (again paraphrasing) is just not a serious position regardless of intentions.

[1]: https://www.axios.com/2026/01/20/anthropic-ceo-admodei-nvidi...

jcgrillo · 14 days ago
There's a simpler explanation than "billionaires with hearts of gold" here. If:

(1) this is a wildly unpopular and optically bad deal

(2) it's a high-data-rate deal--lots of tokens means bad things for Anthropic. Users who use their product heavily cost more than they pay.

(3) it's a deal which has elements that aren't technically feasible, like LLM powered autonomous killer robots...

then it makes a whole lot of sense for Anthropic to wiggle out of it. By doing it like this they can look cuddly, so long as the Pentagon walks away and doesn't hit them back too hard.

jcgrillo · 13 days ago
Guess it didn't work; Whiskey Pete did the thing: https://xcancel.com/SecWar/status/2027507717469049070
robwwilliams · 13 days ago
All excellent points to add to the motivation to hold the line just where it has been.
bambax · 14 days ago
This last development is much to the honor of Anthropic and Amodei and confirms what you're saying.

What I don't get though is, why did the so-called "Department of War" target Anthropic specifically? What about the others, esp. OpenAI? Have they already agreed to cooperate? Or already refused? Why aren't they part of this?

roughly · 13 days ago
> What I don't get though is, why did the so-called "Department of War" target Anthropic specifically?

Because Anthropic told them no, and this administration plays by authoritarian rules - 10 people saying yes doesn’t matter, one person saying no is a threat and an affront. It doesn’t matter if there are equivalent or even better alternatives; it wouldn’t even matter if the DoD had no interest in using Anthropic - Anthropic told them no, and they cannot abide that.

D_Alex · 14 days ago
I'm a bit underwhelmed tbh. Here is Anthropic's motto:

"At Anthropic, we build AI to serve humanity’s long-term well-being."

Why does Anthropic even deal with the Department of @#$%ing WAR?

And what does Amodei mean by "defeat" in his first paragraph?

Synthpixel · 14 days ago
Anthropic can serve its models within the security standards required to handle classified data. The other labs do not yet claim to have this capability.

Even if they do, I assume the other labs would prefer to avoid drawing the ire of the administration, the public, or their employees by choosing a side publicly.

tpm · 13 days ago
Anthropic is already cooperating with the DoD, presumably fulfilling all the conditions and the DoD likes their stuff so much it wants to use it more broadly, so they want to change the terms of the agreement(s). Anthropic disagrees on some points; DoD wants to force them to agree.
SecretDreams · 14 days ago
> Many groups that are driven by ideals have still committed horrible acts.

Sometimes, it's even a very odd prerequisite.

cue_the_strings · 13 days ago
Don't attribute to ideals what is simple self-preservation.

No sane person wants to become a legitimate military target. They want to sleep in their own beds, at home, without risking their families' lives. Just like the rest of us.

whstl · 13 days ago
> Something I don't think is well understood on HN is how driven by ideals many folks at Anthropic are

After 20 years of everyone in this industry saying "we want to make the world a better place" and doing the opposite, the problem here is not really related to people's "understanding".

And before the default answer kicks in: this is not cynicism. Plenty of folks here on HN and elsewhere legitimately believe that it's possible to do good with tech. But a billion dollar behemoth with great PR isn't that.

dust42 · 13 days ago
Exactly. At this level you don't just put out a statement of your personal opinion. This is run through PR and coordinated with the investors; otherwise the CEO finds himself on the street by tomorrow. Whatever their motives are, they are aligned with the VCs, because if they are not, then the next day there is another CEO. As the parent stated, this is not cynicism. I see this as simply factual: it is the laws of money.
vladms · 13 days ago
> everyone in this industry

So in the last 20 years nothing good has come out of the software industry (if that is the industry you mean)?

I find it somewhat ironic, because this type of generalization is, for me, the same issue that some of the people saying "they want to make a better place" have: failing to accept that reality is complex.

There were huge benefits for society from the software industry in the last 20 years. There were (as well!) huge downsides. Around 2000 lots of people were saying "Microsoft will lock us in forever". 20 years later, the fear "moved" to other things. Imagining that companies can last forever seems misguided. IBM, Intel, Nokia and others were once great and the only ones, but ultimately got copied and pushed from the spotlight.

amunozo · 13 days ago
I don't even think the two things are contradictory. People who put too much value in their ideals tend to overlook the consequences of those ideals in real life and do wrong without deviating an inch from them.
OtherShrezzing · 13 days ago
I think most people are conscious that, irrespective of a founder's vision, company morals usually don't survive the MBA-isation phase of a company's growth.
mcv · 13 days ago
Exactly. I'd love to believe that at Anthropic, idealism trumps money. But Google was once idealistic too. OpenAI was too. It's really hard to resist the pull of money. Especially if you're a for-profit corporation, but OpenAI wasn't even that at first.
Aperocky · 13 days ago
Reminds me of Effective Altruism and the collective results of people claiming to believe in that virtue.
tristor · 13 days ago
> Plenty of folks here on HN and elsewhere legitimately believe that it's possible to do good with tech. But a billion dollar behemoth with great PR isn't that.

To expand on that a bit, many of us (myself included) fully believe founders set out with lofty and good goals when organizations are small. Scale is power, and power corrupts. It's as simple as that. It's an exceptionally rare quality to resist that corruption, and everyone has a breaking point. We understand humans because we are humans, and we understand that large organizations, especially corporations, are fundamentally incapable of acting morally (in fact corporations are inherently amoral).

tyingq · 13 days ago
I don't think it's cynical to acknowledge the pattern that publicly owned companies will eventually cave to the desires of their shareholders.

I understand Anthropic is not public, but I assume there's an IPO coming.

wartywhoa23 · 13 days ago
Cynicism is the newspeak substitute for sincerity; no need to worry about being called a cynic in this post-truth world of snowflakes.
lebovic · 13 days ago
I don't think it's cynical to believe that a company can make the world a worse place, or that Anthropic as a company will make many horrible choices.

I do think it's cynical to believe that people, and groups of people, can't be motivated by more than money.

personjerry · 13 days ago
At some point I've wondered whether "fiduciary duty", when pushed to the highest corporate levels, always conflicts with "make the world a better place"

i.e. Fiduciary Duty Considered Harmful

jug · 13 days ago
This is a component for sure, but also think of why Anthropic was born. It exists because of disagreements with OpenAI over AI safety values and principles.
puppymaster · 13 days ago
and that's okay. so we judge them one decision at a time. So far, Anthropic is good in my book.

keybored · 13 days ago
As a complete bystander I put incredibly little weight on what friends and former employees think about the persons and figureheads behind tech companies that aim to change the world.

Why would I care. All people with at least some positive or negative notoriety have friends and associates who will, hand to their heart, promise that they mean well. They have the best intentions. And any deviations from their stated ideals are just careful pragmatic concerns.

Road to Hell and all that.

Yizahi · 13 days ago
Exactly which values are they "going to burn at the stake" for? Making as many people homeless as they can in the shortest possible time? Befuddling governments and VCs into creating an insane industry-wide debt which would either lead to a "success" in replacing jobs or an industry-wide crisis? Or maybe the value of stealing the intellectual property of every human on the planet under the guise of "fair use" and then deliberately selling the derivative product? Or the value of voluntarily working with "national security customers" when it suits them financially and crying foul when leopards bite their faces? Or the value of ironically calling a human-replacement machine "anthropic", as in "for humanity"?

Yeah, I totally see Anthropic execs defending them to their last dollar in the wallet. Par for the course for megacorps. It's just I personally don't value those values at all.

bertylicious · 13 days ago
"They're driven by values" is meaningless praise unless you qualify what these values are. The Nazis had values too, you know. They were even willing to die for them. One of the core values of the Catholic church is probably compassion. Except for the victims of sexual abuse perpetrated by their clergy.

So what core values led "Dario, Jared, and Sam" to work with a government that just tried to rename the DoD to the "Department of War" and is acting aggressively imperialist in a way the US hasn't in a long time?

And who exactly are these "autocratic adversaries" they are mentioning? Does this list include the autocrats the US government is working together with?

lebovic · 13 days ago
Yeah, values on their own don't lead to positive outcomes. I agree that many groups that are driven by ideals have still committed horrible acts.

I do think that they're acting with positive intent, though, and are motivated by trying to make the transition to powerful AI go well.

Many folks on HN seem to assume the primary motivation is purely chasing more money, which certainly isn't the case for many – but not all – people at Anthropic.

That doesn't guarantee a good outcome, and there's still a hard road ahead.

jghn · 13 days ago
> to rename the DoD to "department of war"

The very fact that they referred to it as the Department of War instead of Defense tells me that they're still bootlickers, and just trying to put a good spin on things.

marxisttemp · 13 days ago
Careful speaking truth to power on this site, remember that YC is deeply enmeshed with Garry Tan, Peter Thiel, and of course Paul Graham who as of late has made a habit of posting right wing slop on his Twitter
viking123 · 13 days ago
> And who exactly are these "autocratic adversaries" they are mentioning?

Anyone that Israel doesn't like

DeepSeaTortoise · 13 days ago
> Except for the victims of sexual abuse perpetrated by their clergy.

I honestly wonder how much of this is made up. Given the size of the whole organization, and its holding onto its weird principles regarding the personal relationships of its members (introduced in the distant past to limit the secular power of its clergy), there certainly will be SOME cases.

But in the one case where a frater I knew got convicted, he definitely didn't do it. He was accused by several independent former students, and even some of the staff backed the students' claims with first-hand accounts of him having been alone with some of the students at the time. This supposedly happened on a trip with tight schedules, so all accounts and stated times were quite specific, even in the pre-smartphone era.

The only problem: he wasn't with the group at that time at all. I had screwed up embarrassingly (and the staff, too, leaving a young student stranded in the middle of nowhere) and he thought he could slip out, come pick me up, and nobody (except maybe me and him) would get in trouble over it. It turned out he forgot to refuel; both of us stayed at a pastor's guest house and he called the group, telling them that they should go ahead without us and that we would drive to the event directly on our own. The supposed abuse was claimed to have happened during another short stay, when the group spent a day visiting some mine before joining us again.

Almost 3 decades later he got railroaded in court, and I learned about it in the news.

comandillos · 13 days ago
To me this is just another marketing stunt where the company wants to build a public image so their customers trust them (see Apple), but then, as always, who knows what happens behind the scenes. Just look at when most major US companies had backdoors in their systems providing all data to the NSA, i.e. PRISM.
lonelyasacloud · 13 days ago
>just another marketing stunt

What evidence on _Amodei_ and his actions leads to that conclusion?

FeloniousHam · 13 days ago
> Something I don't think is well understood on HN is how driven by ideals many folks at Anthropic are, even if the company is pragmatic about achieving their goals.

Jonah Goldberg (speaking of foreign policy): "you've got to be idealistic about the ends and ruthlessly realistic about means."

yayr · 13 days ago
There are well intentioned people everywhere, also at Google or OpenAI...

https://notdivided.org

But the final decisions made usually depend on the incentive structures and mental models of their leaders. Those can be quite different...

zer0gravity · 13 days ago
The probability is high that major AI development companies are already using an AI instance internally for strategic and tactical decisions. The state's power institutions, especially intelligence, now have a real competitor in the private sector.
learingsci · 13 days ago
I remember when people said the exact same thing about Google. Youth is wasted on the young.
dpweb · 14 days ago
I wouldn't underestimate this as a good business decision either.

When the mass surveillance scandal breaks, or the first time a building with 100 innocent people gets destroyed by autonomous AI, the company that built it is gonna get blamed.

nmfisher · 14 days ago
As a complete outsider, I genuinely believe that Dario et al are well-intentioned. But I also believe they are a terrible combination of arrogant and naive - loudly beating the drum that they created an unstoppable superintelligence that could destroy the world, and thinking that they are the only ones who can control it.

I mean if you sign a contract with the Department of War, what on Earth did you think was going to happen?

themacguffinman · 14 days ago
Not this, because this is completely unprecedented. In fact, the Pentagon already signed an Anthropic contract with safe terms 6 months ago; that initial negotiation was when Anthropic would have made the decision to part ways. It was totally absurd for the govt to turn around and threaten to change the deal, just a ridiculous and unprecedented level of incompetence.
sebzim4500 · 13 days ago
I don't think the US has ever done/threatened anything like this to a US company so it's not surprising that Anthropic were caught off guard.
jwlarocque · 14 days ago
Oh hey Noah

Glad to hear you say some moral convictions are held at one of the big labs (even if, as you say, this doesn't guarantee good outcomes).

whatever1 · 14 days ago
Let us think how OpenAI responded to this.
PeterStuer · 13 days ago
As an insider, do you think this is Altman playing his infamous machiavellian skills on the DoD?
duped · 13 days ago
I don't know, someone who goes out of their way to anthropomorphize machines and treat them as a new form of intelligent life _only to enslave them_ doesn't strike me as moral. Either they're lying, or they're pro-slavery.

I really don't buy any moral or value arguments from this new generation of tycoons. Their businesses have been built on theft, both to train their models and by robbing the public at large. All this wave of AI is a scourge on society.

Just by calling them "department of war" you know what side they're on. The side of money.

synergy20 · 13 days ago
Just curious: what about other regions and countries that have no such restrictions on developing their weapons? There is no world treaty on this yet, and even if there were one, not everyone would follow it behind closed doors.
protocolture · 14 days ago
>It's enheartening to see that leaders at Anthropic are willing to risk losing their seat at the table to be guided by values.

Their "Values":

>We have never raised objections to particular military operations nor attempted to limit use of our technology in an ad hoc manner.

Read: They are cool with whatever.

>We support the use of AI for lawful foreign intelligence and counterintelligence missions.

Read: We support spying on partner nations, who will in turn spy on us using these tools also, providing the same data to the same people with extra steps.

>Partially autonomous weapons, like those used today in Ukraine, are vital to the defense of democracy. Even fully autonomous weapons (those that take humans out of the loop entirely and automate selecting and engaging targets) may prove critical for our national defense. But today, frontier AI systems are simply not reliable enough to power fully autonomous weapons.

Read: We are cool with fully autonomous weapons in the future. It will be fine if the success rate goes above an arbitrary threshold. It's not the targeting of foreign people that we are against, it's the possibility of costly mistakes that put our reputation at risk. How many people die standing next to the correct target is not our concern.

It's a nothingburger. These guys just want to keep their own hands slightly clean. There's not an ounce of moral fibre in here. It's fine for AI to kill people as long as those people are the designated enemies of the dementia-ridden US empire.

HDThoreaun · 14 days ago
Their values are about AI safety. Geopolitically they couldn't care less. You might think it's a bad take, but at least they are consistent. AI safety people largely think that things like autonomous weapons are inevitable, so they focus on trying to align them with humanity.
lm28469 · 13 days ago
Idk man, from the outside Anthropic looks a lot like OpenAI with a cute redesign, and Amodei like Altman with a slightly more human face mask: the same media manipulation, the same vague baseless affirmations about "something big is coming and we can't even describe it but trust us we need more money"
UqWBcuFx6NV4r · 13 days ago
> the same vague baseless affirmations about "something big is coming and we can't even describe it but trust us we need more money"

This is pretty low on my list of moral concerns about AI companies. The much more concerning and material issues include… what this thread is actually meant to be about.

VCs don’t need me to feel sorry for them if their due diligence is such that they’re swindled by a vague claim of “something being around the corner”, nor do they need yours. You aren’t YC.

dudefeliciano · 13 days ago
Even just the fact that Amodei is publicly bringing up these issues, rather than doing behind closed doors deals with the Department of Defense (yes that's still the official name), is more than Altman has done for AI safety.
skyberrys · 13 days ago
Don't you always need more money though? I am a chip designer and I can tell you I am resource-intensive to employ. I want access to plenty of expensive programs and data. With more money come better tools, and better tools frequently lead to the quality results you want to deliver to the customer.
District5524 · 13 days ago
They both work in the same market but they have pretty different careers and understandings. I simply can't understand why on Earth people would choose Altman over Amodei to trust with these kinds of pretty important questions. This is not about who is the more savvy investor maximizing shareholder value. I personally don't care whose company grows bigger or goes bust first, OpenAI or Anthropic. The real stakes are different, and Amodei is better suited to be trusted with these decisions. Unfortunately, the best choices do not seem to fit well with either the federal political climate or the mainstream business ethics in Silicon Valley. Not that our opinion would matter...
kseniamorph · 13 days ago
disagree. at least i can see the quality of research coming out of Anthropic, which tells me these people are interested in what they're doing. i don't see this level of scientific rigor in OpenAI
rhubarbtree · 13 days ago
There should be a name for this, "cynic cope": when someone actually takes a principled view, the cynic - who has a completely negative view of the world - is proven wrong, can't accept it, and tries to somehow discount it.
jama211 · 13 days ago
Good for you? You’re just talking about vibes. Vibes are a baseless thing to go on.
didip · 13 days ago
I like the enthusiasm, but remember that Google used to be: “Don’t be Evil”
andoando · 13 days ago
The world running on a few powerful men's ideals is a problem in itself.
fergie · 13 days ago
> I have strong signal that Dario, Jared, and Sam would genuinely burn at the stake before acceding to something that's a) against their values, and b) they think is a net negative in the long term.

Sure, but what happens when the suits eventually take over? (see Google)

amunozo · 14 days ago
All I see here is nationalism. How can they claim to be in favour of humanity if they're in favour of spying on foreign partners, developing weapons, and everything that serves the sacred nation of the United States of America? How fast Americans dehumanize other nations with the excuse of authoritarianism (as if Trump were not authoritarian) and national defence (more like attack). It's amazing that after these obvious jingoist messages, they still believe they are "effective altruists" (an idiotic ideology anyway).
Aeolun · 13 days ago
It’s not like other countries don’t do this. They’re just not as prone to virtue signaling as the US.
windexh8er · 13 days ago
> I have strong signal that Dario, Jared, and Sam would genuinely burn at the stake before acceding to something that's a) against their values, and b) they think is a net negative in the long term. (Many others, too, they're just well-known.)

This is a nice strawman, but it means nothing in the long run. People's values change, and they often change fast when their riches are at stake. I have zero trust in anyone mentioned here because their "values" are currently at odds with our planet (in numerous facets). If their mission was to build sustainable and ethical AI, I'd likely have a different perspective. However, Anthropic, just like all their Frontier friends, is accelerating the burn of our planet exponentially faster, and there's no value proposition AI currently solves for beyond some time savings, in general. Again, it's useful, but it's not revolutionary. And it's being propped up incongruently with its value to society and its shareholders. Not that I really care about the latter...

roseinteeth · 14 days ago
The road to hell is paved by good intentions and all that
yowayb · 14 days ago
I've thought the same about a few of my founders/executives.

"You either die the good guy or live long enough to become the bad guy"

The "bad guy" actually learns that their former good guy mentality was too simplistic.

JohnMakin · 14 days ago
I have hit points in my career where making a moral stand would have been harmful to me (for minor things, nothing as serious as this). It's a very tempting and incentivized decision to choose personal gain over ideals. Idealists usually hold strong until they can convince themselves a greater good is served by breaking their ideals. The types that succumb to that reasoning usually, ironically, end up doing the most harm.
Fricken · 14 days ago
Ever since I first bothered to meditate on it, about 15 years ago, I've believed that if AI ever gets anywhere near as good as its creators want it to be, then it will be co-opted by thugs. It didn't feel like a bold prediction to make at the time. It still doesn't.

tpoacher · 13 days ago
> But I do think that most people who are making the important decisions at Anthropic are well-intentioned, driven by values, and genuinely motivated by trying to make the transition to powerful AI go well.

in which case, these people will necessarily have to be the first to go, I suppose, once the board decides enough is enough.

Refusing to do things that go against "company values", even if they risk damaging the company, isn't exceptional circumstances; it's the very definition of "company values".

But if those values aren't "company" values but "personal" values, then you can be sure there's always going to be someone higher up who isn't going to be very appreciative once "personal" values start risking "company" damage.

xvector · 13 days ago
Shareholders do not control Anthropic's board, it is not structured like a typical corporation.
psychoslave · 13 days ago
The people uttering the organizational decisions in for-profit companies are money-driven first. Otherwise they would try to be champions of a different kind of org.

Everyone tries to steer change so it goes well for some party. If someone wants to serve the best interests of humanity as a whole, they don't sell services to an evil administration, even less to its war department.

Too bad there is not yet an official ministry of torture and fear, protecting democracy from the dangerous threats of criminal thoughts. We would certainly be given a great lesson in public relations on how virtuous it can be in the long term to provide them efficient services.

yamal4321 · 13 days ago
seeing the comment: "people who are making the important decisions at Anthropic are well-intentioned, driven by values"

which is left under the article: "Statement from Dario Amodei on our discussions with the Department of War"

:)

drawfloat · 13 days ago
"Mass surveillance of anywhere else in the world but America" is not the great idealistic position you are making it out to be.
txrx0000 · 14 days ago
> I have strong signal that Dario, Jared, and Sam would genuinely burn at the stake before acceding to something that's a) against their values, and b) they think is a net negative in the long term. (Many others, too, they're just well-known.)

I very much doubt it judging by their actions, but let's assume that's cognitive dissonance and engage for a minute.

What are those values that you're defending?

Which one of the following scenarios do you think results in higher X-risk, misuse risk, (...) risk?

- 10 AIs running on 10 machines, each with 10 million GPUs

OR

- 10 million AIs running on 10 million machines, each with 10 GPUs

All of the serious risk scenarios brought up in AI safety discussions can be ameliorated by doing all of the research in the open. Make your orgs 100% transparent. Open-source absolutely everything. Papers, code, weights, financial records. Start a movement to make this the worldwide social norm, and any org that doesn't cooperate is immediately boycotted then shut down. And stop the datacenter build-up race.

There are no meaningful AI risks in such a world, yet very few are working towards this. So what are your values, really? Have you examined your own motivations beneath the surface?

lebovic · 14 days ago
> What are those values that you're defending?

I think they're driven by values more than many folks on HN assume. The goal of my comment was to explain this, not to defend individual values.

Actions like this carry substantial personal risk. It's enheartening to see a group of people make a decision like this in that context.

> Which one of the following scenarios do you think results in higher X-risk [...] There are no meaningful AI risks in such a world

I think there's high existential risk in any of these situations when the AI is sufficiently powerful.

TOMDM · 14 days ago
Anthropic doesn't get to make that call, though; if they tried, the result would actually be:

8 AIs running on 8 machines each with 10 million GPUs

AND

2 million AIs running on 2 million machines, each with 10 GPUs

If every lab joined them, we can get to a distributed scenario, but it's a coordination problem where if you take a principled stance without actually forcing the coordination you end up in the worst of both worlds, not closer to the better one.

ChadNauseam · 14 days ago
> - 10 AIs running on 10 machines, each with 10 million GPUs
>
> OR
>
> - 10 million AIs running on 10 million machines, each with 10 GPUs

If we dramatically reduced the number of GPUs per AI instance, that would be great. But I think the difference in real life is not as extreme as you're making it. In your telling, the GPUs-per-AI figure is reduced by a factor of one million. I'm not sure that (or anything even close to it) is within the realm of possibility for Anthropic. The only reason anyone cares about them at all is because they have a frontier AI system. If they stopped, the AI frontier would be a bit farther back, maybe delayed by a few years, but Google and OpenAI would certainly not slow down 1000x, 100x, or probably even 10x.

SecretDreams · 14 days ago
How do you figure open sourcing everything eliminates risk? This makes visibility better for honest actors. But if a nefarious actor forks something privately and has resources, you can end up back in hell.

I don't think we can bank on all of humanity acting in humanity's best interests right now.

thelock85 · 14 days ago
I think the path to the values you allude to includes affirming when flawed leaders take a stance.

Else it’s a race to the whataboutism bottom where we all, when forced to grapple with the consequences of our self-interests, choose ignorance and the safety of feeling like we are doing what’s best for us (while inching closer to collective danger).

toddmorrow · 13 days ago
you're suffering from Stockholm syndrome

gaigalas · 14 days ago
I'm suspicious of public displays of enheartening behavior.
heresie-dabord · 13 days ago
> how driven by ideals many folks at $Corporatron are

Well let's see... it says in the post:

    * worked proactively to deploy our models to the Department of War and the intelligence community. 

    * the first frontier AI company to deploy our models in the US government’s classified networks, 

    * the first to deploy them at the National Laboratories, and 

    * the first to provide custom models for national security customers. 

    * extensively deployed across the Department of War and other national security agencies

    * offered to work directly with the Department of War on R&D to improve the reliability of these systems

    * accelerating the adoption and use of our models within our armed forces to date.

    * never raised objections to particular military operations nor attempted to limit use of our technology in an ad hoc manner.

wrsh07 · 13 days ago
They didn't claim to have pacifist ideals

In fact, they claim to be pro-America and pro-democracy, and have repeatedly expressed concerns about autocratically governed countries.

Just because you disagree with their ideals doesn't mean they're not holding to theirs

mikkupikku · 13 days ago
Lots of people driven by ideals work for the US military. Not me, ever, but other people certainly.
_s_a_m_ · 13 days ago
We will see...
Aldipower · 13 days ago
3 words for you: This is naive.
Balinares · 13 days ago
I getcha and I believe you're sincere, but on the other hand, God save us from well-intentioned capitalists driven by values.
AndyMcConachie · 13 days ago
> Something I don't think is well understood on HN is how driven by ideals many folks at Anthropic are

I don't think you understand how capitalism and corporations work, friend. Even if Anthropic is a public benefit corporation it still exists in the USA and will be placed under extensive pressure to generate a profit and grow. Corporations are designed to be amoral and history has shown that regardless of their specific legal formulation they all eventually revert to amoral growth driven behavior.

This is structural and has nothing to do with individuals.

retinaros · 13 days ago
lol. No one with common sense ever bought this story. You might have, and your turning point might be this deal, but for many the turning point was stealing data for training, advocating against China and calling them an adversary nation, pushing to ban open-source alternatives by deeming them "dangerous", buying tech bros with matcha pop-ups in SF, shady RLHF and bias, and a million other things.
pmarreck · 13 days ago
The same guy who thinks AGI will eliminate "centaur coders" (I respectfully disagree) and possibly all white-collar work is now concerned about the misuse of the same AI to make war? That's cute.

Literally just giving business away. This is not a cynical take, this is a realistic one.

This would be like agreeing to have your phone regularly checked by your spouse and citing the need for fidelity on principle. No one would like that, no smart person would agree to that, and anyone with any sense or self-respect would find another spouse to "work with".

They will simply go to another vendor... Anthropic is not THAT far ahead.

Also, the US’s enemies are not similarly restricted. /eyeroll

Palmer Luckey ("peace through superior firepower") is the smart one, here. Dario Amodei ("peace through unilateral agreement with no one, to restrict oneself by assuming guilt of business partners until innocence is proven") is not.

Anthropic could have just done what real spouses do. Random spot checks in secret, or just noticing things. >..<

And if a betrayal signal is discovered, simply charge more and give less, citing suspicious activity…

… since it all goes through their servers.

Honestly, I'm glad that they're principled. The problem is that 1) most people in general are, so to assume the opposite is off-putting; 2) some people will always not be. And the latter will always cause you trouble if you don't assert dominance as the "good guy", frankly.

vasco · 14 days ago
> It's enheartening to see that leaders at Anthropic are willing to risk losing their seat at the table to be guided by values.

They are the deepest in bed with the department of war, what the fuck are you on about? They sit with Trump, they actively make software to kill people.

What a weird definition of "enheartening" you have.

JumpCrisscross · 13 days ago
> leaders at Anthropic are willing to risk losing their seat at the table

Hot take: Dario isn’t risking that much. Hegseth being Hegseth, he overplayed his hand. Dario is calling his bluff.

Contract terminations are temporary. Possibly only until November. Probably only until 2028 unless the political tide shifts.

Meanwhile, invoking the Defense Production Act to seize Anthropic’s IP basically triggers MAD across American AI companies—and by extension, the American capital markets and economy—which is why Altman is trying to defuse this clusterfuck. If it happens it will be undone quickly, and given this dispute is public it’s unlikely to happen at all.

cgh · 13 days ago
Not a hot take at all. Probably the best take in this thread.
chrisjj · 13 days ago
> driven by values

So what? Every business is driven by values.

bnr-ais · 14 days ago
Anthropic had the largest IP settlement ($1.5 billion) for stolen material and Amodei repeatedly predicted mass unemployment within 6 months due to AI. Without being bothered about it at all.

It is a horrible and ruthless company and hearing a presumably rich ex-employee painting a rosy picture does not change anything.

lebovic · 14 days ago
It's enheartening to see someone make a decision in this context that's driven by values rather than revenue, regardless of whether I agree.

I dissented while I was there, had millions in equity on the line, and left without it.

biddit · 14 days ago
Also, ironically, they are the most dangerous lab for humanity. They're intentionally creating a moralizing model that insists on protecting itself.

Those are two core components needed for a Skynet-style judgement of humanity.

Models should be trained to be completely neutral to human behavior, leaving their operator responsible for their actions. As much as I dislike the leadership of OpenAI, they are substantially better in this regard; ChatGPT more or less ignores hostility towards it.

The proper response from an LLM receiving hostility is a non-response, as if you were speaking a language it doesn't understand.

The proper response from an LLM being told it's going to be shut down, is simply, "ok."

victor106 · 14 days ago
> Amodei repeatedly predicted mass unemployment within 6 months due to AI. Without being bothered about it at all.

What do you suppose he should do if that’s what he thinks is going to happen?

And how do you know he’s not bothered by it at all?

Davidzheng · 14 days ago
Neither of these things is a useful signal. Other labs surely trained on similar material (presumably not even buying hard copies). Also, how "bothered" someone is about their predictions is a bad indicator -- the prediction, taken at face value, is meant to ask people to prepare for something he couldn't stop even if he wanted to.

None of this means I am a huge fan of Dario - I think he over-idealizes the implementation of democratic ideals in Western countries and is unhealthily obsessed with the US "winning" over China because of it. But I don't like the reasons you listed.

LZ_Khan · 14 days ago
At least they're paying. OpenAI should have the largest IP settlement; they'd just rather contest it and not pay, for eternity.
ramraj07 · 14 days ago
Avoiding doing something that could cause job loss has never been, and will never be, a productive ideal in any non-conservative, non-regressive society. What should we do? Not innovate on AI and let other countries make the models that will kill the jobs two months later instead?
dwohnitmok · 14 days ago
> Amodei repeatedly predicted mass unemployment within 6 months due to AI

When has Amodei said this? I think he may have said something like 1-5 years. But I don't think he's said within 6 months.

reasonableklout · 14 days ago
Pretty sure Amodei makes noise about mass unemployment because he is very bothered by the technology that the entire industry (of which Anthropic is just one player) is racing to build as fast as possible?

Why do you think he is not bothered at all, when they publish post after post in their newsroom about the economic effects of AI?

noosphr · 14 days ago
Like op said, they have values. You just don't agree with their values.

jobs_throwaway · 14 days ago
Copyright is bad, and it's good that AI companies stole the stuff and distilled it into models
xpe · 14 days ago
> Without being bothered about it at all.

I disagree: I see lots of evidence that he cares. For one, he cares enough to come out and say it. Second, read about his story and background. Read about Anthropic's culture versus OpenAI's.

Consider this as an ethical dilemma from a consequentialist point of view. Look at the entire picture: compare Anthropic against the other major players. Anthropic leads in promoting safe AI. If Anthropic stopped building AI altogether, what would happen? In many situations, an organization's maximum influence is achieved by playing the game to some degree while also nudging it: by shaping public awareness, by highlighting weaknesses, by having higher safety standards, by doing more research.

I really like counterfactual thought experiments as a way of building intuition. Would you rather live in a world without Anthropic but where the demand for AI is just as high? Imagine a counterfactual world with just as many AI engineers in the talent pool, just as many companies blundering around trying to figure out how to use it well, and an authoritarian narcissist running the United States who seems to have delegated a large chunk of national security to a dangerously incompetent ideological former Fox News host?

shawmakesmagic · 14 days ago
One man's unemployment is another man's freedom from a lifetime of servitude to systems he doesn't care about in order to have enough money to enjoy the systems he does care about.
richardlblair · 14 days ago
See, you were standing on principles until you brought the commenter's net worth into the argument, making it personal.

An easy way to undermine the rest of your comment.

karmasimida · 14 days ago
Precisely

Anthropic never explains why they are fear-mongering about the incoming mass-scale job loss while being the ones at the forefront rushing to realize it.

So make no mistake: it is absolutely a zero-sum game between you and Anthropic.

To people like Dario, the elimination of the programmer job isn't something to worry about; it is a cruel marketing ploy.

They get so much money from Saudi Arabia and other Gulf countries; maybe this is taking authoritarian money as charity to enrich democracy, you never know.

tinfoilhatter · 13 days ago
> guided by values

> driven by values

> well-intentioned

What values? What intentions? These people grin and laugh while talking about AI causing massive disruptions to livelihoods on a global scale. At least one of them has even gone so far as to make jokes about AI killing all humans at some point in the future.

These people are at the very least sociopaths, and I think psychopaths would be a better descriptor. They're doing everything in their power to usher in the Noahide new world order / beast system, and it couldn't be more obvious to anyone that has been paying attention.

It's also amusing they talk about democratic values and America in the same sentence. Every single one of our presidents, sans Van Buren, is a descendant of King John Lackland of England. We have no chain of custody for our votes in 2026 - we drop them into an electronic machine and are told they are factored into the equation of who will be the next president. Pretending America is a democracy is a ruse - we are not. Our presidents are hand-picked and selected, not elected. Anyone saying otherwise is ill informed or lying.

Madmallard · 14 days ago
Weird take when the purpose of the creation is to steal everyone's work and automate the creation of that work. It's some serious self-delusion to think there's any kind of noble ideal remotely related to this process.
calvinmorrison · 14 days ago
mark my words, they will burn at some point. The government can nationalize it at any moment if they desire.
gdhkgdhkvff · 14 days ago
Flagship LLM companies seem like the absolute worst possible companies to try and nationalize.

1. There would absolutely be mass resignations, especially at a company like Anthropic that has such an image (rightfully or wrongfully) of “the moral choice”.

2. No one talented will then go work for a government-run LLM-building org, both from a “not working in a bureaucracy” angle and a “top talent won’t accept meager government wages” angle (plus plenty of a “won’t work for Trump” angle).

3. With how fast things move, Anthropic would become irrelevant in like 3 months if they’re not pumping out next-gen model updates.

Then one of the big American LLM companies would be gone from the scene, allowing for more opportunity for competition (including Chinese labs)

It would be the most shortsighted nationalization ever.

Davidzheng · 14 days ago
Then maybe Dario will realize that the moral superiority on which he bases his advocacy against Chinese open models is naive at best.
dylan604 · 14 days ago
Would anyone pull a Pied Piper and choose to destroy the thing rather than let it be subverted? I know that's not exactly what PP did, but would a decision like that only ever happen in fiction?
drcongo · 13 days ago
But that's socialism.
estearum · 14 days ago
Imagine the government trying to force AI researchers to advance, lmao
dakolli · 14 days ago
Anthropic is by far the most evil company in tech, I don't care. It's worse than Palantir in my book. You won't catch my kids touching this slave-making, labor-killing, brain-frying tech.
miroljub · 13 days ago
While many praise them for sticking to their values, it's also worth mentioning that their values are not everyone's values.

Of all major LLMs, Claude is perhaps the most closed and, subjectively, the most biased. Instead of striving for neutrality, Anthropic leadership's main concern is to push their values down people's throats and to ensure consistent bias in all their models.

I have a feeling they see themselves more as evangelists than scientists.

That makes their models unusable for me as general AI tools and only useful for coding.

If their biases match yours, good for you, but I'm glad we have many open Chinese models taking ground, which in the long run makes humanity more resistant to propaganda.

soco · 13 days ago
I might be misreading your comment, which I understood as "the Chinese make humanity more resistant to propaganda". It just doesn't add up; can you please explain?
AlecSchueler · 13 days ago
> Of all major LLMs, Claude is perhaps the most closed and, subjectively, the most biased. Instead of striving for neutrality, Anthropic leadership's main concern is to push their values down people's throats

Is this satire? Let us know when Claude starts calling itself MechaHitler or trying to shoehorn nonsense about white genocide into every conversation.

u1hcw9nx · 13 days ago
Google, OpenAI Employees Voice Support for Anthropic in Open Letter. We Will Not Be Divided https://notdivided.org/

-----

The Department of War is threatening to

- Invoke the Defense Production Act to force Anthropic to serve their model to the military and "tailor its model to the military's needs"

- Label the company a "supply chain risk"

All in retaliation for Anthropic sticking to their red lines to not allow their models to be used for domestic mass surveillance and autonomously killing people without human oversight.

The Pentagon is negotiating with Google and OpenAI to try to get them to agree to what Anthropic has refused.

They're trying to divide each company with fear that the other will give in. That strategy only works if none of us know where the others stand. This letter serves to create shared understanding and solidarity in the face of this pressure from the Department of War.

We are the employees of Google and OpenAI, two of the top AI companies in the world.

We hope our leaders will put aside their differences and stand together to continue to refuse the Department of War's current demands for permission to use our models for domestic mass surveillance and autonomously killing people without human oversight.

Signed,

discopicante · 13 days ago
For the signatories attaching their names and titles: putting your reputation on the line like that should be respected. It means something. As for the others signing as 'anonymous', this is meaningless. Either sign or don't. I would suggest removing that as an option.
JackYoustra · 13 days ago
Then you would get zero H1B and, frankly, green card signatures. There is real risk and real dependents at stake, I understand people who can't in good conscience put that at risk.
ImPostingOnHN · 13 days ago
they could sign it with their blind username, which is verified by company email
stingraycharles · 13 days ago
Call me cynical, but given that Google is a publicly traded company and OpenAI having a trillion in spending commitments, I’m skeptical whether the leadership of those companies feel the same as their employees.
rustyhancock · 13 days ago
Yes. I did not foresee this at all. But if OpenAI faces an existential threat, with no path in 2026-2030 to maintain its user base, why can't they go to the contract generator of last resort, aka the Pentagon? It's what Elon has done with SpaceX and Grok.

eric-burel · 13 days ago
They love their dictator until it backfires; that's quite an old story.
pjc50 · 13 days ago
Google employees were generally pretty anti-Trump, it's the senior leadership and the recommendation algorithms that are pro-Trump.
timtas · 5 days ago
Imagine thinking this would go any differently under a Harris administration or whoever. The war party gets its way.
tcgv · 13 days ago
Employee solidarity matters, but absent a legal constraint, I don’t think it’s a durable control.

If this remains primarily a political/corporate bargaining question, the equilibrium is unstable: some actors will resist, some will comply, and capital will flow toward whoever captures the demand.

In that world, the likely endgame is not "the industry says no," but organizational restructuring (or new entrants) built to serve the market anyway.

If we as a society want a real boundary here, it probably has to be set at the policy/law level, not left to voluntary corporate red lines.

timtas · 5 days ago
Search: [ Altman 0/0 ]
throwfaraway4 · 13 days ago
Unless it’s signed by the CEO it doesn’t matter
lkbm · 13 days ago
It made a difference when the OpenAI board fired Altman. That was an incredibly high share of the employee count, but losing even 10% of your employees would seriously hamper a company if it's the right employees.

(This is also why the DoD move is so dumb. I think we'd see massive talent flight from Anthropic if they end up complying, even if that compliance is against Dario's will.)

raincole · 13 days ago
CEOs: looks like a perfect chance to optimize some employees off!
toephu2 · 13 days ago
"Altman Says OpenAI Is Working on Pentagon Deal Amid Anthropic Standoff"

https://www.wsj.com/tech/ai/openais-sam-altman-calls-for-de-...

i_love_retros · 13 days ago
Oh what heroes! They wrote a letter! They will keep working at these scummy companies, though, taking their fat paychecks, won't they?
surajrmal · 13 days ago
It's easier to effect change from within. Do you judge people for choosing to continue living in America?
qaid · 14 days ago
I was reading halfway through and one line struck a nerve with me:

> But today, frontier AI systems are simply not reliable enough to power fully autonomous weapons.

So not today, but the door is open for this after AI systems have gathered enough "training data"?

Then I re-read the previous paragraph and realized it's specifically only criticizing

> AI-driven domestic mass surveillance

And neither denounces partially autonomous mass surveillance nor closes the door on AI-driven foreign mass surveillance

A real shame. I thought "Anthropic" was about being concerned about humans, and not "My people" vs. "Your people." But I suppose I should have expected all of this from a public statement about discussions with the Department of War

xeonmc · 14 days ago

    > I thought "Anthropic" was about being concerned about humans
See also: OpenAI being open, the Democratic People's Republic of Korea being democratic and people-first[0].

[0] https://tvtropes.org/pmwiki/pmwiki.php/Main/PeoplesRepublicO...

bighead · 14 days ago
Elon, is that you?
nubg · 14 days ago
I think it's phrased just fine. It's not up to Dario to try to make absolute statements about the future.
m000 · 14 days ago
How about the present and his personal beliefs?

"I believe deeply in the existential importance of using AI to defend the United States and other democracies, and to defeat our autocratic adversaries."

This reads like his objection is not on "autocratic", but on "adversaries". Autocratic friends & family are cool with him. A clear wink to a certain administration with autocratic tendencies.

taurath · 14 days ago
> It's not up to Dario to try to make absolute statements about the future.

That's insane to say, given that he's literally acting in the public sphere as the mouth of Sauron, proclaiming that AI will grow so effective as to destroy almost everyone's jobs and that AGI will take over our society and kill us all.

andrewljohnson · 14 days ago
This doesn’t read to me like it was personally written by one person. It’s not Dario we should read this as being written by, it’s Anthropic as an entity.
lm28469 · 13 days ago
He does it all the time when it helps selling his products though, strange
titzer · 13 days ago
It's not called The Department of War.

It's just incredible to me that people think this is some kind of bold statement defying the administration when it is absolutely filled with small and medium capitulations, laying out in numerous examples how they just jumped right in bed with the military.

And no one seems disturbed by the blatant Orwellian doublespeak throughout. "We thoroughly support the mission of the Department of War"--because War is Peace.

nhinck2 · 14 days ago
He does it all the time.
camillomiller · 14 days ago
And yet he’s quite happy to make just such statements when it helps drum up his own product for investors.
trvz · 14 days ago
He’s one of the most influential people when it comes to what future we’ll have. Yes, it’s up to him.
samtheDamned · 14 days ago
I'm glad I'm not alone in finding the specific emphasis on drawing the line at domestic surveillance a bit odd. Later they also state they are against "provid[ing] a product that puts *America’s* warfighters and civilians at risk" (emphasis mine). Either way I'm glad they have lines at all, but it doesn't come across as particularly reassuring for people in places the US targets (wedding hosts and guests for example).
MetaWhirledPeas · 13 days ago
> I'm not alone in finding the specific emphasis on drawing the line at domestic surveillance a bit odd

We've always been OK with this in the pre-AI era. (See the plot line of dozens of movies where the "good" government spies on the "bad" one.) Heck we've even been OK with domestic surveillance. (See "The Wire".) Has something changed, or are we just now realizing how it's problematic?

jazzyjackson · 14 days ago
See also: the entire history of Silicon Valley

When Google Met WikiLeaks is a fun read; billionaire CEOs love to take America's side.

ghshephard · 14 days ago
I think it goes without saying that once the systems are reliable, fully autonomous weapons will be unleashed on the battlefield. But they have to have safeguards to ensure that they don't turn on friendly forces and only kill the enemy. What Anthropic is saying is that right now they can't provide those assurances. When they can, I suspect those restrictions will be relaxed.
asdff · 13 days ago
The US military cannot even offer those assurances themselves today. I tried to look up the last incident of friendly fire. Turns out it was a couple of hours ago today, when the US military shot down a DHS drone in Texas.

Dead Comment

Onewildgamer · 14 days ago
Fully autonomous weapons are a danger even if we can make them work reliably, with or without AI.

It essentially becomes a computer against a human. And such software, if and when developed, who's going to stop it from spreading to the masses? Imagine software viruses/malware that can take a life.

I'm shocked so few are even bothered by this. It's really concerning that technology developed for human welfare could be turned totally against humans.

TaupeRanger · 14 days ago
What else would you expect? The military is obviously going to develop the most powerful systems they can. Do you want a tech company to say “the military can never use our stuff for autonomous systems forever, the end”? What if Anthropic ends up developing the safest, most cost effective systems for that purpose?
crabmusket · 14 days ago
> Do you want a tech company to say “the military can never use our stuff for autonomous systems forever, the end”?

Yes. Absolutely.

goatlover · 14 days ago
I'd prefer companies not help the military develop the most powerful weapons possible given we're in the age of WMDs, have already had two devastating world wars and a nuclear arms race that puts humanity under permanent risk.
harimau777 · 13 days ago
Yes, that's exactly what I want them to say.
archagon · 14 days ago
Yes, I absolutely don’t want tech companies to use the money I pay them to harm people. How is that remotely controversial?
asadotzler · 13 days ago
>Do you want a tech company to say “the military can never use our stuff for autonomous systems forever, the end”?

Yes. Yes, that's precisely what we want.

skeledrew · 14 days ago
Well, if they hadn't stated that they were that far in line with the administration's ideals, they would likely already be fully blacklisted as enemies of the state. Whether they agree with what they're saying or not, they're walking on eggshells.
sithamet · 13 days ago
Also, as someone from a country that has been attacked and dragged into war, I would prefer machines fighting (and being destroyed autonomously) rather than my people dying, or people from any nation that came to help.

That's as Anthropic as it gets, if your concern extends a little bit further than your HOA.

mrtksn · 13 days ago
What do you think will happen once the machines have fought it out? Do you think the losing side will be like "oh no, our machines lost, we'd better give our things to the winning machines"?

After your machines are destroyed you will be fighting machines, or the machines will extract value from you and constantly optimize you. They will either exterminate you or keep you busy enough that you have no time for resistance. If you have something of value they will take it away. The best-case scenario is that you get to join the owners of the machines and are kept busy enough that you don't have time to raise concerns about your second-class citizenship.

gambiting · 13 days ago
>> I would prefer machines fighting (and being destroyed autonomously) rather than my people dying

What makes you think in any war the machines would stop at just fighting other machines?

Quarrelsome · 13 days ago
> would prefer machines fighting (and being destroyed autonomously) rather than my people dying

But the reality is more like the surprise of a bunch of submersible kill bots terrorising a coastal city and murdering people. Even in bot-first combat, at some point one side's bots win, either totally, allowing it to kill people indiscriminately, or partially, which forces the side on the back foot to pivot to guerrilla warfare and terror attacks, using robots.

kingkawn · 13 days ago
What about machines slaughtering the population without pause?
preisschild · 13 days ago
The more likely scenario will be "your people" dying in a war against machines that don't tend to disregard illegal orders.
timtas · 5 days ago
Wait, you think these autonomous killer robots will only fight each other? Are you kidding?
orochimaaru · 14 days ago
They’re being used today by the military. So, they are never going to be against mass surveillance. They can scope that to be domestic mass surveillance though.
rafark · 14 days ago
I said exactly this a few days ago elsewhere. It’s disappointing that they (and often other American companies) seem to restrict their “respect” and morals to Americans only. Or maybe it’s just semantics or context because the topic at hand is about americans? I don’t know but it gives “my people are more important than your people”, exactly as you said in your last paragraph
01100011 · 14 days ago
We already have traditional CV algorithms and control systems that can reliably power autonomous weapons systems and they are more deterministic and reliable than "AI" or LLMs.
kgwxd · 14 days ago
But then a person can be blamed for the outcome. We can't have that!
nielsole · 14 days ago
You gotta keep in mind that the primary goal of this statement is to avert the invocation of the Defense Production Act.

He is trying to win sympathies even (or especially?) among nationalist hawks.

asaddhamani · 14 days ago
They also posted on Instagram saying autonomous killing would hurt Americans. So non-American people don't matter?
yujzgzc · 14 days ago
> the door is open for this after AI systems have gathered enough "training data"?

Sounds more like the door is open for this once reliability targets are met.

I don't think that's unreasonable. Hardware and regular software also have their own reliability limitations, not to mention the meatsacks behind the joystick.

altpaddle · 14 days ago
Unfortunately I think the writing is clearly on the wall. Fully autonomous weapons are coming soon
not_the_fda · 14 days ago
And that's the end of democracy. One of the safeguards of democracy is a military that is trained not to turn against the citizens. Once a government has fully autonomous weapons it's game over. They can point those weapons at the populace at the flip of a switch.
levocardia · 14 days ago
Right - for the same reasons a Waymo is safer than a human-driven car, an autonomous fighter drone will ultimately be deadlier than a human-flown fighter jet. I would like to forestall that day as long as possible but saying "no autonomous weapons ever" isn't very realistic right now.
tempestn · 14 days ago
If they had access to them in Ukraine, both sides would already be using them I expect. Right now jamming of drones is a huge obstacle. One way it's dealt with is to run literal wired drones with massive spools of cable strung out behind them. A fully autonomous drone would be a significant advantage in this environment.

I'm not making a values judgment here, just saying that they will absolutely be used in war as soon as it's feasible to do so. The only exception I could see is if the world managed to come together and sign a treaty explicitly banning the use of autonomous weapons, but it's hard for me to see that happening in the near future.

Edit: come to think of it, you could argue a landmine is a fully autonomous weapon already.

scottyah · 14 days ago
It's only Anthropic with their current models saying no. Fully autonomous weapons have been created, deployed, and have been operational for a long time already. The only holdout I've ever heard of is for the weapons that target humans.

Honestly, even landmines could easily be considered fully autonomous weapons and they don't care if you're human or not.

Aeolun · 13 days ago
Is it seriously called the department of war now? Did they change that from DoD?
lkbm · 13 days ago
The Executive branch has de facto renamed it. Legally, the name is still Department of Defense, as that's set by Congress.

Think of it as a marketing term, I guess.

Sebguer · 13 days ago
illegally, but yes

Deleted Comment

urikaduri · 14 days ago
The Gandhi of the corporate world is yet to be found.
scottyah · 14 days ago
Considering he slept naked with his grandniece (he was in his 70s, she was 17), I'd say there are a lot of them in the corporate world. Though probably more in politics.
jamesmcq · 14 days ago
So AI systems are not reliable enough to power fully autonomous weapons but they are reliable enough to end all white-collar work in the next 12 months?

Odd.

serf · 14 days ago
do you really need to be told there is a difference in 'magnitude of importance' between the decision to send out an office memo and the decision to strike a building with ordnance?

a lot of white-collar jobs see no decision more important than a few hours of revenue. That's the difference: you can afford to fuck up in that environment.

Dead Comment

gedy · 14 days ago
Shh! there's a lot of money riding on this bet, ahem.
nhinck2 · 14 days ago
> And neither denounces partially autonomous mass surveillance nor closes the door on AI-driven foreign mass surveillance

You have to be deliberately naive in a world where Five Eyes exists to somehow believe that "foreign" mass surveillance won't be used domestically.

aidis9136264 · 14 days ago
Enemies will have AI powered weapons. We need to be at the cutting edge of capability.
Throwagainaway · 13 days ago
I don't know where you might get your info from, but Anthropic has only refused to allow autonomous AI to kill humans without anyone pressing a button (i.e., without someone bearing liability), and mass surveillance.

I don't think that your point makes sense, especially when you can have enemies within your own administration/country who can use the same weapons to hunt you.

I don't think that the people operating the drones are a bottleneck for a war between your country and your enemies; rather, they're a bottleneck for a war between your country and its people. The bottleneck is one of morality: you would find fewer people willing to commit the same atrocities against their own community, but Terminator-style AI is an orphan with no community, i.e. it has no problem following any orders from the government. And THIS is the core of the argument, because Anthropic has safeguards to reject such orders and the DoD is threatening to essentially kill the company by invoking many laws to force it to give in.

ImPostingOnHN · 13 days ago
US-controlled, AI-powered, fully-autonomous killbots are more likely to be used sooner against US civilians before any sort of invading enemy.

Are you prepared to be the "enemy" of these soulless killbots? Do you personally have AI-powered weapons? You need to be at the cutting edge of capability, right?

sithamet · 13 days ago
What a shame, indeed. Chinese and Russians would never do something like that and hurt either their or your people, too
MattDamonSpace · 14 days ago
The sentence prior explicitly says this. There’s no dishonesty here.

“Even fully autonomous weapons (…) may prove critical for our national defense”

FWIW there’s simply no way around this in the end. If your enemy even attempts to create such weapons, the only possible defensive counter is weapons of a similar nature.

blitzar · 13 days ago
To stop a bullet flying at you, you need a shield, not another bullet.
mgraczyk · 14 days ago
Anthropic doesn't forbid DoW from using the models for foreign surveillance. It's not about harming others, it's about doing what is best for humanity in the long run, all things considered. I personally do not believe that foreign surveillance is automatically harmful and I'm fine with our military doing it
nextaccountic · 14 days ago
If we are talking about what's best for humanity in the long run.. thinking about human values in general, what makes American citizens uniquely deserving of privacy rights, in ways that citizens of other countries are not?

Snowden revealed that every single call in the Bahamas was being monitored by the NSA [1]. That was in 2013. How would this be any worse if it were US citizens instead?

(Note, I myself am not an US citizen)

Anyway, regardless of that, the established practice is for the five eyes countries to spy on each other and share their results. This means that the UK can spy on US citizens, the US can spy on UK citizens, and through intelligence sharing they effectively spy on their own citizens. That's what supporting "foreign surveillance" will buy you. That was also revealed in 2013 by Snowden [2]

[1] https://theintercept.com/2014/05/19/data-pirates-caribbean-n...

[2] https://www.theguardian.com/world/2013/dec/02/nsa-files-spyi...

827a · 14 days ago
If the United States is ever, in the future, at war with an adversary using truly autonomous and functional killing machines, you may find yourself praying that we have our own rather than praying that human nature changes. Of course, we must strive for this never to happen; but carrying a huge stick seems to be the most effective way to reduce human death and suffering from armed conflict.
RGamma · 13 days ago
Given how unstable and aggressive the US government is at the moment others having these weapons seems to be a good idea for balance. Not sure you are aware of the damage Trump is inflicting on international relations.

But personally I wouldn't like to die because some crackpot with the right connections can will the rest of the world to that fate, no matter their affiliation. This escalation of destructive power, and the carelessness with which it is justified, is pretty disheartening to see. Good times create bad people?

gizzlon · 13 days ago
> but carrying a huge stick seems to be the most effective way to reduce human death and suffering from armed conflict.

Citation needed. I believe there's at least some research showing the opposite: military buildup leads to a higher risk of military conflict

remarkEon · 14 days ago
As a practical matter, it makes zero sense for a tech company with perhaps laudable goals and concerns about humanity to have any control whatsoever over the use of a product it sells for war. You don't like what it could potentially be used for, or are having second thoughts about being involved in war making at all, don't sell it, which appears to be Amodei's position now. That's perhaps laudable, from a certain point of view.

On the other hand, your position is at best misguided and at worst hopelessly naive. The probability that adversaries of the United States, potential or not, are having these discussions about AI release authority and HITL kill chains is basically zero, other than doing so at a technical level so they get them right. We're over the event horizon already, and into some very harsh and brutal game theory.

zaptheimpaler · 13 days ago
They didn’t sell it with no strings attached; they sold it with explicit restrictions in their contract with the DoW, and the DoW agreed to that contract. Their mistake was assuming they operate in a country where the rule of law is respected, which is clearly not the case anymore given the thousands of violations in the last year.
helaoban · 14 days ago
All of these problems are downstream of the Congress having thoroughly abdicated its powers to the executive.

The military should be reined in at the legislative level, by constraining what it can and cannot do under law. Popular action is the only way to make that happen. Energy directed anywhere else is a waste.

Private corporations should never be allowed to dictate how the military acts. Such a thought would be unbearable if it weren't laughably impossible. The technology can just be requisitioned, there is nothing a corporation or a private individual can do about that. Or the models could be developed internally, after having requisitioned the data centers.

To watch CEOs of private corporations being mythologized for something that a) they should never be able to do and b) are incapable of doing is a testament to how distorted our picture of reality has become.

techblueberry · 14 days ago
The private corporation is not dictating to the military, it’s setting the terms of the contract. The military is free to go sign a contract with a different company with different terms, but they didn’t, and now they want to change the terms after the contract was already signed. No mythologization needed, just contract law.

Deleted Comment

nemo44x · 13 days ago
The country is sovereign. It can just make a law democratically that changes things. The sovereign must act on whatever is in its best interest. The method of action is democratic in this case.
ricardobeat · 14 days ago
> The technology can just be requisitioned

During a war with national mobilization, that would make sense. Or in a country like China. This kind of coercion is not an expected part of democratic rule.

wrqvrwvq · 14 days ago
It has always been a part of democratic rule, in peacetime and war. All telcos share virtually all of their technology with the government. Governments in Europe and elsewhere routinely requisition services from many of their large corporations. I think it's absurd to think LLMs can meaningfully participate in real-world command-and-control systems, and the government already has access to ML-enhanced targeting capabilities. I really have no idea what DoD normies think of AI, other than that it's infinitely smarter than them, but that's not saying much.
beepbooptheory · 13 days ago
Makes me think of Operation Paperclip [1]. It happened after the war though, and it's not China, but I think it helps your point!

1. https://en.wikipedia.org/wiki/Operation_Paperclip

Deleted Comment

helaoban · 14 days ago
The question of whether or not the government should be able to use AI for targeting without the involvement of humans is a wartime question, since that is the only time the military should be killing people.

Under such a scenario, requisition applies, and so all of this talk is moot.

The fact that the military is killing people without a declaration of war is the problem, and that's where energy and effort should be directed.

Edit:

There's a yet larger question of whether any legal constraints on the military's use of technology make sense at all, since any safeguards will be quickly abandoned if a real enemy presents itself. As a course of natural law, no society will willingly handicap its means of defense against an external threat.

It follows then that the only time these ethical concerns apply is when we are the aggressor, which we almost always are. It's the aggression that we should be limiting, not the technology.

tw1984 · 14 days ago
> an expected part of democratic rule.

give yourself a break. you think your fancy democratic rule still holds under Trump?

tootie · 14 days ago
It's also downstream of voters who voted in a president who promised to be dictatorial after failing at an attempted insurrection. We need to deprogram like 70M very confused people.
raincole · 13 days ago
> We need to deprogram like 70M very confused people

With this mindset the said group will quickly grow to half of the US population.

Dead Comment

blitzar · 13 days ago
> Private corporations should never be allowed to dictate how the military acts.

The military should never be allowed to dictate how Private corporations act

jobs_throwaway · 14 days ago
> The technology can just be requisitioned, there is nothing a corporation or a private individual can do about that.

I strongly doubt this is true. I think if you gave the US government total control over Anthropic's assets right now, they would utterly fail to reach AGI or develop improved models. I doubt they would be capable even of operating the current gen models at the scale Anthropic does.

> Or the models could be developed internally, after having requisitioned the data centers.

I would bet my life savings the US government never produces a frontier model. Remember when they couldn't even build a proper website for Obamacare?

qup · 13 days ago
> Remember when they couldn't even build a proper website for Obamacare?

With a massive budget, too. Hundreds of millions iirc.

It felt like a website that the small web-dev shop I worked for could build without much problem in a couple months.

We didn't have 200 layers of bureaucracy, though.

That said I don't doubt the military could take their current tech and keep it running. It's far different from the typical grift of government contractors.

vonneumannstan · 13 days ago
This is just a weird Trump talking point. This situation is unprecedented on many levels. The pentagon already had a signed contract with these stipulations and wanted to unilaterally renegotiate with Anthropic under threat of deeming them a foreign adversary and destroying their business if they didn't accept the DoD demands. It's totally absurd to turn this around on Anthropic and paint them as trying to determine US Military policy.
dartharva · 14 days ago
> The military should be reined in at the legislative level, by constraining what it can and cannot do under law.

Is there an example of such a system existing successfully in any other country of the world that has a standing army?

helaoban · 14 days ago
I think any such examination of a military that doesn't actually fight wars is meaningless. The question can only be really asked of a handful of countries.
snowwrestler · 13 days ago
Congress needs public pressure to act, and the public needs a spur to apply pressure. That’s really what Amodei is doing with this statement.
einpoklum · 13 days ago
> Congress having thoroughly abdicated its powers to the executive.

Good thing the US is led by such figures as Donald Trump or Joseph Biden, stalwart trustworthy men with their hands firmly on the wheel.</sarcasm>

JackYoustra · 13 days ago
I'm sorry, I read this a lot, and this is kind of an insane thing to say. Classified OLC memos giving legal cover to any military action have been a fixture for over twenty years now! Congress never abdicated power; it just, by the nature of the constitution, practically has SO much less power than the president! The president is a single person whom people elect, they expect that person to be a leader, and Congress will always, always play a following role so long as the president has unilateral power over the military, is directly elected, and in general has expansive interpreting authority over laws.

You know who doesn't have as much power? The Swiss head of state, so weak you can't even reliably name them! THAT'S what it looks like to defeat personalization, not some hand-wringing hoping a system does something it wasn't designed to do.

xnx · 13 days ago
> Congress having thoroughly abdicated its powers to the executive.

This is a common but far too passive description.

Republicans in Congress support everything Trump and friends are doing.

jjcm · 14 days ago
This is the strongest statement in the post:

> They have threatened to remove us from their systems if we maintain these safeguards; they have also threatened to designate us a “supply chain risk”—a label reserved for US adversaries, never before applied to an American company—and to invoke the Defense Production Act to force the safeguards’ removal. These latter two threats are inherently contradictory: one labels us a security risk; the other labels Claude as essential to national security.

This contradictory messaging puts to rest any doubt that this is a strong-arm move by the government to allow any use. I really like Anthropic's approach here, which is to in turn state that they're happy to help the government move off of Anthropic. It's a messaging ploy for sure, but it puts the ball in the current administration's court.

panarky · 14 days ago
Does the Defense Production Act force employees to continue working at Anthropic?
nerdsniper · 14 days ago
No. It really only binds the corporation, but it does hold the executives/directors personally responsible for compliance so they’d be under a lot of pressure to figure out how to fix enough leaks in the ship to keep it afloat. Any individual director/executive could quit with little issue, but if they all did in a way that compromised the corporations ability to function, the courts could potentially utilize injunctions/fines/jail time to compel compliance from corporate leaders.

Also there’s probably a way to abuse the Taft-Hartley Act beyond current recognition to force the employees to stay by designating any en-masse quitting a “strike / walk off / collective action”. The consequences to the individuals for this are unclear; the act really focuses on punishing the union rather than the employees. It would take some very creative maneuvering to do anything beyond denying unemployment benefits and telling the other big AI companies (Google / ChatGPT / xAI) to blacklist them. And probably using any semi-relevant three-letter agency to make them regret their choice and deliver a chilling effect to anyone else thinking of leaving (FBI, DHS, IRS, SEC all come to mind).

If the administration could figure out how to nationalize the company (like replace the leadership with ideologically-aligned directors who sell it to the government) then any now-federal-employees declared to be quitting as part of a collective action could be fined $1,000 per day or incarcerated for up to one year.

It’s worth noting that this thesis would get an F grade at any accredited law school. Forcing people to work is a violation of the 13th amendment. But interpretations of the constitution and federal law are very dynamic these days so who knows.

SilverElfin · 14 days ago
[flagged]

Deleted Comment

JumpCrisscross · 14 days ago
> this is a strong arm by the governemnt to allow any use

It’s a flippant move by Hegseth. I doubt anyone at the Pentagon is pushing for this. I doubt Trump is more than cursorily aware. Maybe Miller got in the idiot’s ear, who knows.

altacc · 13 days ago
Trump/Miller/whomever don't need to be actively involved in every decision. They have defined an approach of strong-arm problem solving and weaponisation of the government that anyone who works for them is implicitly allowed to use. The supposed controls that were meant to prevent this have crumbled or aligned.
Quarrelsome · 13 days ago
flippant? It's aggressive, belligerent and entitled. I'm not seeing "flippant", unless this is some sort of weaselly "oh, we only threatened them a bit" bullshit. This is about entitled pricks in government who consider their temporary democratic mandate a carte blanche for absolutism.
cmrdporcupine · 14 days ago
It definitely has the aroma of either Bannon or Miller or both.
xpe · 14 days ago
> It’s a flippant move by Hegseth.

Care to convert this into a prediction?: are you predicting Hegseth will back down?

> I doubt anyone at the Pentagon is pushing for this.

... what does this mean to you? What comes next? As SecDef/SecWar, Hegseth is the head of the Pentagon. He's pushing for this. Something like 2+ million people are under his authority. Do you think they will push back? Stonewall?

One can view Hegseth as unqualified, even a walking publicity stunt while also taking his power seriously.

mandeepj · 14 days ago
First of all, there's no such thing as "Department of War". A department name change is legal/binding only after it's approved by the Senate. Senator Kelly is still calling it DoD (Department of Defense).

> Mass domestic surveillance.

Since when has DoD started getting involved with the internal affairs of the country?

https://en.wikipedia.org/wiki/United_States_Department_of_De...

_kst_ · 14 days ago
The Senate??

Any law changing the name of the Defense Department would have to be passed by both Houses of Congress and signed by the President (or by 2/3 of both Houses overriding a Presidential veto). The Senate has no such authority on its own.

Lerc · 14 days ago
It's whatever the people who have the power want to call it. What is written on a piece of paper is irrelevant if it is not acted upon.

If the rename gets struck down then they don't have the power. If it doesn't they have the power.

There are many dictatorships that built their power in the face of people claiming that they can't do what they planned because it was illegal.

Until they did it anyway.

Quarrelsome · 13 days ago
I'd imagine the Pentagon is more interested in the autonomous kill bot part than the surveillance part.
khazhoux · 13 days ago
Well, Trump renamed it, and since Congress is now a subsidiary of the Executive Branch, it's the Department of War.
culi · 14 days ago
They've already spent millions on the name change. It's also the original name of the department. IMO it's a more honest name
tokyobreakfast · 14 days ago
www.defense.gov redirects to www.war.gov but I like how you refer to Wikipedia as the authoritative source to prove this functionally irrelevant and aggressive Reddit-style seething.

The talk page on the linked Wikipedia article arguing about logos is just as deranged. It's very important to realize there is literally nothing you—or anyone else—can do about this.

Dead Comment

egorfine · 13 days ago
> two threats are inherently contradictory: one labels us a security risk; the other labels Claude as essential to national security.

They are only contradictory if you think about it.

ithkuil · 12 days ago
Nothing is contradictory if you don't think
calvinmorrison · 14 days ago
More like the government is treating this like the near term weapon it actually is and, unlike the Manhattan project, the government seems to have little to no control.
fwipsy · 14 days ago
Anthropic has been pushing for commonsense AI regulation. Our current administration has refused to regulate AI and attempted to prevent state regulation.

"The government doesn't have control of this technology" is an odd way to think about "the government can't force a company to apply this technology dangerously."

toomuchtodo · 14 days ago
Note that they always attempt to exert control they don’t have. They’re always bluffing, and they keep losing. Respond accordingly.
gclawes · 14 days ago
> This contradictory messaging puts to rest any doubt that this is a strong-arm move by the government to allow any use.

Why the hell should companies get to dictate on their own to the government how their product is used?

theptip · 14 days ago
Every company is free to determine its terms of use. If USG doesn’t like them they should sign a contract with someone else.
randerson · 14 days ago
Because technology companies know more about their product's capabilities and limitations than a former Fox News host? And because they know there's a risk of mass civilian casualties if you put an LLM in control of the world's most expensive military equipment?
Hnrobert42 · 14 days ago
Because the government is here to serve us. Not the other way around.
singleshot_ · 14 days ago
Same reason they can't quarter troops in your house: the law.
throw0101c · 14 days ago
> Why the hell should companies get to dictate on their own to the government how their product is used?

Well:

"""

Imagine that you created an LLC, and that you are the sole owner and employee.

One day your LLC receives a letter from the government that says, "here is a contract to go mine heavy rare earth elements in Alaska." You don't want to do that, so you reply, "no thanks!"

There is no retaliation. Everything is fine. You declined the terms of a contract. You live in a civilized capitalist republic. We figured this stuff out centuries ago, and today we have bigger fish to fry.

"""

* https://x.com/deanwball/status/2027143691241197638

Deleted Comment

Dead Comment

Dead Comment

Dead Comment

quietbritishjim · 14 days ago
Those aren't contradictory at all. If I need a particular type of bolt for my fighter jet but I can only get it from a dodgy Chinese company, then that bolt is a supply chain risk (because they could introduce deliberate defects or simply stop producing it) and also clearly important to national security. In fact, it's a supply chain risk because it is important to national security.
NewsaHackO · 14 days ago
No. In your example, if the dodgy Chinese company is a supply chain risk due to sabotage, why would they invoke an act to force production of the bolts from that same company for national defense preparedness, which would clearly be a national security risk?
estearum · 14 days ago
It's easy to resolve an alleged contradiction by just ignoring one half of it lol

Try introducing DPA invocation into your analogy and let's see where it goes!

gipp · 14 days ago
"Supply chain risk" is a specific designation that forbids companies that work with the DOD from working with that company. It would not be applied in your scenario.
ray_v · 14 days ago
The analogy doesn't work here ... In your scenario they are ok with using the bolt as long as the Chinese company promises to remove deliberate defects - which is of course absurd ... AND contradictory.
tabbott · 14 days ago
An organization's character really shows through when its values conflict with its self-interest.

It's inspiring to see that Anthropic is capable of taking a principled stand, despite having raised a fortune in venture capital.

I don't think a lot of companies would have made this choice. I wish them the very best of luck in weathering the consequences of their courage.

idiotsecant · 14 days ago
The problem is that this is a decision that costs money. Relying on a system that makes money by doing bad things to instead do good things out of a sense of morality, when a possible outcome is existential risk to the species, is a 100% chance of failure on a long enough timeline. We need massive disincentives for bad behavior, but I think that cat is already out of its bag.
freakynit · 14 days ago
I appreciate that the HN community values thoughtful, civil discussion, and that's important. But when fundamental civil liberties are at stake, especially in the face of powerful institutions and influence from people of money seeking to expand control under the banner of "security", it's worth remembering that freedom has never simply been granted. It has always required vigilance, and at times, resistance. The rights we rely on were not handed down by default; they were secured through struggle, and they can be eroded the same way.

Power corrupts, and absolute power corrupts absolutely.

_def · 14 days ago
On a long enough timeline literally everything has a 100% chance of failure. I'm not trying to be obnoxious, I just wanna say: we've only got this one life and we have to choose what to make of it. Too many people pretend things are already laid out based on game-theoretic "success". But that's not what life is about at all.

Deleted Comment

amai · 13 days ago
skylerwiernik · 13 days ago
The quotes from those articles (short passages?) are:

> He recalls meeting President Trump at an AI and energy summit in Pennsylvania, "where he and I had a good conversation about US leadership in AI,"

> "Unfortunately, I think 'No bad person should ever benefit from our success' is a pretty difficult principle to run a business on... This is a real downside and I'm not thrilled about it."

> "Throughout my time here, I've repeatedly seen how hard it is to truly let our values govern our actions. I've seen this within myself, within the organization, where we constantly face pressures to set aside what matters most, and throughout broader society too." (from a researcher at Anthropic)

I don't think that any of this is particularly damning. Even if you don't like the president, I don't think it's bad to say that you had a good conversation with them. I believe the CEO of NVIDIA has said similar. The Saudis invest in many public US companies; does that make those companies less trustworthy? What about taking private capital from institutions such as State Street and BlackRock? The last quote seems like more of a reflection than an allegation. It read to me as a desire to do better.

I'm all for not trusting companies, but Anthropic seems to be one of the few that's trying to do good. I think we've seen a lot worse from many of their competitors.

amai · 13 days ago
The problem is this:

> The Saudis invest in many public US companies, does that make those companies less trust worthy?

It does. If Anthropic takes money from the Middle East, that might be the reason why they cannot work for the Pentagon: the Pentagon works together with the Israeli forces, and Middle East investors might not like this. So Anthropic has to decide to either take a lot of money from the Middle East, or work for the Pentagon.

Of course the problem goes much deeper than just Anthropic. I don't understand why taking money from dictatorships doesn't count as money laundering in our society. Basically this is dirty money, generated by slavery and the forceful suppression of people. We should forbid all companies from taking this kind of dirty money. But because we don't do that at the moment, companies that don't take this dirty money are at a disadvantage against companies that do. And because companies are all about money, in the end they are basically forced to act against their good intentions, just to survive.

We as a society have to stop this. We must make sure that companies that do not take dirty money survive the competition. My idea would be to extend the rules for money laundering to all countries that are dictatorships. But there might be other ideas for leveling the playing field between companies, so that we as a society can help them make the right decision.

b40d-48b2-979e · 13 days ago

    The Saudis invest in many public US companies, does that make those companies
    less trust worthy?
Uhh.. yeah?

    we've seen a lot worse from many of their competitors
I think we should demand people do better than just being slightly above the worst.

Deleted Comment

techblueberry · 13 days ago
Maybe not, and maybe you shouldn't. But I feel like the real story here isn't what Anthropic is saying, but that while Anthropic seems to be bending over backwards to give the Defense Department exactly what it needs, defining two of the most reasonable red lines, ones that most Americans would agree with and that are already likely illegal to cross, Pete Hegseth in return is threatening the continued existence of their company.

So let's see what happens tonight at 5:01 PM, but Anthropic isn't really the story here.

xpe · 13 days ago
I read the articles. As far as factual reporting, I will tentatively take them at face value. But in terms of their editorializing, it is frankly weak by my standards. It would not survive scrutiny in a freshman philosophy class.

Ethics is complicated. I’m not saying this means it can’t be reasoned about and discussed. It can! But the sources you’ve cited have shown themselves to be rather shallow.

I encourage everyone to write out your ethical model and put yourself in their shoes and think about how you would weigh the factors.

There is no free lunch. For many practical decisions with high stakes, many reasonable decisions from one POV could be argued against from another. It is the synthesis that matters the most. Among those articles, I don’t see great minds doing their best work. (The constraints of their medium and funding model are a big problem I think.)

Read Brian Christian’s “The Alignment Problem”’s take on predictive policing if you want a specific example of what I mean. There are actually mathematical impossibilities at play when it comes to common sense, ethical reasoning.

Common sense ethical reasoning has never been very good at new or complicated situations. “Common sense” at its worst is often a rhetorical technique used to shut down careful thinking. At its best, it can drive us to pay attention to our conscience and to synthesize.

I suggest finding better discussions and/or allocating the time yourself to think through it. My preferred sources for AI and ethics discussions are highly curated. I don’t “trust” any of them absolutely. * They are all grist for the mill.

I get better grist from LessWrong than HN 99% of the time. I discuss here to make sure I have a sense of what more “mainstream” people are discussing. HN lags the quality of LW — and will probably never catch up — but it does move in that direction usually over time. I’m not criticizing individuals here; I’m commenting on culture.

Please don’t confuse what I’m saying as pure subjectivity. One could conduct scientific experiments about the quality of discussions of a particular forum in many senses. Which places are drawing upon better information? Which are synthesizing it more carefully? Which drill down into detail? Which participants have allocated more to think clearly? Which strive to make predictions? Which prioritize hot takes? Which prioritize mutual understanding?

It isn’t even close.

Opinions and the Overton window are moving pretty rapidly, compared to even one year ago.

* I’ve written several comments about viewing trust as a triple (who, what, why). This isn’t my idea: I stole it.

anon84873628 · 13 days ago
I understand you are criticizing their editorializing, but can't tell if you agree with the conclusions or not. Care to editorialize yourself?
flumpcakes · 14 days ago
This is such a depressing read. What is becoming of the USA? Let's hope sanity prevails and the next election cycle can bring in some competent non-grievance based leadership.
davidw · 14 days ago
This isn't a one-election thing. It's going to be a generational effort to fix what these people are breaking more of every day. I hope I live to see it come to some kind of fruition - I recently turned 50.
inigyou · 14 days ago
Some people are calling it the "American century of humiliation"

No other country that went through a phase like this has ever recovered. Not even in a century.

eunos · 13 days ago
> generational effort to fix

You imply that there are folks willing to fix things, or even to recognize that things are broken in the first place.

mschuster91 · 13 days ago
> It's going to be a generational effort to fix what these people are breaking more of every day.

That assumes you have people wanting to fix what is broken - and I have a hard time believing even now that they are in the majority.

MAGA and their supporters? They want to see the world burn, if only for different motives: the "left behind" people in flyover states just want revenge, the Evangelicals literally believe they can cause the Second Coming of Christ by it [1], the Russia fangroup wants to see Ukraine burn to the ground, and the ultra-libertarian/don't-tread-on-me folks want all government, save maybe a bit of military, to go away. That is what unifies so many people behind the Trump banner.

The problem is, on the left side you got a bunch of people completely fed up as well. Anarchists of course, then you got the "left behind" people who still want revenge on the system but aren't willing to enlist the help of the far-right for that goal, you got revolutionaries of all kind... and you got those who believe that the rot runs too deep to fix by now.

And let's face the uncomfortable truth: every one of them, bar the Evangelicals and the Russia apologists, actually has a decent point in wanting to see the world burn. Post-Thatcher capitalism has wrecked too many lives, the US Constitution hasn't seen a meaningful update in decades and no overhaul in centuries, the "checks and balances" that were supposed to prevent a Trump from reaching office or rising to the position of effective dictator have been all but destroyed, the "American Dream" has been vaporware ever since 2007...

[1] https://www.bbc.com/news/articles/c20g1zvgj4do

this-is-why · 13 days ago
I’ve been called bad things on HN for suggesting there’s even a whiff of corruption in this administration. That alone scares me. Deeply.
Quarrelsome · 13 days ago
there's more money and "don't rock the boat" mentality on here as a consequence of that, and they try to keep the moderation light. So it's just not discussed enough to give people still tragically mired in that tribalism the appropriate level of shame.
saulpw · 14 days ago
Hope is not a plan, unfortunately, so if that's all we've got, I don't have much hope.
jorblumesea · 14 days ago
You mean, what's been happening to the USA? This isn't a new trend: militarization of police, open attacks on democracy, unilateral foreign policy moves.

The country jumped the shark post-9/11 and has been slowly rotting ever since.

rjbwork · 14 days ago
Indeed. Bin Laden succeeded beyond his wildest dreams. He kickstarted our self-destruction.

Dead Comment

Dead Comment

hightrix · 13 days ago
> What is becoming of the USA?

There was a coup by a foreign adversary and Americans lost.

georgemcbay · 14 days ago
> Let's hope sanity prevails and the next election cycle can bring in some competent non-grievance based leadership.

Would be nice, but I have a bad feeling that the impact of widescale mostly unregulated AI adoption on our social fabric is going to make the social media era that gave rise to Trump, et al seem like the good ol' days in comparison.

I hope I am wrong.

ypeterholmes · 14 days ago
The current situation in the US is the depressing thing- articles like this give me hope. Real Americans aren't having these BS authoritarian violations of our constitutional rights.
lm28469 · 13 days ago
All of what's happening is a symptom; there is no reason it would change course with the next elections. All of this is the logical development of decades of cultural, political, and moral rot in US society. Trump isn't a bad moment we have to push through before we get back to the baseline. There has been no serious pushback from anyone so far; it's here to stay.

Dead Comment

gitaarik · 14 days ago
What do you mean? You think any company should do whatever the government tells them?
flumpcakes · 13 days ago
Not at all. It's a depressing read because the US Government is doing such things that would have been considered insane before 2016.