emsign · 19 days ago
So the Pentagon is strongarming a company into cooperation? That reminds me of how my alcoholic neighbor used to treat his family. It's almost as if someone let a mean drunk be in charge of the Pentagon.
basch · 19 days ago
Without reading every word of every embedded tweet, a part missing from the conversation is HOW they are strongarming.

It isn't in private. It's a public threat in the court of public opinion to apply societal pressure on the company. They are attempting to reshape Anthropic's decision into a tribal one, and hurt the brand's reputation within the tribe unless it capitulates.

throw0101a · 19 days ago
> Without reading every word of every embedded tweet, a part missing from the conversation is HOW they are strongarming.

There are two possibilities:

> The government would likely argue that dropping the contractual restrictions doesn't change the product. Claude is the same model with the same weights and the same capabilities—the government just wants different contractual terms. […] Anthropic would likely argue the opposite: that its usage restrictions are part of what Claude is as a commercial service, and that Claude-without-guardrails is a product it doesn't offer to anyone. On this view, the government is asking for a new product, and the statute doesn't clearly authorize that.

and

> The more extreme possibility would be the government compelling Anthropic to retrain Claude—to strip the safety guardrails baked into the model's training, not merely modify the access terms. Here the characterization question seems easier: a retrained model looks much more like a new product than dropping contractual restrictions does. Admittedly, the government has a textual argument in its favor: the DPA's definitions of "services" include “development … of a critical technology item,” and the government could frame retraining Claude as exactly that. Whether courts would accept that framing, especially in light of the major questions doctrine, is another matter.

* https://www.lawfaremedia.org/article/what-the-defense-produc...

* https://en.wikipedia.org/wiki/Defense_Production_Act_of_1950

A more extreme situation: could the DPA be used to nationalize the model so the government has ownership, and then allow access to more amenable AI players?

EA-3167 · 19 days ago
The top line of the article gives a big old hint: Anthropic signed a contract with the “Killing people” part of the government and now they’re putting on a show. No contract, no leverage.

The only threat the Pentagon has is to terminate the contract.

foogazi · 19 days ago
> It isn't in private.

We don’t know this

ljm · 19 days ago
I wouldn't start up a new company in the US knowing that they are going full tyrant like this.
nickff · 19 days ago
It seems like an unfortunate reality that being a government contractor puts any company in any country at the whim of their government. AFAIK every government has 'pulled the rug out' from at least some contractors at some point.
AbstractH24 · 19 days ago
Which country would you pick?
nickff · 19 days ago
The whole government 'strong-arms' many of its counter-parties in a variety of situations; this is unfortunately nothing new, and far from an innovation by Hegseth. A more clearly illegal example (because the government was acting as a regulator, not a purchaser) is Operation Choke Point, though there are many others: https://en.wikipedia.org/wiki/Operation_Choke_Point

CodingJeebus · 19 days ago
As if governments throughout history haven't constantly used threats to gain leverage? No need to take a personal shot at the guy in charge when this is SOP throughout the administration.
mrandish · 19 days ago
I don't like the "guy in charge" anyway but it's not clear the other major party would stand united against this if they were in power. While I believe they'd probably have hearings and debate it more, this may be one of those issues where the defense establishment usually gets what it wants no matter which party is in control. One party protesting an issue when they're in the minority can just be performative "point scoring" against their opposition - not a guarantee of what result they'd participate in engineering if they were in power.

Much like FISA court-enabled unaccountable surveillance, this may be another of the increasing number of things where neither major party will actually stop it. In terms of real-world outcomes, it doesn't much matter whether the party in control has just enough of their members (in the safest seats) vote with the minority to pass an unpopular measure or if they all vote for it. When the votes are stage managed in advance, the count being close is merely optics to further the narrative that the two major parties represent meaningfully different outcomes on every major issue.

ljm · 19 days ago
Why do you personally feel the need to defend this person given his involvement in what the administration is doing?
EdwardDiego · 19 days ago
Besides he takes enough shots as it is. Ahoyooo! Thank you, I'll be here all week.
quickthrowman · 18 days ago
Why are you defending an (active) alcoholic former news anchor that has no business in his position?
buellerbueller · 19 days ago
Guy is an unqualified alcoholic in charge of our safety. All shots are warranted.
linkregister · 19 days ago
Personal shots at the guy in charge have happened many times in history. Aren't you violating the principle defined in your first sentence?

mrandish · 19 days ago
Obviously, domestic surveillance of U.S. citizens is bad but before even getting to that, the thing that doesn't make sense is: it's illegal for the DoD to do that (unless the citizens are military or DoD employees).

And, does anyone seriously think developing autonomous kill-bots without a human in the loop in the next 3 years is something the DoD should be unilaterally doing now without congressional review? Personally, I think autonomous kill bots with a human in the loop, with congressional review, and even 10 years from now are categorically a terrible idea.

However, I can imagine some reasonable people perhaps quibbling over saying never by citing things like "sufficient safeguards", "congressional oversight" and a future time where AIs don't hallucinate constantly. But none of that is in contention here. The DoD is publicly proclaiming its need to do things right now which are either A. illegal, or B. things no serious person thinks are sane.

NewJazz · 19 days ago
> Personally, I think autonomous kill bots with a human in the loop, with congressional review, and even 10 years from now are categorically a terrible idea.

Pretty sure these exist today...

bigyabai · 19 days ago
https://en.wikipedia.org/wiki/MIM-104_Patriot

  Patriot was one of the first tactical systems in the U.S. Department of Defense (DoD) to employ lethal autonomy in combat.

unyttigfjelltol · 19 days ago
Techno futurist:

1. Builds tool extremely capable of mass surveillance and running autonomous warfighting capabilities.

2. Expresses shock — shock — when the Department of War insists on using the tool for mass surveillance and autonomous warfighting systems.

Thrymr · 19 days ago
I don't doubt that Claude is capable of mass surveillance, but surely it is not too much of a stretch to say it may not be suitable for automated killbots?
godelski · 19 days ago

  We kill people based on metadata 
  - General Hayden
  Former Director of NSA
  Former Director of CIA
This goes far beyond metadata...

[source] https://www.youtube.com/watch?v=tL8_caB35Pg

ozlikethewizard · 19 days ago
I assume the techs at the Pentagon know that, and it'd be more used for intelligence (equally as worrying, because if there's one thing GPTs aren't, it's intelligent)
groby_b · 19 days ago
IDK, depends on how much you care about outcomes.

I don't think Drunk Pete does, very much.

diydsp · 19 days ago
1. The article points out Claude has resisted being trained for that. AI in general could, but Claude cannot.
supern0va · 18 days ago
I think the biggest problem is whether Claude could be tricked into doing so. I could see how mass surveillance could be repackaged as "summarize my conversations", or autonomous killbots could be playing a video game.

spidersenses · 19 days ago
Tech Company: At long last, we have created the Torment Nexus from classic sci-fi novel Don’t Create The Torment Nexus
EA-3167 · 19 days ago
Step 1.5 is also the one being ignored by 95% of comments here: the leverage the Pentagon is using is the lucrative contract Anthropic signed with them. The only threat here is Anthropic sucking up less money from the DoD.
unsnap_biceps · 19 days ago
The article lists three things, two of which are concerning beyond just losing some money. Granted, I have no idea how realistic the latter two are.

    These consequences are generally understood to be some mix of:
    
    canceling the contract
    
    using the Defense Production Act, a law which lets the Pentagon force companies to do things, to force Anthropic to agree.
    
    the nuclear option, designating Anthropic a “supply chain risk”. This would ban US companies that use Anthropic products from doing business with the military. Since many companies do some business with the government, this would lock Anthropic out of large parts of the corporate world and be potentially fatal to their business. The “supply chain risk” designation has previously only been used for foreign companies like Huawei that we think are using their connections to spy on or implant malware in American infrastructure. Using it as a bargaining chip to threaten a domestic company in contract negotiations is unprecedented.

Balinares · 19 days ago
It's been amazing watching them cosplay ethicality while twisting themselves into knots attempting to justify selling their service to Satan.

Who could have predicted that Satan would turn around and screw them, outside of everyone ever. Maybe they should have asked a person instead of Claude.

hoopleheaded · 19 days ago
Exactly - step 2 should be sign $200MM contract with party obviously and extremely interested in mass surveillance and autonomous warfighting capabilities.

Then comes the shock.

xiphias2 · 19 days ago
"Needless to say, I support Anthropic here. I’m a sensible moderate on the killbot issue (we’ll probably get them eventually, and I doubt they’ll make things much worse compared to AI “only” having unfettered access to every Internet-enabled computer in the world). But AI-enabled mass surveillance of US citizens seems like the sort of thing we should at least have a chance to think over, rather than demanding it from the get-go."

Why would killbots be a sensible moderate position, given the number of hallucinations LLMs have right now?

They just need one rm -rf bug somewhere to do something disastrous, and at least Anthropic's CEO understands the limitations of the software.

propagandist · 19 days ago
If the killbots are ok for the periphery, surveillance will surely be arriving for the metropole's inhabitants.
bink · 19 days ago
Imagine a world where in order to do business in the US you must grant the government control of your company. This sounds worse than even the most alarmist China takes.
phkahler · 19 days ago
Sounds exactly like China to me.
MarcelOlsz · 19 days ago
Yeah except in their society some cool shit happens at least.
bdangubic · 19 days ago
This is exactly America’s path. All this time we were “fighting” regimes like China's and Russia's, and now it's “can’t beat them, join them” banana republic
ks2048 · 19 days ago
You can just change the last word and get Latin American foreign policy for the past 130 years:

"Imagine a world where in order to do business in the US you must grant the government control of your country".

tehjoker · 19 days ago
I don't even understand why it is thought that letting a small non-elected clique run economically important infrastructure and control the lives of thousands of employees isn't considered dystopian. Public ownership at least has democratic legitimacy.
mrandish · 19 days ago
My strong initial reaction to even the idea of "fully autonomous AI killbots" made me miss a subtle distinction about what the real danger is. We already have a variety of non-AI killbots. Conceptually, any area denial weapon like a proximity triggered Claymore mine is a non-AI "killbot". And just tying one or more sensors to trigger a gun or explosive already works today without AI. So what's gained by adding full AI?

Such non-AI automatic triggering and targeting can already be constrained by location, range, time frame, remote-control, etc using fairly sophisticated non-AI heuristics. If non-AI devices can already <always pull trigger if X, Y and Z conditions = TRUE>, this is really about not pulling the trigger based on more complex judgements. That really only enables leaving such systems armed and active in far larger, less constrained contexts where 'friend or foe' judgements exceed basic true/false sensor conditions. That the military feels such urgent need for that capability is much more worrying to me.

vonneumannstan · 19 days ago
Point blank one of the most nakedly evil things the government has ever tried to do. Apparently Anthropic's sticking points were no using the model for autonomous kill orders and no mass surveillance...
freejazz · 19 days ago
It probably wouldn't crack the top 100
emsign · 19 days ago
It's just another good example of why everyone should avoid doing business with US companies.
knollimar · 19 days ago
Crazy to me that they don't expect this reaction.

Between military threats and this, are they trying to slaughter the golden geese of things the US has going for it?

colek42 · 19 days ago
The voters and congress tell the military how to use technology, not Anthropic. Shifting the decision to Anthropic takes away power from the citizenship.

Edit: The point is, go vote if you don't agree with what the administration is doing. Somebody will sell the DoD whatever they want no matter what Anthropic does.

enoch_r · 19 days ago
Say I own a spoon company. The government says "hey, I'd like to buy a million spoons from you!" I say "sure, sounds great." We sign a contract stating that I'll give them 1M spoons and they'll send me $1M.

Then the government comes to me and says "hey, actually, turns out we need 500,000 forks and 300,000 knives and only 200,000 spoons."

I say "no, we are a spoon company. Very passionate about spoons. Producing forks and knives would be an entirely different business, and our contract was for spoons."

The military now threatens to destroy my company unless I give them forks and knives instead of spoons.

You say "the voters and congress tell the military how to use utensils, not SpoonCo. Shifting the decision to SpoonCo takes power away from the citizenship."

The military can sign contracts if they wish! They can decline to sign contracts if they wish!

But private citizens can also choose whether to sign or not sign contracts with the military. Threatening to destroy their business if they don't sign contracts the military likes (or to renegotiate existing contracts in the military's favor) is a huge violation.

kalkin · 19 days ago
What percentage of voters do you think want the Pentagon to institute an AI-powered domestic mass surveillance program?
blargey · 19 days ago
The poll linked in the article shows even trump voters have <30% approval for the pentagon’s actions here, so if the citizenship tells the military how to do things…
oceanplexian · 19 days ago
You might want to go look at the laws that were passed in the wake of WWII. The US could trivially nationalize Anthropic if they want to play games with a weapons technology.
mattnewton · 19 days ago
Sounds like the voters and congress should buy from someone else then if this is what they want?
vonneumannstan · 19 days ago
I'm sorry but the Pentagon already had a contract with Anthropic and is now threatening to use the supply chain risk law to essentially kill their entire company because they wanted to re-write the contract. They could easily just not sign the contract and move to a competitor. It's an incredibly disturbing and chilling move by the Pentagon...
buellerbueller · 19 days ago
The government is bound by its contracts. The government is not Darth Vader: "I am altering the deal; pray I don't alter it any further."
sandworm101 · 19 days ago
If voters had any say in how software services were delivered, Windows 11 wouldn't be such a s--t pile.

There is a name for a system of government whereby a ruling party dictates how industry should employ its property, and it isn't democracy.

7777777phil · 19 days ago
sing the "supply chain risk" designation against a domestic AI company is wild. Not sure that tool had vendors who won't rewrite their ToS on demand in mind.

Meanwhile the Pentagon could just build its own capacity. Commercial AI outspends federal science R&D 75:1 right now.