But the "Anthropic fight" is mostly fake. Palantir was using Claude as its base model. Anthropic allegedly took issue with unsupervised kills because the technology wasn't ready (or something along those lines).
Also, I remember reading that this guy has close ties to Anthropic. And I find it suspicious how he came to prominence out of nowhere, like Big Tech and the establishment are propping up podcasts as controlled narrative/opposition. I don't buy any of it.
Agreed. There is way too much money in flight to believe that anything people say in public is true. If there was ever a time to get rich as a spin doctor it’s now.
In my view, this guy's podcast didn't get big talking about AI, it's best known for cold war history and foreign policy discussions with Sarah Paine of the U.S. Naval War College.
Dwarkesh definitely got big in Silicon Valley from his AI podcasts. He's one of the few people who can get famous researchers on and also have them say something genuinely new.
After that, he became well-known to the general public through his Sarah Paine podcasts (which are excellent).
It's entirely fake. Sure, Palantir uses Claude, but it takes about 10 minutes to pull all their federal contracts and realize that what little involvement they have in the kill chain is preliminary.
It doesn't matter what you know so much as who you know. Networking is the most precious currency. He met the right people, got the right guests, and surfed a wave of fortunate occurrences. He was roommates with Dylan Patel of TheInformation and Jon Y of Asianometry, and has since developed a wide range of high-level industry contacts.
Sometimes people succeed without earning it, and what matters is what they do with the success afterwards. I'd say Dwarkesh earned it, but got lucky and caught the right waves, and has surfed the hell out of his success. He's had consistently well informed, level headed takes, and has engaged the field with insight and honest curiosity.
When I see people surf like that, I applaud it. There's nothing grifty or shady; he's just had a great series of excellent opportunities and has played them for everything they're worth. Once he had a few billionaires on, that was all the social cachet he needed to continue attracting guests, high-level researchers, and other figures in AI.
I might be old, but he strikes me as a shallow Valley bro. His CV has nothing of significance. But he had a lot of Big Tech guests and even that Navy intelligence woman. He got a boost from being endorsed by Bezos. It smells of BS to me. Again, maybe I'm just a grumpy greybeard and this is a Gen Z thing.
It seems to me that AI target-selection systems are being used not just for efficiency, but as a way to distance military staff from responsibility for what they are killing. Current AI models naturally speculate and hallucinate if you don’t tightly constrain them. We see this all the time as software engineers when working with agentic coding.
This creates a dangerous dynamic. AI can generate targets that a human operator might not be able to justify manually, and when something goes wrong the blame can always be shifted to the system, such as the recent incident where roughly 180 children were killed due to faulty targeting.
Israel’s way of fighting this war looks more like pure destruction than a conventional military campaign, and AI systems like this are very easy to abuse in that context. At this point it’s clear that even the U.S. is willing to eliminate targets when the collateral damage includes the person’s family or neighbors. I don’t think that would have been acceptable in previous administrations. Israel has lowered the bar.
That may be why Anthropic moved early to denounce this kind of usage, even though they had previously partnered with the Department of War.
Now let’s look at the statements made by Anthropic and Hegseth:
https://www.anthropic.com/news/where-stand-department-war
https://x.com/SecWar/status/2027507717469049070
From Anthropic’s own statement, we hear that they have actually been quite closely partnered. In Hegseth’s tweet we see:
“Anthropic will continue to provide the Department of War its services for a period of no more than six months to allow for a seamless transition to a better and more patriotic service.”
This shows that Anthropic is still being actively used by the Department of War.
My view is that Anthropic and its investors eventually realized that the American war machine will use their technology in reckless ways, and that this will certainly create a massive PR disaster or, in an ideal world, even legal consequences. That realization likely pushed them to adopt what they now frame as a more “humanitarian” position.
"Within 20 years 99% of the military will be AIs"
That smells like such baseless speculation that, from the get-go, I'm not convinced of the author's rigor.
To be fair, we don’t have many government mules these days, but it wasn’t always so, and not so long ago.
The current amount of horsepower on the hoof is a rounding error, but before mechanized farming and war-fighting, these distinctions were the difference.
If we consider the capacity of technology to act as a force multiplier, it is reasonable to assume that current and future AI-assisted fighting forces can achieve more with less traditional materiel and with fewer personnel.
Drones are an especially likely way that these many AIs will become embodied and diversify, in which case I don’t think the percentages are so far-fetched.
https://www.bbc.com/news/articles/c62662gzlp8o
> Further ahead in the future, it wants its machines to be programmed to travel autonomously to a location, carry out its task - such as watching out for advancing enemy soldiers and engaging them if necessary - and then return to base after a certain time.
> The whole background of this AI conversation is that we’re in a race with China, and we have to win. But what is the reason we want America to win the AI race? It’s because we want to make sure free open societies can defend themselves. We don’t want the winner of the AI race to be a government which operates on the principle that there is no such thing as a truly private company or a private citizen.
In the US currently, there are private citizens, and there are 'not-the-1%' citizens, where a Kavanaugh stop is legal, your voter information may be (or may have already been) seized by the DoJ or FBI, you may be tracked by out of state or federal agents on ALPRs with no warrant, for any reason, and where attending a legal protest may have your biometrics added to a database of potential domestic terrorists.
Or maybe your tax money will just be used to blow up unidentified boaters or bomb girls' schools and homes, and you'll get no say in whether that's the case because the elected body that is there to issue a declaration of war (or not) as representatives of you, has abdicated that power to a cabinet of unelected white nationalists.
But go off about how we're such a better country that believes in freedom and goodness.
Great take. If the past year has taught us anything, it’s that the US can’t really be seen as the “good guys” in such a simple way. Many of these things have been happening for years, but war crimes, disregard for international law, blackmailing allies, killing their own citizens without accountability, and allowing foreign governments to heavily influence policy are all troubling signs.
It’s easy to point to China as a place where freedom of speech isn’t present, but try asking members of the current administration or even Supreme Court judges who won the 2020 election and see what kind of responses you get. That alone says a lot about the current state of things.
>It’s easy to point to China as a place where freedom of speech isn’t present, but try asking members of the current administration or even Supreme Court judges who won the 2020 election and see what kind of responses you get.
Freedom of speech and regard for the facts are independent concerns. People absolutely have the right to call out lies about the 2020 election and have repeatedly done so.
Even if more illegal wars are started in the Middle East, even if inequality gets more obscene, VCs on HN are still going to insist that We The Good Guys are the champions of freedom, equality, justice, all the good stuff that we don’t practice (but we have great ideas about).
Add to that all the military posturing over Taiwan and it's clear that it's not "China doesn't do what the US does", it's "China hasn't done it...yet."
The idea that anyone would be better off with China supplanting the US is asinine. This is the same government that committed the Tiananmen square massacre and still doesn't acknowledge that anything happened.
Is Trump really not a dictator? Meanwhile, China has been focusing on domestic development and investing in underdeveloped regions, including across Africa. China hasn't bombed girls' schools and then lied that it was the victims' own country that dropped the bomb.
I love that you are allowed to go off about how we are a worse country without fear of jail or shunning or anything like that. You are using your rights properly!
You are assuming that they are American, that the account is tied to their real identity, and that they are not willing to take risks to state the truth. The Trump administration is already attempting to persecute critics[1], including some for random comments posted online[2]. If "freedom of speech" is your metric for what makes you a better country, you are in fact literally proving their point.
[1] https://www.washingtonpost.com/national-security/2026/02/13/...
[2] https://www.nytimes.com/2026/02/13/technology/dhs-anti-ice-s...
People have also been detained with intention to be deported for their views about Palestine, with online comments being part of how they're chosen for targeting:
[3] https://www.columbiaspectator.com/news/2026/01/28/federal-go...
There was also someone jailed for a month for quoting Trump's own words about a school shooting, "we have to get over it", in the context of Charlie Kirk's death, along with many other noted instances of retaliation against online comments around that incident:
[4] https://www.cnn.com/2025/12/17/politics/retired-cop-jailed-o...
ICE asking for a list of social media profiles of its detractors doesn't sound like "without fear of jail or shunning or anything like that" to me. Through data mining and third parties, the local PD has a dossier on me based on what I write here that would come up if I did something to get their attention. That has a chilling effect on what I say on here in public.
People who buy the USA-vs-China race to a specific goal - do they really believe if China gets "AGI" first, they will immediately try to conquer the USA? How exactly will that go?
It's more likely they will continue expansionist policy in Asia which counters several American diplomatic goals:
1. Democracy and freedom worldwide
2. Economic access+prosperity with Asia
3. Pro-American sentiment
(Not in order of importance, which shifts constantly)
I think assuming China would beat the US in conventional war if they reach 'AGI' first is a stretch, even if this actually grants them a force advantage it's not like the US can no longer reach AGI. The risk is really more that if they reach 'AGI' and subsequently a force advantage, that they would no longer be deterred and more decisively move on Taiwan next year. Taiwan is key to [1] and [2] above.
There is nothing unconstitutional about what's in the first paragraph of your criticism. What would be unconstitutional is restricting your ability to write this criticism, and that right has not been breached.
You _could_ argue that this is a flaw in the constitution, and that none of the above should be legal, and that people who support those things should be restricted in their speech or ability to hold office. This was the status quo in politics for a while! These things have all existed for a long time but this seems particularly targeted at Trump, who was famously banned from most social media platforms for years.
There are a lot of democracies (most of the EU for example) that take this stance on freedoms and will even overturn elections to prevent those who support those policies. The question is really 'does doing that protect freedom and democracy or infringe it?'
As for the second paragraph, this is just a lie: Congress has not abdicated any war powers to the Cabinet. There has not been any declaration of war, and if Congress wanted to stop the DoD, it very much could, and in fact came very close to doing so. If your representative in Congress did not represent your interests (in this case, voted nay), you can call or email them and their office, or vote them out.
> better country that believes in freedom and goodness
I think you're letting your strong feelings here cloud your judgement, you can hold all of these opinions above without needing to fellate China, which is objectively worse on freedoms than the US. It's also important not to conflate "believes in freedom" with "perfectly meets my line of freedom."
> But within 20 years, 99% of the workforce in the military, the government, and the private sector will be AIs. This includes the soldiers (by which I mean the robot armies), the superhumanly intelligent advisors and engineers, the police, you name it.
Frankly, I find that less 'naive' than I do 'dangerously possible'.
Autonomous weaponry is one of the few ways that a fascist state could reasonably maintain violent control over a large and hostile populace.
I guarantee Trump would rather have perfectly obedient killbots than critically thinking soldiers, or even just the 5 murderous assholes required to oversee tasking for 1000 semi-autonomous police drones.
The least plausible part is the private sector, which just doesn't work that way.
>What we’re learning from this episode is that the government actually has way more leverage over private companies than we realized.
Who is learning this for the first time only now? Even just restricting ourselves to the current administration, look at how many times Trump has directed punitive actions against private entities! Look at his actions against law firms like Perkins Coie or Covington & Burling. This is not something that just arose out of nowhere with Anthropic.
This stood out to me too - there's an underlying assumption that private entities _can_ say no to governments, but that's only true to a point. If the government decides it needs AI-powered killbots as a matter of national security, it can and will nationalize whatever entities it needs to build them.
> So what’s the Pentagon’s plan — to coerce and threaten to destroy every single company that won’t give them what they want on exactly their terms?
I mean... isn't that pretty much the way the current administration behaves in general? If the answer to this question is "yes", and the US executive does not in fact share the values of the author about free and open society, then the rest of the article is kinda moot (except the point that we should be talking about these things now, and encouraging congress to act).
The administration believes that rights, in this case the right of corporate existence, are granted by the state. This is opposed to the liberal conception that rights are a product of natural existence - an essential feature of being.
The right of corporate existence is granted (or at least regulated, heavily) by the state.
This administration believes that they don't need to treat all businesses equally under the law, and can use strong-arm intimidation tactics to get what they want. That is the problem.
As much as I dislike Trump, I can't imagine that the military, under ANY administration, would hesitate to seize any technology that they thought was critical and was being withheld from them, especially if they can claim we're at war when special provisions apply.
I remember thinking about this - basically AGI - decades ago, and it was always obvious to me that if you created such a thing there'd come a day when the MIB would be ringing the doorbell.
One thing I’ve never heard a good answer to: If Anthropic is a supplier not to the Department of Defense itself, but to Palantir, why isn’t supply chain risk the proper designation (assuming the government’s concerns with Anthropic having authority over military missions is valid)?
As for whether code written with Claude Code should be so considered - if it’s just code that is subject to human review, I would argue that this use shouldn’t be a supply chain risk. But with Claude Code PR Review and similar products, the chance that an AI product (not limiting to Anthropic here) could own a load-bearing part of the lifecycle of a critical piece of code becomes much larger, and deserves scrutiny.
I'm not sure that "supply chain risk" is even the right term to be discussing.
What Hegseth/Trump want to do is not just stop Anthropic models from being used by any military supplier pursuant to goods/services they are providing to the military, but rather say that if you do business with the military then you must not use Anthropic at all, even if that usage is entirely unrelated to your military contracts.
This is explicitly not what they have done, not how government contractors have ever interpreted this designation, and not something they could do even if they wanted to.
It is also common corporate doctrine to use a subsidiary for government contracting to avoid having to evidence that a commercial vendor is utilized for government, so this won't even be 'annoying' for contractors.
ITAR and compliance frameworks (e.g. FedRAMP and CMMC) already mandate this for any non-US company, yet AWS commercial still has offerings in other countries and from non-US vendors, Palantir still has an IG business, etc.
This admin technically can't do a lot of the things it does. They do it anyway with utter contempt for the rule of law. Congress is useless and gives them a blank check and the Supreme Court just stalls everything using the shadow docket.
Really, Anthropic doesn't seem to be fighting for anyone but a narrow subset of people.
So who cares; none of the big AI providers are particularly ethical. Pick your poison as your conscience and needs allow.
He was first funded by FTX
SBF was in Patel's previous podcast in July 2022 and FTX unraveled in November 2022. Hmm.
https://www.dwarkesh.com/p/sbf
> I flew to the Bahamas to interview Sam Bankman-Fried, the CEO of FTX! He talks about FTX’s plan to infiltrate traditional finance, giving $100m this year to AI + pandemic risk, scaling slowly + hiring A-players, and much more.
And that was right in the middle of FTX being accused by many prominent people.
April 29, 2022 https://x.com/AlderLaneEggs/status/1520023221294145536
June 20, 2022 https://x.com/MartyBent/status/1538645746655936519
SemiAnalysis
Also, somewhat spitefully, I find it funny that he has multiple roommates.
> “Preface to the highest stakes negotiations in history.”
Like come on. The Cuban Missile Crisis, for starters? Bro needs to calm tf down.
More like the past 200 years. America has never been the "good guys", and it is only Americans who seem to think they ever were.
-- signed, rest of the world :|
Better than China as a global model? Still, yes, probably. Potentially. Depends on how the next few years go.
Even if America fails, I’d argue a global republic is a brighter potential future than a global dictatorship.
Just like being a billionaire (or super-wealthy, if you will), you don't get to be a superpower by doing good things.
China and the US can both be bad, and they're both going to use AI for mass internal and external surveillance and weapon targeting.
The part of the Pentagon that did this is, to put it politely, not the part that's good at planning.
A teenager, probably. Not everyone is 100 years old.
Because you can't designate a company a supply chain risk just because you don't like the contract you signed with them.
I speculate we'll discover there are very few unambiguously ethical uses of AI, much less for military applications. Them's the breaks.