The irony is how quickly we shifted from "AI will help cure cancer and other diseases" to using AI to destroy and kill our enemies. What weird times to be alive!
> The irony is how quickly we shifted from "AI will help cure cancer and other diseases" to using AI to destroy and kill our enemies.
"We" have been mainstream (?) talking about AI killing since (at least) the first Terminator movie in 1984. The geeks/nerds have much earlier: Frank Herbert talked about humans outsourcing their thinking and being 'enslaved' in Dune with the Butlerian Jihad in 1965. Isaac Asimov's Three Laws of Robotics are from 1942.
Magical thinking is rarely constructive as an argument, but as a fig leaf it might keep the opposition talking long enough to force through a fait accompli.
Only a matter of time until the Department of War starts blaming AI for its errors. I predict it will soon replace "I don't remember that" as the standard excuse.
My home-baked threat models predict exactly this. But I imagine there could be some financial-style hiccups too; for sci-fi flavor, perhaps something that ushers in a CBDC as the solution. Stay tuned... And remember, the FDIC cannot handle a large event. That's been admitted.
Should we really buy the "many months of switching difficulty" argument?
Surely the main API surface is an HTTP API like ChatCompletions? If it's the exact shape of Anthropic's API, the difference is surely minor; there are likely at most two API surfaces to support, that's it. If the OpenAI model APIs are more flexible (esp. with the new 1M context of GPT-5.4), then adapting should pose little difficulty. Then there are LiteLLM and similar tools that make it even easier; half of their tooling should be using an abstraction layer like that anyway. Yes, it needs evals and prompt-engineering work to optimise, but they should be used to that by now. Presumably they could even clean-room fine-tune an OpenAI model to match the same Claude shape with low loss. So I don't buy it.
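To illustrate the abstraction point, here's a minimal sketch using LiteLLM's unified `completion` interface; the model strings and prompt are placeholders for illustration, not anything Palantir actually runs:

```python
# Minimal sketch: one call path, two vendors, via LiteLLM's
# OpenAI-style unified interface. Model names are illustrative.
from litellm import completion

def ask(model: str, prompt: str) -> str:
    resp = completion(
        model=model,
        messages=[{"role": "user", "content": prompt}],
    )
    return resp.choices[0].message.content

# Switching providers is (in principle) just a different model string;
# the evals and prompt tuning are the real migration cost.
print(ask("anthropic/claude-3-5-sonnet-20241022", "Summarise this report."))
print(ask("openai/gpt-4o", "Summarise this report."))
```

If the integration is already funnelled through something like this, the "many months" estimate is mostly eval and prompt-tuning work, not plumbing.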
As is pointed out in my links, they are using Palantir's solution, which Palantir has built around Claude AI (including custom agents/chatbots/etc.).
After Trump's tantrum with Anthropic, no doubt Palantir will be switching to OpenAI-based models/agents/chatbots.
From the POV of data analysis and inference, they should be comparable, though Anthropic's AI predictions _might_ be better than OpenAI's (maybe the reason Palantir chose them in the first place).
"You're right, my fears about potentially starting WW3, millions of innocent people being killed and crashing the global economy were over blown...Now I have all the details, I think your plan sounds wonderful! Should we go ahead with that military operation right away?"
"We" have been mainstream (?) talking about AI killing since (at least) the first Terminator movie in 1984. The geeks/nerds have much earlier: Frank Herbert talked about humans outsourcing their thinking and being 'enslaved' in Dune with the Butlerian Jihad in 1965. Isaac Asimov's Three Laws of Robotics are from 1942.
https://www.youtube.com/watch?v=9fa9lVwHHqg
It is actually Palantir using Claude AI in its "Maven Smart System" for real-time battlefield analysis, which is being used by the US military.
More details at - https://news.ycombinator.com/item?id=47275936
Also see "Palantir's Double Conflict of Interest in the War Against Iran" - https://bylinetimes.com/2026/03/05/palantirs-double-conflict...