The quote in the title is taken totally out of context. It's not entirely clear, but from context it sounds like Mark Milley was just laughing because he realized that he's the "new Oppenheimer", as the current director of nuclear weapons research at Los Alamos.
Can the quote be taken “entirely out of context” if the context itself isn’t “entirely clear”? Or does your interpretation of the quote and its meaning differ from the author’s?
The way the quote is used in the title of the article implies that someone involved with Palantir is referring to their AI as a weapon of mass destruction on par with the atom bomb. But when the quote appears in the article it clearly has nothing to do with AI, nor is the speaker comparing himself to Oppenheimer's role in creating the first atom bomb; he's just noting that he occupies the same job Oppenheimer once held.
> “Let’s say you’re operating in a place with a lot of civilian areas, like Gaza,” I asked the engineers afterward. “Does Palantir prevent you from ‘nominating a target’ in a civilian location?”
> Short answer, no. “The end user makes the decision,” the woman said.
It seems to me that the author of this piece doesn't understand that any system claiming to predict civilians' and combatants' locations better than someone with boots on the ground or eyes in the air on the target would be lying.
Look at something like healthcare charts. Nurses and doctors don't have time to keep systems updated or do data entry because they are busy triaging patients and saving lives. The first thing to go when you have ten things you're supposed to do but only time for five is data entry. This is the bog-standard complaint against all the technology being inserted into healthcare.
War is the same but inverted (serving out death rather than life); there isn't time to sit around filling in info.
Armed conflicts are dynamic environments that demand fast decisions on limited information, and unfortunately it isn't really possible to have a Google maps interface with neatly labeled bounding boxes of "don't worry the bad guys aren't allowed to go inside these buildings, only the civilians".
But if anyone could keep a grasp on that information in order to limit civilian targets, it would likely be those on the ground, who wouldn't have time to enter their latest civilian-vs-combatant intel into a GUI to update some targeting system. So you'd want that ultimate decision exactly where they've placed it: in the hands of the final end user.
Anything else would be exactly the sort of "AI controls the guns/bombs/nukes" nightmare that we all want to avoid. Imagine the end user knows that the GUI is exactly wrong: the area marked in the system as containing civilians actually contains combatants, and the building next door, marked in the system as containing combatants, really contains civilians (they've had a camera on both, or someone observing with binoculars, or something), and the system doesn't let them 'nominate' the target they need to. At that point you can either nominate the civilian-containing building and select a munition that will destroy both targets, or you can do nothing.
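To make the point concrete, here's a minimal sketch (purely illustrative; this is not Palantir's software, and every name in it is invented) of what "the end user makes the decision" looks like when the system's labels are advisory rather than blocking:

```python
from dataclasses import dataclass
from enum import Enum


class Label(Enum):
    CIVILIAN = "civilian"
    COMBATANT = "combatant"
    UNKNOWN = "unknown"


@dataclass
class Nomination:
    location_id: str
    system_label: Label    # the system's (possibly stale) label
    operator_label: Label  # what the person with eyes on it reports now
    operator_id: str
    rationale: str


def nominate(n: Nomination) -> bool:
    """Advisory-only design: the system's label is shown, disagreement is
    logged for later review, but the final call rests with the end user."""
    if n.operator_label == Label.CIVILIAN:
        return False  # the human declines; nothing downstream overrides that
    if n.system_label != n.operator_label:
        # system and operator disagree: record the override, don't block it
        print(f"audit: {n.operator_id} overrode '{n.system_label.value}' "
              f"at {n.location_id}: {n.rationale}")
    return True
```

The nightmare version would be the same function returning False whenever the system's label says civilian, regardless of what the operator can actually see.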
Nothing about this is "AI controls guns/bombs/nukes." The AI simply gives predictions that humans must decide to accept or reject. And no, I don't buy at all that people on the ground in an active war scenario would have any sort of macro view. Without more to back it up, I don't buy your claim.
> I don't buy at all that people on the ground in an active war scenario would have any sort of macro view
They didn't claim this. The claim was that someone on the ground can better discriminate a combatant from a civilian than someone in the air. Modern warfare makes all of this less relevant, unfortunately, since infantry work is less about shooting the enemy than about protecting assets and calling in air and artillery strikes.
>it isn't really possible to have a Google maps interface with neatly labeled bounding boxes of "don't worry the bad guys aren't allowed to go inside these buildings, only the civilians".
This is literally what the US supposedly did in some limited circumstances (churches) during the Gaza genocide, and Israel chose to bomb those targets anyway.
So liability for collateral damage will move from the military to a corporate AI: a bug in a program to be fixed. The reach of war (which already doesn't need the approval of the people) will become cheaper and less accountable.
AI has no more agency, will, or intent than the software warmongers currently use. The problem is that any collateral damage or mistakes will be defended zealously and resolved within the military organization in the same way they are now.
I wasn't implying AI agency, and I believe "AI" is simply a marketing term. I'm suggesting they will shift the liability, in much the same way that "autonomous" car crashes are no longer the fault of the owner and become more of a corporate issue.
But you are more correct. There is no need to shift liability when there is absolutely no recourse for military mistakes.
> They are literally saying, "War is Peace".
No, they are saying that the best option for peace is to make yourself so formidable that nobody wants to go to war with you. The point is NOT to go to war. It's just that the argument isn't disarming, it's the opposite, however counter-intuitive that might seem.
Reciprocally, it's not hard to envision how an overly zealous military-industrial complex could promote the wrong ideas at large. The recent coverage of the Lavender AI is a good example of why you shouldn't stoke the flames of information-fetishizing warmongers.
I could see it as saying that the peace activists are on a path that will not actually lead to peace, but rather to war, and those preparing for war are on a path that will actually lead to peace.
"If you wish for peace, prepare for war" is an old way of phrasing this. It's not a new thought.
If you think the world is better off with the US as a global policing power, then one could argue that having an overwhelming force is key to sustaining peace.
So war isn't peace, but being 10x stronger than everyone else can be.
"The result for citizens in the US is disastrous. It mirrors the decline of the Roman Empire, which spent extravagantly on its legions, and the only growth came from conquering other peoples, looting them, and taxing them. This threat could not be sustained forever, and so the gold and silver coins were reduced in precious metal content, and the treasury (like the USA, which just prints money like a never-ending waterfall) created debased coins, resulting in inflation.
Just as the US doesn't invest in infrastructure the way other countries do, or have an efficient nationalized healthcare system.
Why? We burn trillions on military and weapons. The military-industrial complex must be fed, and it is always hungry.
Think about retirement, healthcare costs, and the greedflation by corporations, as well as the government taxing your Social Security. It is intentional cruelty."
> War against enemies such as ISIS is indeed leading to peace
A better example is nuclear deterrence, which has effectively ended direct great-power state-on-state conflict. War is never peace. But preparing for war protects an existing peace.
Palantir's entire business model is taking on customers/projects that the rest of Silicon Valley refuses to. Not only is there an "underserved" audience, they get to charge a premium for worse products.
The irony is that this is a good reminder of the harsh realities that Oppenheimer himself clearly grappled with - if you refuse to build it, someone with less scruples will.
> Pretty lame excuse. If I don't sell drugs, someone else will.
I think it's more lame to disregard this ideology. What if Germany had the nuke first? "Oh well, at least we held to our morals."
There's this ideal of human sociology, and then there's the reality of what we really are. It takes a unique person to accept that, do the wrong thing for the right reason, and still be able to steer things in the right direction eventually.
I think the comparison is more along the lines of "might as well supply clean syringes rather than let things take their inevitable course otherwise". The point isn't that the victims will then love you and think you're such a stand-up person; it's that you might as well be the one who's hated, but at least you know it was done better than it otherwise would have been. Importantly, this only applies when you are convinced there is some inevitable humanity-destroying technology and you think you can deliver it in a less destructive way than it would arrive otherwise. It doesn't apply to things that are already present, like selling drugs.
That said, I don't think most of what Palantir actually delivers on fits this mold. They're just generally cruddy.
Is that not a good excuse for drug legalisation? If drugs are illegal then we get poor-quality or adulterated drugs, leading to deaths, and funding illicit activities. If we legalise it, we get quality control and tax.
If you refuse to deal with drug addiction, you shouldn't be surprised when drug dealers do.
Yeah it's really just an excuse to be evil while feeling righteous and conflicted about it, and saying "You don't understand" to anyone who objects to your behavior.
If you're doing it, you are already the one with less scruples.
Do you think the victims care about the scruples of the perpetrator?
> The irony is that this is a good reminder of the harsh realities that Oppenheimer himself clearly grappled with - if you refuse to build it, someone with less scruples will.
The dilemma was quite different: if you don't do it, your enemy will beat you to it and will kill you and the people you love. It was a very utilitarian, wartime calculation.
The soft variant you're quoting is just a license to misbehave because others also misbehave - and I don't think that was Oppenheimer's qualm.
> if you refuse to build it, someone with less scruples will.
Every time you see an argument for the inevitability of X - especially something frightening - know that it's just an old rhetorical tactic, even cheap playground trash talk. It's comic book lingo. They want you to quit; they are afraid of what you will do.
The silver lining is that the people with less scruples might also be less competent and their solutions more fragile resulting in operational failure when deployed, in turn saving humanity.
> The silver lining is that the people with less scruples might also be less competent and their solutions more fragile resulting in operational failure when deployed, in turn saving humanity.
Think of the complete lack of agency, the powerlessness, of that perspective. 'If we do nothing, maybe they'll shoot themselves in the foot.' It's 'freeze' in the fight/flight/freeze response to danger.
In fairness, the parent didn't say that's all we'll do, but few talk about actual actions, solutions, with full agency and responsibility.
The solution isn't a silver lining to a cloud, it's what we will do to make a better world. If we don't make one, who will?
Keep in mind that operational failures don't always look like a quiet fizzle. Failures can just as well be catastrophic or can trigger further reactions that become so. I'm not sure there's a silver lining there worth looking for.
The bigger concern is that the failures are either ones that make society worse but nobody will fix, or the failure is catastrophic (e.g. it didn't go full Skynet, but who the heck knew -that- set of datapoints would trigger an autolaunch?)
Maybe. But based on what I have heard, Palantir has no problem finding talent. There are enough people in the world excited about Palantir's... niche... that they have people lining up out the door for new job postings. That, and they have a much stronger meritocracy than most other tech companies feel comfortable enforcing.
It feels very difficult to get a good sense of Palantir's quality. All I know is the rabid fanbase the company has, most of whom have never used a single product they make. I'd love to see some authentic experience reports.
I'm in the corporate space, not the spying/war space, and only have second-hand accounts (from people I trust), but the impression I get is that they peddle unremarkable data analysis products to businesses, relying on you buying lots of dev hours from their team to actually make half the stuff they sold you work.
Basically just boring IBM-type shit, except maybe even less honest and professional.
Most people don't consider themselves to be unscrupulous or to be morally wrong in general.
Anyone who would intentionally choose to be compared to Oppenheimer should sooner be scrutinised for his own ethical dubiousness than for that of his competitors.
The deeper irony is outlined in my sig: "The biggest challenge of the 21st century is the irony of technologies of abundance in the hands of those still thinking in terms of scarcity". I expand on that theme in this essay (from 2010):
https://pdfernhout.net/recognizing-irony-is-a-key-to-transce...
"There is a fundamental mismatch between 21st century reality and 20th century security thinking. Those "security" agencies are using those tools of abundance, cooperation, and sharing mainly from a mindset of scarcity, competition, and secrecy. Given the power of 21st century technology as an amplifier (including as weapons of mass destruction), a scarcity-based approach to using such technology ultimately is just making us all insecure. Such powerful technologies of abundance, designed, organized, and used from a mindset of scarcity could well ironically doom us all whether through military robots, nukes, plagues, propaganda, or whatever else... Or alternatively, as Bucky Fuller and others have suggested, we could use such technologies to build a world that is abundant and secure for all. ... The big problem is that all these new war machines and the surrounding infrastructure are created with the tools of abundance. The irony is that these tools of abundance are being wielded by people still obsessed with fighting over scarcity. So, the scarcity-based political mindset driving the military uses the technologies of abundance to create artificial scarcity. That is a tremendously deep irony that remains so far unappreciated by the mainstream. We the people need to redefine security in a sustainable and resilient way. Much current US military doctrine is based around unilateral security ("I'm safe because you are nervous") and extrinsic security ("I'm safe despite long supply lines because I have a bunch of soldiers to defend them"), which both lead to expensive arms races. We need as a society to move to other paradigms like Morton Deutsch's mutual security ("We're all looking out for each other's safety") and Amory Lovin's intrinsic security ("Our redundant decentralized local systems can take a lot of pounding whether from storm, earthquake, or bombs and would still would keep working")."
Some ideas from me circa 2011 on how security agencies can actually build a more secure world once they recognize the irony of their current approach:
"The need for FOSS intelligence tools for sensemaking etc."
https://web.archive.org/web/20130514103318/http://pcast.idea...
"This suggestion is about how civilians could benefit by have access to the sorts of "sensemaking" tools the intelligence community (as well as corporations) aspire to have, in order to design more joyful, secure, and healthy civilian communities (including through creating a more sustainable and resilient open manufacturing infrastructure for such communities). It outlines (including at a linked elaboration) why the intelligence community should consider funding the creation of such free and open source software (FOSS) "dual use" intelligence applications as a way to reduce global tensions through increased local prosperity, health, and with intrinsic mutual security.
I feel open source tools for collaborative structured arguments, multiple perspective analysis, agent-based simulation, and so on, used together for making sense of what is going on in the world, are important to our democracy, security, and prosperity. Imagine if, instead of blog posts and comments on topics, we had searchable structured arguments about simulations and their results all with assumptions defined from different perspectives, where one could see at a glance how different subsets of the community felt about the progress or completeness of different arguments or action plans (somewhat like a debate flow diagram), where even a year or two later one could go back to an existing debate and expand on it with new ideas. As good as, say, Slashdot [or Hacker News] is, such a comprehensive open source sensemaking system would be to Slashdot as Slashdot is to a static webpage. It might help prevent so much rehashing the same old arguments because one could easily find and build on previous ones. ...
As with that notion of "mutual security", the US intelligence community needs to look beyond seeing an intelligence tool as just something proprietary that gives a "friendly" analyst some advantage over an "unfriendly" analyst. Instead, the intelligence community could begin to see the potential for a free and open source intelligence tool as a way to promote "friendship" across the planet by dispelling some of the gloom of "want and ignorance" (see the scene in "A Christmas Carol" with Scrooge and a Christmas Spirit) that we still have all too much of around the planet. So, beyond supporting legitimate US intelligence needs (useful with their own closed sources of data), supporting a free and open source intelligence tool (and related open datasets) could become a strategic part of US (or other nation's) "diplomacy" and constructive outreach. ..."
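To give a rough, entirely hypothetical flavor of what one record in such a structured-argument system could look like, here is a toy sketch; none of these names or fields come from the linked proposal:

```python
from dataclasses import dataclass, field
from typing import List


@dataclass
class Claim:
    claim_id: str
    text: str
    perspective: str                                    # which assumption set / community it comes from
    supports: List[str] = field(default_factory=list)   # claim_ids this argues for
    rebuts: List[str] = field(default_factory=list)     # claim_ids this argues against
    evidence: List[str] = field(default_factory=list)   # links to data or simulation runs


def unaddressed(claims: List[Claim]) -> List[Claim]:
    """Claims that no other claim supports or rebuts yet: the places where a
    newcomer could extend the debate instead of rehashing old arguments."""
    referenced = {cid for c in claims for cid in c.supports + c.rebuts}
    return [c for c in claims if c.claim_id not in referenced]
```

The point of making arguments data rather than prose is exactly the affordance described above: you can query for what hasn't been addressed yet instead of rehashing it.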
Ah, British press. It's a bit sleazy the way the quote was placed in a headline above a photo of the CEO of Palantir, who is certainly a douchebag but did not compare himself to Oppenheimer. That was some other douchebag.
There was some useful stuff in the article itself.
That's really looking for problems. The Palantir CEO said things that were much more extreme:
> As the moderator asked general questions about the panelists’ views on the future of war, Schmidt and Cohen answered cautiously. But Karp, who’s known as a provocateur, aggressively condoned violence, often peering into the audience with hungry eyes, palpably desperate for claps, boos or shock.
> He began by saying that the US has to “scare our adversaries to death” in war. Referring to Hamas’s 7 October attack on Israel, he said: “If what happened to them happened to us, there’d be a hole in the ground somewhere.” Members of the audience laughed when he mocked fresh graduates of Columbia University, which had some of the earliest encampment protests in the country. He said they’d have a hard time on the job market and described their views as a “pagan religion infecting our universities” and “an infection inside of our society”.
> He began by saying that the US has to “scare our adversaries to death” in war. Referring to Hamas’s 7 October attack on Israel, he said: “If what happened to them happened to us, there’d be a hole in the ground somewhere.”
You cited this as an example of an extreme opinion, but this is bog-standard MAD that’s been a big part of the US strategy since the Cold War.
We don’t want to go to war -> Enemies won’t attack us if they think they can’t accomplish their goals by doing so -> Make sure they understand they will die if they attack us -> no war! (At least, in theory.)
You may disagree with that opinion but it’s not at all extreme, that’s the mindset most of the military has. And it is rooted in the desire to prevent large scale conflict.
>"If what happened to them happened to us, there’d be a hole in the ground somewhere.”
And what about what keeps happening to the Palestinians? Had that happened to them, what would it be? But I get it; it's pretty standard for a warmonger and profiteer to invoke false narratives on a mission to sell more weapons.
> He said they’d have a hard time on the job market and described their views as a “pagan religion infecting our universities” and “an infection inside of our society”.
Really sounds like something right out of the mouth of a certain dead fascist