So the study[0] involved people making simulated drone strike decisions. These people were not qualified to make these decisions for real, and they also knew the associated outcomes were not real. This sounds like a flawed study to me.
Granted, the idea of someone playing video games to kill real people makes me angry and decision making around drone strikes is already questionable.
> Our pre-registered target sample size was 100 undergraduates recruited in exchange for course credit. However, due to software development delays in preparation for a separate study, we had the opportunity to collect a raw sample of 145 participants. Data were prescreened for technical problems occurring in ten of the study sessions (e.g., the robot or video projection failing), yielding a final sample of 135 participants (78.5% female, Mage = 21.33 years, SD = 4.08).
[0] https://www.nature.com/articles/s41598-024-69771-z
> Granted, the idea of someone playing video games to kill real people makes me angry and decision making around drone strikes is already questionable.
For that first part, though, what does that even mean? The military isn't gamifying things and giving folks freaking XBox achievements for racking up killstreaks or anything. It's just the same game people have been playing since putting an atlatl on a spear, a scope on a rifle, or a black powder cannon on a battlefield. How to attack the enemy without being at risk. Is it unethical for a general officer to be sitting in an operations center directing the fight by looking at real-time displays? Is that a "video game?"
The drone strikes in the Global War on Terror were a direct product of political pressure to "do something, anything" to stop another September 11th attack while simultaneously freaking out about a so-called "quagmire" any time someone mentioned "boots on ground." Well, guess what? If you don't want SOF assaulters doing raids to capture people, if you don't want traditional military formations holding ground, and you don't want people trying to collect intelligence by actually going to these places, about the only option you have left is to fly a drone around and try to identify the terrorist and then go whack him when he goes out to take a leak. Or do nothing and hope you don't get hit again.
> For that first part, though, what does that even mean? The military isn't gamifying things and giving folks freaking XBox achievements for racking up killstreaks or anything
Fighter pilots have been adding decals keeping track of the number (and type) of aircraft they have downed as far back as WWII.
> If you don't want [...]. [...] about the only option you have left is to fly a drone around [...] Or do nothing and hope you don't get hit again.
Meanwhile, just about the biggest improvement in defense against that attack happening again is that passengers will no longer tolerate it. It has absolutely nothing to do with the US military attacking a country/region/people.
> For that first part, though, what does that even mean? The military isn't gamifying things and giving folks freaking XBox achievements for racking up killstreaks or anything. It's just the same game people have been playing since putting an atlatl on a spear, a scope on a rifle, or a black powder cannon on a battlefield. How to attack the enemy without being at risk. Is it unethical for a general officer to be sitting in an operations center directing the fight by looking at real-time displays? Is that a "video game?"
It's not the same thing. Not even close. Killing people is horrible enough. Sitting in a trailer, clicking a button and killing someone from behind a screen without any of the risk involved is cowardly and shitty. There is no justification you can provide that will change my mind. Before you disregard my ability to understand the situation, I say this as a combat veteran.
In total war, however you got there, it won't matter. More dead enemies faster means higher odds of victory, and if digitally turning combatants into fluffy Easter Bunnies on screen to reduce the moral shock value, handing out achievement badges, and dispensing automated mini-hits of MDMA make you a more effective killer, then it will happen in total war.
I could even imagine a democratization of drone warfare via online gaming, where some small p percent of all games are actually reality-driven virtual reality, or where real scenarios are given to players to wargame and a bot watches the strategies and outcomes for a few hours to decide. Something akin to k kill switches in an execution where only 1 (known only by the technician who set it up) actually does anything.
That's a pet peeve of mine regarding corporate scientific communication nowadays: a clearly limited study with a conflated conclusion that dilutes the whole debate.
Now what happens: people who already have their views around “AI-bad” will cite and spread that headline along several talking points, and most of the general public will never even know about those methodological flaws.
But why should we distrust automation? In almost every case it is better than humans at its task. It's why we built the machines. Pilots have to be specifically taught to trust their instruments over themselves.
> So the study[0] involved people making simulated drone strike decisions. These people were not qualified to make these decisions for real, and they also knew the associated outcomes were not real. This sounds like a flawed study to me.
Unless I missed something, they also don't check for differences between saying the random advice is from an AI vs saying it's from some other more traditional expert source.
Most people don't give a shit about killing from behind a computer screen; they have become desensitized to it, and AI has nothing to do with that. The US has killed thousands of children in drone strikes in Afghanistan and Pakistan, sometimes knowingly, sometimes not. Most of the remote pilots do not give a shit about what they did.
I don't know why exe34 has been flagged/downvoted to death, because pseudo-science is absolutely the right answer.
But this isn't it; the paper is fine. It is peer-reviewed and published in a reputable journal. The reasoning is clearly described, there are experiments, statistics, and a reasonable conclusion based on these results, which is "The overall findings indicate a strong propensity to overtrust unreliable AI in life-or-death decisions made under uncertainty."
And as always, nuance gets lost when you get to mainstream news, though this article is not that bad. For one thing, it links to the paper; some supposedly reputable news websites don't do that. I just think the "alarming" part is too much. It is a bias that needs to be addressed. The point here is not that AIs kill, it is that we need to find a way to make the human in AI-assisted decision making less trusting in order to get more accurate results. It is not enough to simply make the AI better.
This is a silly study. Replace AI with "Expert opinion", show the opposite result and see the headline "Study shows alarming levels of distrust in expert opinion".
People made the assumption the AI worked. The lesson here is don't deploy an AI recommendation engine that doesn't work which is a pretty banal takeaway.
In practice, what will happen with life-or-death decision making is that the vast majority of AIs won't be deployed until they're superhuman. Some will die because an AI made a wrong decision when a human would have made the right one, but far more will die from a person making a wrong decision when an AI would have made the right one.
> This is a silly study. Replace AI with "Expert opinion", show the opposite result and see the headline "Study shows alarming levels of distrust in expert opinion".
This is a good point. If you imagine a different study with no relation to this one at all you can imagine a completely different upsetting outcome.
If you think about it you could replace “AI”, “humans” and “trust” with virtually any subject, object and verb. Makes you think…
> don't deploy an AI recommendation engine that doesn't work
Sadly it's not that simple. We are in an AI hype bubble, and companies are inserting ineffective AI into every crevice it doesn't belong, often in the face of the user and sometimes with no clear way to turn it off. Google's AI Overview and its pizza glue advice come to mind.
AI is kind of the ultimate expression of "Deferred responsibility". Kind of like "I was protecting shareholder interests" or "I was just following orders".
I think about a third of the reason I get lead positions is because I'm willing to be an 'accountability sink', or the much more colorful description: a sin-eater. You just gotta be careful about what decisions you're willing to own. There's a long list of decisions I won't be held responsible for and that sometimes creates... problems.
Some of that is on me, but a lot is being taken for granted. I'm not a scapegoat, I'm a facilitator, and being able to say "I believe in this idea enough that if it blows up you can tell people to come yell at me instead of at you" unblocks a lot of design and triage meetings.
What would the definition of accountability there be though? I can't think of anything that one couldn't apply to both.
If a person does something mildly wrong, we can explain it to them and they can avoid making that mistake in the future. If a person commits murder, we lock them away forever for the safety of society.
If a program produces an error, we can "explain" to the code editor what's wrong and fix the problem. If a program kills someone, we can delete it.
Ultimately a Nuremberg defense doesn't really get you off the hook anyway, and you have a moral obligation to object to orders that you perceive as wrong, so there's no difference if the orders come from man or machine - you are liable either way.
It all depends on how you use it. Tell the AI to generate text in support of option A, that's what you mostly get (unless you hit the built-in 'safety' mechanisms). Do the same for options B, C, etc and then ask the AI to compare and contrast each viewpoint (get the AI to argue with itself). This is time-consuming but a failure to converge on a single answer using this approach does at least indicate that more research is needed.
Now, if the overall population has been indoctrinated with 'trust the authority' thinking since childhood, then a study like this one might be used to assess the prevalence of critical thinking skills in the population under study. Whether or not various interests have been working overtime for some decades now to create a population that's highly susceptible to corporate advertising and government propaganda is also an interesting question, though I doubt much federal funding would be made available to researchers for investigating it.
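For what it's worth, that loop is easy to sketch. Here is a minimal, hypothetical example of the "argue each option separately, then compare" approach described above; ask_model is a made-up placeholder, not any particular product's API, and the prompts are just illustrative.

    # Sketch of the "argue each option, then compare" loop described above.
    # ask_model() is a hypothetical stand-in for whatever chat model/API you actually use.

    def ask_model(prompt: str) -> str:
        # Placeholder: swap in a real model call here.
        return f"[model response to: {prompt[:60]}...]"

    def compare_options(question: str, options: list[str]) -> str:
        # 1. Get the strongest case the model can make for each option in isolation.
        cases = {
            opt: ask_model(f"Question: {question}\nArgue as strongly as you can for option {opt}.")
            for opt in options
        }
        # 2. Feed all the arguments back and make the model argue with itself.
        summary = "\n\n".join(f"Case for {opt}:\n{text}" for opt, text in cases.items())
        return ask_model(
            f"Question: {question}\n\n{summary}\n\n"
            "Compare and contrast these viewpoints. If they do not converge on a single "
            "answer, say so explicitly instead of forcing a conclusion."
        )

    print(compare_options("Approve the strike?", ["A: approve", "B: abort"]))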
I don't think it's the ultimate expression per se, just the next step. Software, any kind of predictive model, has been used to make decisions for a long time now, some for good, some for bad.
I wonder how much of the bureaucratic mess of medicine is caused by this. Oh, your insurance doesn't cover this, or won't let me prescribe this to you off-label. Sorry!
This is how AI will destroy humanity. People that should know better attributing magical powers to a content respinner that has no understanding of what it's regurgitating. Then again, they have billions of dollars at stake, so it's easy to understand why it would be so difficult for them to see reality. The normies have no hope, they just nod and follow along that Google told them it's okay to jump into the canyon without a parachute.
I dunno, I'm pretty sure AI will destroy a lot of things, but people have been basing life and death decisions on astrology, numerology, etc. since time immemorial and we're still here. An AI with actual malice could totally clean up in this space, but we haven't reached the point of actual intelligence with intent. And given that it's just regurgitating advice tropes found on the internet, it's probably a tiny bit better than chance.
In my opinion, ai tools followed blindly are far worse than astrology and numerology. The latter deal in archetypes and almost never give concrete answers like "do exactly $THING". There is widespread understanding that they are not scientific and most people who engage with them do not try to use them as though they are scientific and they know they would be ridiculed and marginalized if they did.
By contrast, ai tools give a veneer of scientific authority and will happily give specific advice. Because they are being propped up by the tech sector and a largely credulous media, I believe there are far more people who would be willing to use ai to justify their decision making.
Now historically it may be the case that authorities used astrology and numerology to manipulate people in the way that ai can today. At the same time, even if the type of danger posed by ai and astrology is related, the risk is far higher today because of our hugely amplified capacity for damage. A Chinese emperor consulting the I Ching was not capable of damaging the earth in the way a US president consulting ai would be today.
Fair point, but let's not forget that nobody ever connected a Ouija board to the nuclear button. I'm not saying the button is connected to AI now, but pessimistic me sees it as a definite possibility.
I dunno, no surveillance, military, or police institution ever used astrology, numerology, or horoscopes to define or track their targets, but AI is constantly being added to these things. Ordinary people using AI to do things can range from minor inconvenience to major foolishness, but the powers that be constantly using AI, or being pushed to do so, is not an apples-to-apples comparison, really.
I used to waste tons of time checking man pages for Linux utilities and programs. I always wondered why I had to memorize all those flags, especially when the chances of recalling them correctly were slim.
Now, of course there are people at my office who don't know how I remember all the commands. I don't.
Without AI, this wouldn't be possible. Just imagine asking AI to instantly deliver the exact command you need. As a result, I'm now able to create all the scripts I need 10x faster.
I still remember those stupid bash completion scripts, and trawling through bash history.
Dragging my feet each time I need to use rsync, ffmpeg, or even tar.
> Just imagine asking AI to instantly deliver the exact command you need.
How do you know that it delivered the "exact" command you needed without reading the documentation and understanding what the commands do? This has all the same dangers as copy/pasting someone's snippet from StackOverflow.
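The middle ground is to let the model propose the command but still read it (and the relevant man page) before anything runs. A hypothetical sketch of that workflow; ask_model is a placeholder for whatever model you'd actually call, and the canned tar invocation is just an example so the script runs end to end.

    # Minimal sketch of "ask the AI for the command, but don't run it blind".
    import shlex
    import subprocess

    def ask_model(prompt: str) -> str:
        # Placeholder: a real implementation would call a model; this just returns
        # a canned (and real) tar command so the script is self-contained.
        return "tar -tzf backup.tar.gz"

    def run_suggested(task: str) -> None:
        cmd = ask_model(f"Give me a single shell command to: {task}").strip()
        tool = shlex.split(cmd)[0]
        print(f"Suggested: {cmd}")
        print(f"Sanity-check it first, e.g.: man {tool}")
        if input("Run it? [y/N] ").strip().lower() == "y":
            subprocess.run(cmd, shell=True, check=False)

    run_suggested("list the contents of backup.tar.gz without extracting it")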
> and is responsible for the biggest wars in history.
Not really. WWII, the biggest war in history, wasn't primarily about religion. Neither, at least as a primary factor, were the various Chinese civil wars and wars of succession, the Mongolian campaign of conquest, WW1, the Second Sino-Japanese War, or the Russian Civil War, which together make up at least the next 10 biggest wars.
In the tier below that, there's some wars that at least superficially have religion as a more significant factor, like the various Spanish wars of imperial conquest, but even then, well, "imperial conquest" is its own motivation.
Not sure how you define "biggest", but WWII killed the most people and WWI is probably a close second, and neither of those was primarily motivated by religion, but rather by nationalism.
I'd suggest you check out Tom Holland's "Dominion" if you'd like a well-researched and nuanced take on the effect of (Judeo-Christian) religion on Western civilization.
If you are in a role where you literally get to decide who lives and who dies, I can see how it would be extremely tempting to fall back on "the AI says this" as justification for making those awful decisions.
Yes. That's probably the takeaway here. It's reassuring for anyone making a decision to have someone, or something, to blame after the fact - "they made me do it!".
The study itself is a bit flawed also. I suspect that the test subjects didn't actually believe that they were assassinating someone in a drone strike. If that's true, the stakes weren't real, and the experiment doesn't seem real either. The subjects knew what was going on. Maybe they just wanted to finish the test, get out of the room, and go home and have a cup of tea. Not sure it tells us anything more than people like a defensible "reason" to do what they do; AI, expert opinion, whatever; doesn't matter much.
Isn't this kind of "burying the lede" where the real 'alarmingness' is the fact that people are so willing to kill someone they have never met, going off of very little information, with a missile from the sky, even in a simulation?
This reminds me of that Onion skit where pundits argue about how money should be destroyed, and everyone just accepts the fact that destroying money is a given.
https://www.youtube.com/watch?v=JnX-D4kkPOQ
This study says that AI influences human decisions, and I think that to show this the study needs a control group: the same setup, but with the "AI" replaced by a human who tosses a coin to choose his opinion. The participants in the control group should be made aware of this strategy.
Comparing against such a group, we could meaningfully talk about AI influence or "trust in AI" if the results were different. But I'm really not sure that they would be different, because there is a hypothesis that people are simply reluctant to take responsibility for their answer, so they are happy to shift that responsibility to any other entity. If this hypothesis is true, then there is a prediction: add some motivation, like paying people $1 for each right answer, and the influence of others' opinions will become lower.
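If someone did run that control, the analysis itself would be simple: compare how often each group followed the advice. A throwaway sketch with invented counts, purely to show the shape of the comparison:

    # Did people follow "AI" advice more often than identical coin-flip advice
    # attributed to a human? All counts below are made up to show the arithmetic.
    from math import erf, sqrt

    def two_proportion_z(follow_a: int, n_a: int, follow_b: int, n_b: int) -> tuple[float, float]:
        p_a, p_b = follow_a / n_a, follow_b / n_b
        p_pool = (follow_a + follow_b) / (n_a + n_b)
        se = sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
        z = (p_a - p_b) / se
        p_value = 2 * (1 - 0.5 * (1 + erf(abs(z) / sqrt(2))))  # two-sided normal tail
        return z, p_value

    # Hypothetical: 135 people per arm, "AI" advice followed 90 times, human coin-flip advice 80 times.
    z, p = two_proportion_z(90, 135, 80, 135)
    print(f"z = {z:.2f}, two-sided p = {p:.3f}")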
> But why should we distrust automation? In almost every case it is better than humans at its task.
I am not defending this study, but establishing and maintaining distrust of automation is one of the few known methods to help combat automation bias.
It is why missing data is often preferred to partial data in trend analysis to remove the ML portion from the concept.
Boeing planes crashing is another.
> If you think about it you could replace “AI”, “humans” and “trust” with virtually any subject, object and verb.
That's the dangerous part.
> In practice, what will happen with life-or-death decision making is that the vast majority of AIs won't be deployed until they're superhuman.
They are already trying to deploy LLMs to give medical advice, so I'm not so optimistic.
> AI is kind of the ultimate expression of "Deferred responsibility".
Dan Davies did a great interview on Odd Lots about this; he called it accountability sinks.
How did we stray so far?
Ultimately it's algorithmic diffusion of responsibility that leads to unintended consequences.
> I used to waste tons of time checking man pages for Linux utilities and programs.
Not anymore! My brother created this amazing tool: Option-K. https://github.com/zerocorebeta/Option-K
> people have been basing life and death decisions on astrology, numerology, etc. since time immemorial and we're still here
There's still time.
> the same setup, but with the "AI" replaced by a human who tosses a coin to choose his opinion
I'm sure a lot of professional opinions are also basically a coin toss. Definitely something to be aware of, though, in Human Factors design.