Regardless of what I think about this technology in particular, I want to respond to this line from the second comment: [1]
> on the contrary, it must be developed.
No, it mustn’t. There’s not a gun to your head, forcing you to do this. You want to develop this technology. That’s why it’s happening.
Technology isn’t inevitable. It’s a choice we make. And we can go in circles about whether it’s good or bad, but at least cop to the fact that you have agency and you’re making that choice.
[1] https://github.com/iperov/DeepFaceLive/issues/41#issuecommen...
From the main page: “Communication groups [...] • mrdeepfakes - the biggest NSFW English deepfake community”, which strains my assumption of good faith debate regarding their ethics.
If someone makes a deepfake on their own computer, watches it, and doesn't share it with anybody, I don't see how that's markedly different (morally) from just imagining the same thing. Some people have a very strong visual imagination and others don't have it at all, it's only fair if they can use a technological substitute.
Also some entertainment works by artificially instilling a desire which cannot be fulfilled. If people can use deepfakes and masturbation to defuse that desire it might be a moral positive for them.
TBH I got curious. Clicked on one celeb, oh man, I've seen 90's Photoshop cut and paste the face onto a different body better than those "deepfakes"...
I understand the sentiment but I disagree. It's just another disruptive piece of technology that people will have to adjust to. If you can't trust a digital face, then nobody will trust one, and using this tech for scams will simply fizzle out after an initial "adjustment period".
I know that sounds rough, but society might react differently than you think: with digital faces useless for identifying people, and maybe even becoming creepy or unsettling because of the implied fakeness, meeting people in real life (opening bank accounts, transactions in general, anything where trust is valuable) will have more value.
Don't fight progress in the hopes you can one day become complacent.
I don't see a fundamental difference between technologies. So if we can frown upon some nation for developing nukes, we can frown upon a group of people for developing this.
On the other hand somebody can say "if we don't, then others will". So regulate and ban the technology, then.
> Don't fight progress in the hopes you can one day become complacent.
I'm not convinced this represents "progress". That aside, the goal of resisting it isn't to allow you to become complacent some day. The goal is to avoid being harmed.
"must" is probably the wrong word, but if they don't develop it, someone else will. At most, it would be delayed, and I'm not sure there's much difference in that.
Can we agree that “someone would have done this anyway” is a poor justification for one’s actions? That’s what you say when you 1) recognize that you did a bad thing, but 2) don’t want to be blamed for it.
Potential technology is an infinite set. A gigabyte contains 2^8,000,000,000 possibilities; a number billions of digits long. Many are junk or functional duplicates, but many are not, and even out of that we choose a small subset that we think is worth pursuing.
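A quick sanity check of that arithmetic, as a sketch assuming 1 GB = 8 × 10^9 bits and counting digits via log10 rather than materializing the number:

    import math

    bits = 8 * 10**9  # one gigabyte, expressed in bits
    # There are 2**bits distinct bit patterns; count the decimal digits
    # via log10 instead of constructing the astronomically large integer.
    digits = math.floor(bits * math.log10(2)) + 1
    print(f"2^{bits} has {digits:,} decimal digits")  # about 2.4 billion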
Saying “everyone with means to develop would need to choose not to” is like saying “every writer would need to choose not to write this book”. It’s not inevitable.
The problem is that there is a big confusion between building the technology and using it in some specific, wrong way. Surely anyone here will agree that building the technology itself should not be forbidden. You can do it for some valid reason, or simply for fun; it’s not for anyone to judge or stop you from doing it. Those who use it for some nefarious purpose must then answer for that. Guns are the obvious comparison (although maybe not the best), as they are also needed for protection. But another example is how explosive devices and technologies can be used in a variety of useful ways, such as mining, terraforming, etc. Using someone’s image without their consent is already a crime, and that applies in many other non-deepfake scenarios too.
What's the confusion? The allegation is that choosing to build this represents, at best, incredibly poor judgment. The GitHub issue and GP post are good faith appeals to kindest possible interpretations of this work. To use your own example, explosives can be useful, but if you discovered that your neighbor was building pipe bombs in his garage, you'd probably want them to stop. "I'm not gonna use them. I just think they're neat." is not a convincing argument for them to continue. To extend the analogy a bit further, this seems comparable to discovering a box full of pipe bombs with a big "free" sign in front of their house.
Let's imagine you have a horrific face malformation, or went through a terrible accident that left you disfigured. Why not allow you to pass for an ordinary person, at least in video calls, so you can have a somewhat normal life?
It took me two minutes to come up with a legitimate use for this technology. I imagine there are many more only a couple minutes away.
I took the OP's comment as meaning "it's not necessary for this to happen" rather than "it's necessary for this not to happen", a nuance which can be confusing, especially for non-native English speakers. The comment in the article suggests there is no choice but for this technology to be developed, and I think the OP is disagreeing with that assessment and saying "there is a choice".
If I've understood that right, both your comments can agree. There are strong arguments to say we _should_ develop this technology, but there also many good counter-arguments.
It is much more useful for privacy-conscious people who don't want to give Zoom more data for its face-detection database. All in all, it's good for humanity when technology can counteract other evil tech that spies on you.
I disagree; society could stop inventing and humanity would likely continue to exist. However, that would require the population to stop being interested in the development of technology, which I think won't happen.
The point is, if this engineer doesn't write this code, someone else will. If a person can imagine a tool, someone will eventually want to make it a reality.
I think deepfakes and AI generation are going to change the world in a way that I'm a bit leery of. It feels safer to just stop here, but that isn't a possibility; even if deepfakes are made illegal, people will still create their own tools and have their own hard drives for storage.
Maybe these things should be illegal or something, hard to say. But the tech will move forward either way, IMO.
We all have responsibilities as technologists to be ethical (to our own standards, at least), and to keep in mind the impact of the things we build. That applies regardless of whether other tech with a negative social impact exists, or for that matter whether it will exist. You can still be a good person, even if no one else is doing so, and you're under no pressure to be a bad person (in this context, anyway).
Agreed. Convincing folks who think that of anything else is quite difficult.
I’ll promote here one of my favorite reads, The Technological Society by Jacques Ellul. It feels very apropos as we embark on a new smorgasbord of technologies that can do us harm and good. There will always be people who want to develop technologies even if they are harmful. We need to ask ourselves not only “what is the benefit?” but “at what cost?”
> Technology isn’t inevitable. It’s a choice we make.
See, and that's the problem: you can only speak for "you", not "we", because someone else will do it. Technology, and with it information, cannot be stopped.
Knowledge isn't useful if you can't apply it in practice.
My work's IT has a habit of sending out phishing emails that closely match the legitimate emails our employees and vendors send. We're supposed to be vigilant and check the Reply-To field, which is hidden by default on mobile devices. (As is the URL a link points to.)
I will never not trigger those. Why? My brain can't tap the header field for every single email I read. And I can't ignore legitimate emails.
Every time on my phone, I'll click the link, then check the URL in the browser. I can't get my brain to check before clicking the link. (Trust me, I'm neuro-divergent and I've tried for years. It won't reprogram.)
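What I wish the mail client did for me automatically is something along these lines (a rough sketch using Python's stdlib email parsing, not anything our IT actually ships):

    from email import message_from_string
    from email.utils import parseaddr

    def reply_to_mismatch(raw_message: str) -> bool:
        """Flag mail whose Reply-To domain differs from its From domain."""
        msg = message_from_string(raw_message)
        from_domain = parseaddr(msg.get("From", ""))[1].rpartition("@")[2]
        reply_domain = parseaddr(msg.get("Reply-To", ""))[1].rpartition("@")[2]
        # Only flag when a Reply-To is present and points somewhere else.
        return bool(reply_domain) and reply_domain != from_domain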
As you can't enforce a stop to its development, it is better to develop it as well, to understand it better and to create countermeasures, like ML models that detect deepfakes.
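Just to give a flavor of the countermeasure side, a toy sketch of a frame-level detector (assuming PyTorch/torchvision; a real detector needs real training data and far more care):

    import torch
    import torch.nn as nn
    from torchvision import models

    # Fine-tune a pretrained ResNet-18 to emit one real-vs-fake logit per frame.
    model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
    model.fc = nn.Linear(model.fc.in_features, 1)

    frames = torch.randn(8, 3, 224, 224)          # stand-in batch of video frames
    labels = torch.randint(0, 2, (8, 1)).float()  # 1 = deepfake, 0 = real
    loss = nn.BCEWithLogitsLoss()(model(frames), labels)
    loss.backward()                               # one illustrative training step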
This could also lead to more or better image/video signing technologies.
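To make the signing idea concrete, here is a minimal provenance sketch (assuming the Python cryptography package; real schemes such as C2PA are far more involved):

    import hashlib
    from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

    # Toy provenance: the capture device signs a hash of the footage,
    # so any later face swap invalidates the signature.
    key = Ed25519PrivateKey.generate()
    footage = b"...raw video bytes..."
    digest = hashlib.sha256(footage).digest()
    signature = key.sign(digest)

    # verify() raises InvalidSignature if the bytes were tampered with.
    key.public_key().verify(signature, digest)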
And yes, just because someone develops this in the open doesn't mean that someone else is not developing it in parallel, but hidden.
There is ZERO credibility to the claim that this development will be used to "create a countermeasure", especially when mrdeepfakes is a key communication channel for it.
There's a book in Dutch literature, The Assault by Harry Mulisch. It's a great read if you're interested in multi-faceted morality. It deals with guilt and responsibility when there's a chain of things that need to happen for a bad outcome.
During WWII the protagonist's parents are shot by Germans after a body of a collaborator was found in front of their house. The parents were arguing with the soldiers about something when arrested and ended up shot during the arrest.
The collaborator was shot by the resistance in front of the neighbours' house, and the neighbours moved the body in front of the protagonist's house.
Over the years, he encounters many people involved in this event, and starts seeing things from many sides. One of the themes explored is: who bears moral responsibility for his parents' death? The Germans for shooting them? His mother for arguing? The neighbours for moving the body? The resistance for shooting the collaborator? The collaborator for collaborating? All of their actions were a necessary link in the chain that led to their death.
One of the characters utters a simple and powerful way of dealing with that morality: "He who did it, did it, and not somebody else. The only useful truth is that everybody is killed, by who he is killed, and not by anyone else."
It's a self-serving morality, because the character was part of the resistance group that shot the collaborator, in a time when reprisals were very common. But it's also very appealing in its simplicity and clarity.
I find myself referring back to it, in cases like this. In the imagined future, where this tech is used for Bad Things, who is responsible for those Bad Things? The person that did the Bad Thing? The person that developed this tech? The people that developed the tech that lead up to this development?
I'm much inclined to only lay blame on the person that did the Bad Thing.
I think the appropriate way to understand blame and guilt is as social technologies. Who is responsible for some event depends on perspective. Any actor should assign blame, guilt and responsibility in a way that leads to preferred outcomes in the future, e.g. by selecting other agents to collaborate with, communicating rules or contracts and enforcing incentives. Different actors may assign blame differently and there need not always be an objectively correct assignment.
For example, in the story above, the argument against the Germans being to blame would be that the protagonist has no agency with respect to them being there and shooting people. The counter argument "He who did it, did it, and not somebody else" is shifting perspective from the protagonist to society, which does have agency with respect to the Germans shooting people.
I think so too. If, the next time around, you can change something you would have done so that fewer people suffer or die, then it seems like you should go ahead and do that. In the end, this is the simple rule.
Sometimes you just didn't have the complete picture and you might have been acting ideally given the limited information you had; that is, if you changed your behavior, it might change for the worse in other situations. It's not about self-punishment, it's about learning!
Yeah, it is the Germans, for shooting them. Everyone else there is a victim. Blaming the mother for arguing, or blaming the neighbours for not wanting to be shot themselves, is just victim blaming at its finest.
It was Germany who created the situation as part of their quest for world dominance, it was Germany who trained their soldiers to be violent and cruel, who created the policies and it was German soldiers who shot.
You do realize that “Germany” does not exist as a real-life entity with behavior and agency, in a way that would make assigning blame to it meaningful. In fact, it could be argued that assigning blame to the entire country of Germany, namely after WWI, was the very cause of Hitler coming to power in the first place.
In my experience, in the US, the lawyers will find the "deep pockets" to go after, and may establish precedent.
In the US, we have the Second Amendment, which has probably prevented many, many lawsuits against gun manufacturers and dealers (but there have still been a few).
Unless there's a constitutional amendment protecting the tech, it's pretty likely that some hungry lawyers will figure out how to go up the food chain and get some money.
If the tech is used to go after politicians and oligarchs (almost guaranteed), you can bet that some laws will appear, soon.
In the US, the threat of lawsuits governs many decisions. It's a totally legitimate fear, and politicians are notorious for protecting their hunting grounds and their privileges.
Look at the fight over encryption. It is a "no-brainer," if you are aware of the tech, yet politicians have a very real chance of doing great damage to it.
The Second Amendment doesn't prevent lawsuits against gun manufacturers. The PLCAA [1], passed in 2005, is what provides special legal cover to the gun industry.
If you'll allow me to be cynical for a moment: I believe the PLCAA was passed because the people in power felt safe from guns, behind their security and metal detectors. I don't think they feel safe from deep fakes or hackers (nor is there much lobbying money in it), so a similar law protecting this technology would never pass.
[1]: https://en.m.wikipedia.org/wiki/Protection_of_Lawful_Commerc...
This technology is protected by the First Amendment. The First Amendment is the main reason why the law does not have much to say about what kinds of programs people cannot share.
> I'm much inclined to only lay blame on the person that did the Bad Thing.
There's no technology that can cure humanity of moral failing. Most things contain potential good and evil uses.
You remind me of a story I was told years ago which describes a series of "good" and "bad" events and each subsequent event changes the seeming meaning of the prior one.
The part I remember is where a man is given a horse and rejoices at his good fortune and then he is thrown from the horse, breaking his leg, and he decries his misfortune. Then war breaks out and he is not conscripted because his leg is in a cast and he again feels relieved and fortunate.
If you get a motorcycle license in the US, you’ll likely take the MSF safety course where one point is hammered home: a crash is caused by an interaction of factors.
Imagine a car suddenly swerves in front of a motorcycle rider, which causes him to crash. Perhaps it was a rainy day, the rider was talking on the phone, they were traveling slightly above the speed limit, and they were sitting in the car’s blind spot. It was the car that was at fault for causing the crash, but if one of the other factors in the rider’s situation was eliminated, he may have been able to find a safe escape path to perform an emergency maneuver and avoid the crash.
Ultimately, it’s the final piece in a chain of events that directly causes it, but there are a multitude of variables that are necessary to lead to the situation. Blame can be assigned to a higher degree for a single party, but I believe blame can also be applied to varying degrees to multiple parties.
It seems the authors of the tool in question do not see where their technology fits into a larger puzzle. They may not be the ones who ultimately use it for malice, but it is worrying that they so readily shrug off ethical considerations when they have agency over this piece of the equation.
I don't think this is a good way to characterize the situation. Here, someone is saying: look, these are the consequences of your actions. And the developers are washing their hands of those consequences in advance, knowing full well what they are. This isn't a series of disconnected accidents leading to an outcome.
A better analogy might be a guy who likes to hand out free bolt cutters on the street.
And so every executioner, and not the malfeasant lawyer, is responsible for the death of the wrongly executed?
A better heuristic, and almost as simple, is to rate responsibility in proportion to power, i.e. those with the most leverage to steer events take the most blame. Blame is information. It has no mass, and can be divided effortlessly.
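If you wanted to operationalize that, the simplest version is shares proportional to leverage (a toy sketch; the actor names and numbers are made up):

    def blame_shares(power: dict[str, float]) -> dict[str, float]:
        """Divide 100% of the blame in proportion to each actor's leverage."""
        total = sum(power.values())
        return {actor: p / total for actor, p in power.items()}

    print(blame_shares({"attacker": 5.0, "toolmaker": 3.0, "platform": 2.0}))
    # {'attacker': 0.5, 'toolmaker': 0.3, 'platform': 0.2}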
That's a very convenient kind of morals for the people who invested billions of dollars into developing this tech, fully aware of what it could be used for. (And OpenAI was aware; it's their entire justification for not publishing their models.)
Again and again I am astonished that people without ethics exist, that they are confident in what they are doing and that they appear to be completely unable to reflect upon their actions. They just don't care and appear to be proud of it.
If this is used to scam old people out of their belongings, then you really have to question your actions and, imho, bear some responsibility. Was it worth it? Do the positive uses outweigh the negatives? They cite examples of misuse of other technology as if that would free them of any guilt. As if previous errors would allow them to do anything, because greater mistakes were made.
You are not, of course, completely responsible for the actions others take but if you create something you have to keep in mind bad actors exist. You can't just close your eyes if your actions lead to a strictly worse world. Old people scammed out of their savings are real people and it is real pain. I can't imagine the desperation and the helplessness following that. It really makes me angry how someone can ignore so much pain and not even engage in an argument whether it's the right thing to do.
What's as worrying, judging by the comments here and in that GitHub thread, is that there is no correlation between technical ability and ethical understanding. Perhaps naively, I'd have thought someone intelligent enough to develop a technology like this would also be intelligent enough to understand the complex ethical issues it raises. It seems, unfortunately, that there is no such correlation.
In fact, anecdotally, it seems the people with the technical ability are least likely to have a nuanced understanding of the ethical impact of their work (or, more optimistically, it's only people with the conjunction of technical ability and ethical idiocy who would work on this, and we're not seeing all the capable people who choose not to).
Also, what's with all the people in this thread coming up with implausible edge cases in which deep fake tech could be used ethically to justify a technology that will very obviously be used unethically in the vast majority of cases? It's almost useless for anything except deception—it is intrinsically deceptive. All the 'yeah but cars kill people so should we ban all cars?' comments miss the obvious point that cars are extremely useful, so we accept the relatively small negatives. The ethical balance is the other way around for deep fake tech. It's almost entirely harmful, with some small use cases that might arguably be valuable to someone.
> Perhaps naively, I'd have thought someone intelligent enough to develop a technology like this would also be intelligent enough to understand the complex ethical issues it raises. It seems, unfortunately, that there is no such correlation.
Correct. There is a tendency to think that because a person is exceptionally intelligent or skilled in one area, they must also be intelligent in other areas. It's simply not the case. An expert is authoritative in the areas of their expertise, but outside of those, their opinions are no more likely to be correct than anyone else's.
This error is often leveraged in persuasion campaigns -- thinking that, for instance, a brilliant physicist's opinions on social policies are more likely to be accurate than any random person on the street.
I can already fire off emails and text messages claiming to be whomever I want. I can even hire impersonators and lookalikes. I don’t get the big push against these newer technologies.
Agree, I've come to believe that social ethics is something that needs to be indoctrinated from an early age, otherwise it can't naturally be developed in many people.
If you need to indoctrinate somebody with a certain belief from an early age, then your belief must be so poor that it cannot be used to convince adults, so how can such an action be justified?
At one point we tried to teach "good citizenship" at schools (and in most of Europe, we still do). Then the unethical people started claiming that schools were brainwashing their children, and here we are.
I don't understand why you're astonished? Psychopaths are all around us, from corporate "leadership" to government lobbyists to fake reviewers on Amazon and the App Store. Confidence men (and women!) have existed since the dawn of time. From selling "snake oil," to 419 scams, to "Microsoft" computer support, technology has only ever aided the process of helping psychopaths find victims. Talent and technical ability are orthogonal to empathy. Always has been; always will be.
This deepfake stuff is a difference in degree, not kind, and once these people figure out how to use AI to help them, everyone is going to have to level up their defenses all over again. You ain't seen nothing yet.
HN seems to actively maintain a cognitive dissonance: on the one hand producing inspirational stories of entrepreneurs changing the world, and on the other abandoning all hope that technology/market forces can be controlled in any way.
I'm thinking now this is to justify away the collective guilt of bringing harmful products into the mainstream.
It seems to come from the same origins as "crypto can't be regulated", "government can't do anything", "it's ok because it's legal", and it always worries me to not see any sort of moral stance being taken anymore.
The old: Encryption is evil because it is used to hide information... let's ban encryption. Breaking encryption is evil because it can be used to steal information... let's incriminate breaking encryption.
Let's just get it over with... Technology is evil. Let's ban technology.
Funny that you picked on this one specific example out of a broader point.
Every time serious regulators have made a strong move, crypto markets and the general population's access to crypto products have been affected. I'm pretty sure that if the US government tried to ban all crypto networks, most activity would stop, leaving only a few die-hard activists and some actual criminals.
Well for me, crypto is highly moral and should be developed to prevent artificial trading restrictions and eliminate borders. There are many people who consider individual freedom to be of highest importance, and who are willing to develop any technology promoting it.
You seem to be thinking in ideology-driven terms, while I'm more goal-driven. Is absolute freedom the end-all purpose? Shouldn't I be allowed to murder my enemies, if I'm willing to risk the vengeance of their group? I know it's an absurd question, but my point is that the social negotiation over where regulation and limits should come into effect has been abandoned by many.
I think it's a good thing. Not that it's being used for evil things, but because it should help make it obvious that you can't trust anything you see on a screen.
Using fake media to trick people into believing anything used to be a privilege reserved for nation states and the ultra rich. Now that _ANYONE_ and their cat can do it, it should follow that nobody can believe anything that's on a screen anymore (this comment included).
> but because it should help make it obvious that you can't trust anything you see on a screen.
I think this in general (text, audio, video) will produce a societal earthquake, in a way sending us back into the Middle Ages: you can't really verify things yourself, because everything can be faked; all you can do is anchor yourself to some trusted authority.
Imagine you read on hacker news an (AI generated) article about a new breakthrough in physics - new convincing evidence for the cyclical universe hypothesis. In the discussion, there will be a lot of seemingly informed comments arguing about this (all AI generated), links to video presentations from reputable scientists (all AI generated) and papers (all AI generated). It will be all wrong (= there wasn't any breakthrough in the first place), but impossible for a non-physicist to assess correctly.
In a way it will lead to centralization of the internet and of knowledge; people will stick only to their trusted sources. For some it may be Wikipedia and the NYTimes, for others some AI-generated island of knowledge/manipulation.
I also wonder what effects this will have on social platforms when 99% of content is generated by AI.
My thought is: if you think conspiracy culture-war shitshows are bad now, with talking heads on both sides saying whatever feeds red meat to their base, imagine trying to bridge the gap in a world with infinite red-meat generators competing with each other for audience eyeballs forever.
In accepting an honorary degree from the University of Notre Dame a few years ago, General David Sarnoff made this statement: “We are too prone to make technological instruments the scapegoats for the sins of those who wield them. The products of modern science are not in themselves good or bad; it is the way they are used that determines their value.” That is the voice of the current somnambulism. Suppose we were to say, “Apple pie is in itself neither good nor bad; it is the way it is used that determines its value.” Or, “The smallpox virus is in itself neither good nor bad; it is the way it is used that determines its value.” That is, if the slugs reach the right people firearms are good. If the TV tubes fire the right ammunition at the right people it is good. I am not being perverse. There is simply nothing in the Sarnoff statement that will bear scrutiny, for it ignores the nature of the medium, of any and all media, in the true Narcissus style of one hypnotized by the amputation and extension of his own being in a new technical form. General Sarnoff went on to explain his attitude to the technology of print, saying that it was true that print caused much trash to circulate, but it had also disseminated the Bible and the thoughts of seers and philosophers. It has never occurred to General Sarnoff that any technology could do anything but add itself on to what we already are. (p. 11)
Marshall McLuhan - Understanding Media
What's your alternative? Attempt to hide the fact that everything can be easily faked, and that you don't need to be a Hollywood studio to do it?
What good do you think that does? Instill a false sense of security and trust in a medium which cannot bear it? How's that not worse?
Even outright banning this technology won't make a dent in the bad uses, since those individuals are highly motivated, highly competent, and don't care in the least about the ban.
It's not like the development won't happen, it will just be hidden, making it less obvious to the potential victims (increasing the size of this group, since you then need to be "in the know").
So no, the only way forward is to have this out in the open as much as possible so that as many people as possible become aware of how trivial it already is to fake stuff.
I am already not safe from it, but it is less obvious because people like to think the technology doesn't exist merely because it is kind of expensive.
I'm convinced that this idea that technology is completely neutral is wrong. It is not neutral in the face of human psychology. The human species is a different animal than the human individual; it is powerful, but it does not make truly conscious decisions.
Once you let the genie out of the bottle, a wish will be made. A technology might not be inherently bad, but neither are knives, and we don't leave those lying around.
That said, it is the human species that develops technology, rarely is one human individual capable of holding back a technology.
'Technology is neutral' is simply a banal observation that technology is only ever used by humans who can decide what to do with it. It's not saying that a new technology doesn't enable new and terrible human choices, simply that those choices rest with the people, not the technology itself.
Knives seem like the perfect example, really. We do leave them lying around in our drawers, usually under zero real security against any malicious guest, but we recognise it is about the choices of the people we invite into our houses, not the existence of the knives themselves, that is the real danger.
You can certainly argue though as you have that humans simply shouldn't have the choice to do certain things - what comes to mind immediately is nuclear weapons.
Knives always seem like a perfect example because they really do have so many good uses, along with some bad ones. But just because two pieces of technology both have good uses and bad uses doesn't mean they're morally equivalent. As to your example, even if I trust people who enter my house I wouldn't want to store a nuke in my kitchen drawers :) And there are many ways to discuss the ethics of tech beyond "good and bad uses", such as to what extent a user of the technology controls it or is controlled by it, what are the negative externalities, does it increase power asymmetry, etc.
> Knives seem like the perfect example, really. We do leave them lying around in our drawers, usually under zero real security against any malicious guest, but we recognise it is about the choices of the people we invite into our houses, not the existence of the knives themselves, that is the real danger.
The problem comes from scale; it is a knife-vs-nuke question, but you seem to stop a bit too short in your reasoning. Yes, both can kill; they just don't operate on the same scale.
You could forge documents 400 years ago; that doesn't mean it would be ethical to release software that lets you forge any document at scale, instantly and for free, in a single click.
One steam engine is a marvel, 1.4B ICE cars on earth is a nightmare
It's always about scale, almost never about the original purpose/intent of the tech. Modern tech develops and spreads infinitely faster than anything from even 20 years ago.
Yeah, my argument with the knives was pretty weak and I was trying to make the point you make in your last paragraph. I think I meant to type "lying around children". I should've said bombs.
The idea technology is completely neutral is obviously stupid and obviously wrong.
Neil Postman has two amazing books on this subject.
It is not even that technology is good or bad, any technology will have good and bad aspects but the main issue is that we have surrendered culture and agency to technology as a society.
There is no going back or fixing it now. The fix would have been a culture that devalues bad uses of technology, so that there is no money in creating something shitty. We basically have the opposite of that. Even completely useless technology is worth a ton of money because of our culture.
The solution is not Luddism either. Especially a ridiculous techno-Luddism.
For me, the Faustian bargain with technology has already been signed in blood and there is nothing to do other than enjoy the roller coaster ride.
The purported neutrality of technology is an old saw that people who are practitioners of technology ought to discard. The fact they haven't shows how terrible a job we're doing educating them.
Technology is not neutral and never will be because the nature of technology is to be a means to an end, and as such, to be inseparable from its end.
Technologies enable certain future outcomes.
That some of those outcomes are deemed "good" or "bad" just depends on whether they're compatible with the ends that were pursued initially and the worldview that supports them.
Some technologies have unexpected outcomes, but that doesn't make those outcomes independent of those technologies, either.
> It is not neutral in the face of human psychology.
It's worth noting that nuclear weapons seem to have prevented more deaths than they caused. The MAD doctrine has prevented direct confrontation between the superpowers and limited their military conflicts to proxy wars.
And this is a technology that has almost no application beyond vaporizing cities almost instantaneously.
This is a fair point, but we have no way of knowing how close we came or will come to a nuclear apocalypse, so the risk posed by nuclear technology is difficult to evaluate.
We are a single human lifetime into a world that has that technology. If we are to live we will live with it for thousands more. Let's give it a bit more time before we decide how well it's turned out?
If it happens even once, those “lives saved” gains will be wiped away. If the stock market rises for a hundred years and then literally goes to zero, you've still lost the long game, buying short-term gains at the expense of the long term.
If our children all die in a nuclear holocaust, it doesn’t really matter how many lives we saved, does it?
Can anybody demonstrate a legitimate use of deepfake software? Has it ever been used to facilitate a socially positive or desirable outcome? While I recognize my experiences are far from definitive, I hazard most would be hard pressed to name anything positive that came out of deepfake technology.
edit: I’ll take your knee-jerk DV, and any others, as an admission of an inability to speak to positive utility of this technology.
Edit: this comment is referring to deepfakes more broadly, and is not a commentary on the validity of the source linked here. I can't speak to the reputability of the community developing this, or how it has been used so far.
--
I'm a fairly visual and imaginative person, and it's pretty easy for me to come up with some very useful applications. No hostility intended, genuinely sharing my thoughts:
1. CGI for video editing - lower the bar of entry to de-age actors, or use a stand-in. Actor can't make it to a shoot that day? No worries, replace their face in post easily.
2. Identity protection - on a cold call with someone who reached out to you, when you're not sure whether they're safe or dangerous, this could be a good way to protect yourself.
3. Social media content for clients - become a fake avatar for hire essentially, customize your narrator for any video or brand. Video call centers with fake video (they already have voice modifiers and fake names), Enhanced VTuber sort of things (virtual avatars for streaming).
4. Unexpected outcomes: for example Holly Herndon created (and sold) access to an AI replica of her singing voice (n1), and I could see artists selling or renting access to their faces.
Obviously this can and will be used maliciously, but I personally could see myself using it for more positive reasons.
n1. https://holly.mirror.xyz/54ds2IiOnvthjGFkokFCoaI4EabytH9xjAY...
First let me thank you for a thoughtful riposte! I do appreciate that. My question was an honest one and I imagine, not the easiest to conceive an answer to. I genuinely appreciate your taking the time to share your thoughts.
With that said, almost every use case cited was about financial or monetary gain, whereas I enquired about social utility and value.
That dishonesty, i.e. the creation of a fake avatar, is cited as being of social utility strikes me as a reach. I don't see how adding more dishonesty and facades to the world adds social value, but then I may just be of limited imagination.
#2 sounds really interesting! I'm not sure of the psychological ramifications, but I can't imagine they'd be different from any other sort of prosthesis, save for an inability to actually touch it.
I could see it being used in AR to conceal identity to facilitate more equitable medical outcomes, I suppose.
Thank you again for the input! I was honestly at a loss for positive applications outside of financial gain.
I haven’t seen any ads driven by deepfake, or at least I don’t think I have. That advertising bit does sound rather obnoxious though!
That is pretty neat, any sort of art does add cultural and social utility to a degree. Thanks for the heads up, because just about every mention I’ve seen published on the topic is more or less a horror story. I wasn’t being facetious in my query. Thanks again for the input!
What is the boundary between "deepfake" and "photoshop" (i.e. regular human "fake" or edit?)
I suspect it's going to become popular for both consensual-deepfake of oneself (PR, magazines, actors, pop stars, any form of public speaker) and "bought out" deepfake (actors selling out their image rights and then losing creative control; dead actors, etc.)
The political-deepfake is really going to accelerate the debate over how much free speech permits you to just lie about people, though.
The number of man-hours that would be necessary to plausibly fake even a short film in Photoshop, if I had to guess. It strikes me as analogous to owning sidearms versus BMGs and rocket launchers. One of these tools makes doing bad things far easier.
Another analogy. Say somebody makes some hacking kit. Say it uses zero day exploits to compromise Windows, Mac, and Linux. Would any of us take issue with that? Would it be a different story if it was made into a push-button tool like WinNuke was in the 1990s? Or automated to the extent that somebody who can make a word doc could employ it against your systems? Is there really no feasible line of distinction here, in your eyes?
The social good of deepfake technology will be the destruction of the unwarranted power which has been given to image, and which the Internet has amplified.
Think about it: people choose to trust or not trust based on a face. When deepfaking becomes a tool easily available to every average joe, appearance will lose some of its power. People will learn to lose their irrational trust in faces.
The technology isn't just deepfakes; deepfaking is one capability of techniques that do more general object/person replacement. It is such a small step from things like digital de-aging to a full fake face that working on one makes the others possible, and trying to ban one will have unintended consequences for the others.
I've been playing TTRPGs via video chat with my friends since the pandemic, and I've often thought about setting up video avatars for our characters. It would be especially cool for the DM to be able to switch personas on the fly, and for players to have their characters in the video chat.
This kind of technology is far too useful to repressive regimes and those who wish to do nasty things with it.
This means that the incentive to develop this technology is already there, and so it WILL be developed no matter how much people wish it wouldn't.
The only difference at this point is whether some of the implementations are developed in public view or not. If none are public, then all of them will be done in secret, and our opportunities to develop countermeasures will be severely hampered by having fewer eyes on it, and a smaller entry funnel for potential white hats.
> Also some entertainment works by artificially instilling a desire which cannot be fulfilled. If people can use deepfakes and masturbation to defuse that desire it might be a moral positive for them.
Let's flip the argument upside down. Let's imagine you want to produce porn content, but don't want to be recognized. This technology allows that.
You might not like the art, but deepfakes are not harmful.
How many generations did it take for society to adapt to the industrial revolution? What were the spasms which occurred during that period?
As for deepfakes, we know before we even begin that it breaks (utterly moots) social norms about identity, trust, reputation.
How many people are going to die while societies adapt?
Is that price acceptable for the benefit of more amusing viral videos on TikTok?
That is a nice piece of history.
Ethics is important. And one consequence is: I may die but at least it was not because I made it possible.
I think it is inevitable actually. The history of humanity seems to suggest so at least.
"Moral intelligence" (or "MQ") and "moral cripples".
...are my provisional terms for talking about this.
In my view, it's one of the things that makes HN great.
So, what are you saying? The church has been around for millennia, and despite all those years of lies we are all still here, thriving and improving.
Bullshit spreads faster, but truth wins out over centuries because it doesn't go away when you stop looking at it.
> The solution is not Luddism either. Especially a ridiculous techno-Luddism.
Cough.
Some legitimate uses that come to mind:
- Representing assistive robots/software with friendly human faces
- Reconstructing the likeness of people with permanent facial injuries when connecting with family
Other, questionably "legitimate", commercial uses are already in production:
- auto-generated corporate training videos
- "Personalized" advertising
I'm hating it already.
Sure, but there's a value in delaying that development. Delaying harm is a valid tactic.
> Sure, but there's a value in delaying that development. Delaying harm is a valid tactic.
It is a horrible tactic when your adversary has ultra-deep pockets & mountains of bodies to throw at the problem.
Whatever's being developed here has 1/10th the capability of the tech being developed in black site facilities at the behest of national militaries.