> Authors and contributing developers assume no liability and are not responsible for any misuse or damage caused by the use of this program.
Anything that can be created, will be created. However, that doesn't free you from all moral culpability. If you create something, make it freely accessible and easy to use, then I think you are partly responsible for its misuse.
I'm not saying that they shouldn't have created this, or that they don't have the right to release it. But to create it, release it, and then pretend that any misuse was entirely separate to you is at best naive.
I’d argue there is a moral imperative to create and release tools like this as free and open source software so that anyone has access to them and can choose whether to use them, rather than only sophisticated and well resourced adversaries.
IMO the creators should feel good about their actions, even if they feel bad or apprehensive about the direction of the world because this technology exists at all.
> there is a moral imperative to create and release tools like this as free and open source software so that _anyone_ has access to them
That's an argument I have a lot of time for, but it needs to be made, rather than - as at present - having the whole issue sidestepped. With any tool like this, I think there's a need for ethical due diligence as much as there is for engineering, and there's no evidence that this aspect has been considered at all.
Does this apply to nuclear weapons? To biological weapons?
“Tool is expensive” is a real, effective deterrent to frivolous use of a tool. There’s a reason we pool resources into governments, which is explicitly so we (via a government) have capabilities that we (as individuals) don’t have.
I'm not saying it's a good idea to give everyone a gun, but I do like the argument that the disadvantaged have the same opportunity to pull a trigger. I hope humanity learns wisdom as quickly as we innovate. We've made it this far..
We open sourced 3d printed AK-47s. We disclaim any responsibility for the ensuing mass shootings.
We produced the software to run the drones. We didn't personally deploy them to kill those children.
We built software to guide the rockets. We didn't personally fire them at those hospitals.
We chose to start up our company providing profiles of suspects using scraped, publicly available data to any government agency. What they do with the information isn't our problem.
We wrote the software, but it's not our fault or problem what happens because of who uses it and how they use it. Our intent was good. The market and demand was there. What's wrong with providing a supply? How many software creators feel culpable? Vanishingly few. Who cares when comp is high?
I can’t see a way this technology could be used defensively. Wide access just leads to more abuse. There’s no principle by which it serves some sort of justice to make some crime more accessible.
It’s surprising how often this argument is made compared to, say, equal-opportunity access to fancy cars or good housing.
This has a clear benign use. Of course it sucks that you can also use it in a hostile manner - but the fact that this tool is publicly available rather than hidden in the pocket of some unscrupulous blackhat means that every space that uses verification with these methods can now incorporate this type of testing. That's a net benefit for society.
I do think disclaimers like this are a little juvenile (it reeks of a US-ian litigation mentality), but you can easily imagine why they put it there. Perhaps instead of the author being less naive, you need to be more empathetic.
More generically, I think there's a big difference between releasing proof of concepts, and fully weaponized tools. While the latter is also usable by red teams, it also gives attackers (who often wouldn't have the resources to build the same) the weapons they need.
Personally, I like when people release proof of concepts, and hate it when they release weaponized tools. Especially when I inevitably end up reading reports where APT groups are using those tools.
This is a tool with a clear benign use, along with a bunch of malign ones. If you create something with significant potential for harm, then you should at least think about that, the potential consequences, and possible mitigations.
I wouldn't term it a failure of empathy to ask that people consider the impact of their actions on others. I totally get why they have the disclaimer, and I wouldn't ask them to remove it, but I don't think it's enough. Given the clear potential for harm here, the potential uses and misuses of this tool should have been addressed directly.
What do you do? IMHO, the only naive person here is you.
People should be happy that there are still white hats reporting exploits, even for no profit. I personally switched to the gray market; I can't take the shit we get from companies anymore.
This tool was released publicly only to bring awareness to the topic. Everybody else who needs to exploit this already has these tools, developed in private.
> If you create something, make it freely accessible and easy to use, then I think you are partly responsible for its misuse.
That's a dangerous precedent. Would you apply the same logic to a kitchen knife? Or if for some reason only freemium products count (not sure why), then a pentesting tool?
I understand the underlying point you are trying to make, but what are you proposing as an alternative, exactly? Who gets to decide which products fall into a gray zone, whilst others are deemed only fit for bad use? We already see this kind of shoddy thinking leading to keeping DALL-E 2 out of the public's hands (or at least that is their claim).
A knife is a multi-purpose tool that can be used as a weapon, and one that has so many important non-weapon uses that not having it would cause far more harm than having it would. This is intended primarily as a weapon, even when used defensively. There's an important qualitative difference there, though it's tempting to gallop down the slippery slope.
Regardless, I'm not proposing to ban either knives or this tool - I've been very clear about that. I do think that - with anything that has potential for harm - it's important for people to consider the possible consequences and to actually engage in the discussion about usage, rather than either washing their hands of the issue or declaring that any consideration will soon lead to arresting everyone for everything.
This is a path we've trodden countless times. With some things, we've collectively decided that no controls are necessary. With other things - poisons, nuclear weapons, a host of things depending on location - society has enacted controls of various efficacy and validity.
The responsible choice, regardless of whether you end up being for or against any given restriction for [thing], is to spend time thinking about it, discussing it, and - particularly when you've chosen to release something - acknowledging the potential issues.
I disagree. The moral responsibility really rests on the person who uses the tool.
For instance, I once cheated by using gcc/godbolt to generate assembly output for a class from C code. By this logic, Richard Stallman should be blamed for my misconduct.
There are any number of reasons for swapping another face onto your own, many of which are simply good fun. If you choose to use this for scamming or perverted reasons, so be it.
Moral posturing aside, perhaps there could be an invisible watermark or something included by default to easily identify less technically inclined actors as users of this tool.
Parents of one of the victims of the recent elementary school shooting in Uvalde, Texas are attempting, or preparing, to sue the gun manufacturer of the semiautomatic rifle that was used in that event. Will they sue, will they win? I don't know.
Do guns kill people or do people kill people? Could be a relevant analogy.
They arrest people for making and selling meth every day. You have a bias because making deep fakes is not illegal yet. Knives are used for cooking; what alternative good do deep fakes provide?
The main value in releasing tools like this is to demonstrate weakness in our current security controls. A key weakness of biometrics is that there is no secret data. Open source tooling like this helps people understand that.
I wish I didn’t have to scroll this far to find your rational perspective. Let's take the famous LockPickingLawyer of YouTube: is he responsible for every crime where the thief defeats a lock that he has demonstrated the weaknesses of? I would say “no!”.
Exposing a weakness puts the onus of securing said weaknesses on those that sell technology/devices/services that market themselves as “secure”.
Unless we cut off the chain of responsibility somewhere, the creators of the programming language, the computer and its components, as well as the people who have designed the computer and those who mined the required minerals are responsible as well.
What's the difference between this and Metasploit, sqlmap, ... ? Not saying you're wrong, it's just that while these tools have valid and legal uses (pentesting) they're also used in black hat scenarios and one could say the same about DOT.
I think that's part of what makes me uneasy about this - it's a solely legal disclaimer for something with a moral dimension, and it reads to me like the abdication of responsibility, rather than just a shield against litigation.
We're actually super fortunate that the development of deepfake technologies has been done relatively out in the open, with source code, concepts, and pre-trained models often being readily shared. This allows for a broader-based understanding of what is possible, and hopefully the development of ways for folks to inoculate themselves, or at least have some societal-level resistance to being hoodwinked by them. If this tech were only developed in secret, and being used in a targeted manner, who knows what kind of large-scale manipulations would be underway.
So, if we take your naive take to its logical conclusion, the Apache Foundation should be considered partly responsible for all the malware distributed via their HTTP server?
There is a conceptual difference between releasing a weapon (even if for research/defensive purposes) and releasing tools which could later be adapted negatively. That's not to say that weapons should never be created/released, but that there is an extra onus on the creators to at least consider the harm they are enabling and possible mitigations.
The Apache Foundation - great example, thank you - despite not creating weapons, has clearly put thought into how their work interacts with wider society and how to ensure positive outcomes as far as possible [1]. People absolutely don't have to agree on moral issues, but it is irresponsible not to have considered them.
[1] https://apache.org/theapacheway/index.html
> pretend that any misuse was entirely separate to you is at best naive.
No, it is naive to pretend facial recognition is worth anything when creating tools to defeat it is a mere academic exercise in reading papers and implementing algorithms described in those.
Do you ask for the same level of culpability from a hammer manufacturer? Should hammer makers have trouble sleeping at night because someone bludgeoned another person to death with a hammer made by them?
I am closer to your line of thinking than not, but, at the same time, from where I sit, it seems that data monitoring has already gone too far and regular users have to have a way to circumvent it.
For that reason alone, this tool, even if it results in some abuse, just evens out the playing field.
The same applies to a lot of security-oriented tools, most notably Metasploit, which faced these same accusations when it came out long ago; nowadays it's no big deal, as exploitation frameworks are more accessible.
The same applies to a lot of niche technologies that can be misused.
This is morally corrupt, dangerous and would lead to an oppressive, violent society. A knife maker killed for murders, a watch maker killed for the tyranny of time.
We should do the exact opposite. Science and reason would cease to exist otherwise. Individuals wouldn't need to be held accountable, innovators/engineers/scientists/entrepreneurs would be. End of a free society.
Have you thought of the chilling effect this might bring?
I've been extremely clear throughout this comment section - I don't want censorship, I don't think this should be banned, and nowhere have I called for legal consequences for releasing this. I want people to think about their actions, and to discuss the ethical issues arising from them.
And yet you've jumped to calling me morally corrupt because I want to murder craftsmen. That's not a reasonable reading of my comments, or in any way a proportionate response.
If you want to talk about chilling effects and the importance of science and reason, how would you describe your comments? You're shouting down discussion with wild accusations.
What authors _could_ do is to add some kind of secret watermark that would only be shared with select government agencies and perhaps software companies that could be trusted to keep it secret.
That way, the software could be used for pen testing, but it could cause a silent trigger to go off.
That could even be a way to monetize the software....
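To make the watermark idea concrete, here is a minimal sketch, purely illustrative and not something this tool implements: hide a fixed identifier in the least-significant bits of a frame's blue channel using NumPy. The TOOL_ID value and function names are hypothetical, and a real forensic watermark would need to be far more robust (spread-spectrum, surviving re-encoding and cropping).

    import numpy as np

    # Hypothetical identifier a build of the tool might embed (illustrative only).
    TOOL_ID = 0b1011001110001101  # 16-bit pattern

    def embed_watermark(frame: np.ndarray, ident: int = TOOL_ID) -> np.ndarray:
        """Hide a 16-bit ID in the LSBs of the first 16 blue-channel pixels."""
        out = frame.copy()
        bits = [(ident >> i) & 1 for i in range(16)]
        blue = out[..., 2].reshape(-1)          # blue channel, flattened copy
        blue[:16] = (blue[:16] & 0xFE) | bits   # overwrite least-significant bits
        out[..., 2] = blue.reshape(out[..., 2].shape)
        return out

    def detect_watermark(frame: np.ndarray) -> int:
        """Read the 16-bit ID back out of the same pixel positions."""
        blue = frame[..., 2].reshape(-1)
        return sum(int(blue[i] & 1) << i for i in range(16))

    # Usage: mark a blank 720p frame and recover the ID.
    frame = np.zeros((720, 1280, 3), dtype=np.uint8)
    marked = embed_watermark(frame)
    assert detect_watermark(marked) == TOOL_ID

Whether such a mark could stay secret, or survive a determined user stripping it, is a separate question from the mechanics shown here.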
I work at Axis, which makes surveillance cameras.[0] This comment is my own, and is not on behalf of the company. I'm using a throwaway account because I'd rather not be identified (and because surveillance is quite controversial here).
Axis has developed a way to cryptographically sign video, using TPMs (Trusted Platform Module) built into the cameras and embedding the signature into the h.264 stream.[1] The video can be verified on playback using an open-source video player.[2]
I hope this sort of video signing will be mainstream in all cameras in the future (i.e. cellphones etc), as it will pretty much solve the trust issues deep fakes are causing.
[0] https://www.axis.com/ [1] https://www.axis.com/newsroom/article/trust-signed-video [2] https://www.axis.com/en-gb/newsroom/press-release/axis-commu...
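The general shape of such a scheme, as a hedged sketch rather than Axis's actual format or key handling: a per-device private key, ideally locked inside the TPM, signs a hash of each video segment, and a player or platform verifies the signature against the device's certified public key. Using the Python cryptography package and Ed25519 purely for illustration:

    from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey
    from cryptography.exceptions import InvalidSignature
    import hashlib

    # In a real camera the private key would live inside the TPM / secure element
    # and never be exportable; generating one in software here is purely illustrative.
    device_key = Ed25519PrivateKey.generate()
    device_pub = device_key.public_key()

    def sign_segment(h264_bytes: bytes) -> bytes:
        """Sign a hash of one video segment; the signature would travel as stream metadata."""
        digest = hashlib.sha256(h264_bytes).digest()
        return device_key.sign(digest)

    def verify_segment(h264_bytes: bytes, signature: bytes) -> bool:
        """Playback-side check against the device's certified public key."""
        digest = hashlib.sha256(h264_bytes).digest()
        try:
            device_pub.verify(signature, digest)
            return True
        except InvalidSignature:
            return False

    segment = b"\x00\x00\x00\x01...fake h264 payload..."
    sig = sign_segment(segment)
    print(verify_segment(segment, sig))                 # True
    print(verify_segment(segment + b"tampered", sig))   # False

The hard parts this sketch skips are exactly the ones debated below: certifying and distributing per-device public keys, and keeping the key from being extracted or fed synthetic sensor input.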
It shouldn't be too hard to film a deepfake movie from a screen or projection in a way that doesn't make it obvious it was filmed. That way, the cryptographic signature will even lend extra authenticity to the deepfake!
>> I hope this sort of video signing will be mainstream in all cameras in the future (i.e. cellphones etc), as it will pretty much solve the trust issues deep fakes are causing.
> It shouldn't be too hard to film a deepfake movie from a screen or projection in a way that doesn't make it obvious it was filmed. That way, the cryptographic signature will even lend extra authenticity to the deepfake!
Would you even have to go that far? Couldn't you just figure out how to embed a cryptographically valid signature in the right format, and call it good?
Say you wanted to take down a politician, so you deepfake a video of him slapping his wife on a street corner, and embed a signature indicating it's from some rando cell phone camera with serial XYZ. Claim you verified it, but the source wants to remain anonymous.
I don't think this idea addresses the problems caused by deepfakes, unless anonymous video somehow ceases to be a thing.
Similarly, it could have serious negative consequences, such as people being reluctant to share real video because they don't want to be identified and subject to reprisals (e.g. are in an authoritarian country and have video of human rights abuses).
A screen with double the resolution and twice the framerate should be indistinguishable. Moreover, if you pop the case on the camera and replace the sensor with something fed by DisplayPort (you'd probably need an FPGA to convert DisplayPort to LVDS, SPI, I2C or whatever those sensors use, at speed), that should work too.
That will just move the hack one level further down and will create even more confusion because then you'll have a 'properly signed video stream' as a kind of certificate that the video wasn't manipulated. But you don't really know that, because the sensor input itself could be computer generated and I can think off the bat of at least two ways in which I could do just that.
"That will just make it more difficult to make fakes."
Yes that's kind of the point. Plus I'm sure they could put the whole camera in a tamper resistant case. They could make it very difficult to access the sensor.
Including focus data should make "record a screen" a bit harder too. I guess recording a projection would be pretty hard to detect, but how likely is it that people would go to those lengths, vs using a simple deep fake tool?
Isn't the point to prove physical access to the camera? As in, this stream originated at this TPM, which was sold as part of this camera.
So, the best you get is that the stream shows it wasn't produced with access to a particular camera. Then impersonating a YouTuber, say, requires access to their physical camera.
I'm not so sure. How will you verify a signature if you see a video on TV or on social media? Do you believe these devices are 100% secure and the keys will never be extracted?
That will help someone determine that a fake video didn't come from their own camera.
But most videos where we're worried about deep fakes are videos that come from other people's cameras, where we don't know which signature should be valid, nor whether it should have a signature.
I believe they're saying that the manufacturer will sign the video, not the filmer. Those signatures can then be validated by platforms that the video is uploaded to. The signature isn't supposed to say "this video came from person X"; it's supposed to say "this video in fact came from the real world".
>it will pretty much solve the trust issues deep fakes are causing.
It's a nice piece of tech, I can see it being used in court for example, to strengthen the claim that a video is not a deepfake.
However, that's not "the" problem with deepfakes. Propaganda of all sorts has demonstrated that "Falsehood flies, and the Truth comes limping after it". As in, with the proper deepfakes, you can do massive damage via social media, for example. People re-share without validation all the time, and the existence of deepfakes adds fuel to this fire. And I think that we can't do anything about either.
That's really cool! I've been waiting for tech like this to finally come to light. Honestly expected either Google or Apple to lead the way on it. Have you all worked with the Content Authenticity Initiative at all? It seems like they're looking at ways to develop standards around tech like this to ensure interoperability in the future.
https://contentauthenticity.org/
You're being uncharitable. I read it as a way for a person to prove they took the video, not as a database of person<->signing key. In other words, the government would only be able to see that video 123 was signed by signature 456. It would be up to the poster to prove they own the camera that produced signature 456.
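A minimal sketch of how that proof of ownership could work, assuming a per-device signing key like the one sketched above; the protocol here is hypothetical, not anything Axis or the commenters have specified. The platform issues a fresh random nonce, and only someone with access to the camera's private key can sign it.

    import os
    from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey
    from cryptography.exceptions import InvalidSignature

    # Hypothetical: the poster's camera key (in reality locked inside the device).
    camera_key = Ed25519PrivateKey.generate()
    camera_pub = camera_key.public_key()   # the "signature 456" identity the platform saw

    # 1. The platform challenges the claimant with a fresh random nonce.
    nonce = os.urandom(32)

    # 2. The claimant has the camera sign the nonce (requires access to the device key).
    proof = camera_key.sign(nonce)

    # 3. The platform checks the proof against the public key that signed video 123.
    try:
        camera_pub.verify(proof, nonce)
        print("Claimant controls the camera that signed the video")
    except InvalidSignature:
        print("Proof rejected")

What this means for posters who want to stay anonymous is the open question raised in the surrounding comments.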
A few months ago, the IRS made me verify my identity using some janky video conferencing software where I had to hold up a copy of my passport. The software was so hard to use, that I can't believe average people manage to do it. Now, real-time deep fakes are literally easier to create than using the video verification software itself. This will have interesting societal implications.
In India, digital signature issuing companies use webcam video to authenticate the applicant as well (I don't think even holding a document is required); that digital signature is used everywhere, from signing tax filings to paying taxes.
I hope deep-fake detection software can compete with deep-fake generation software, I've been tracking this need-gap on my problem validation forum for a while now[1].
That said, there are ethical usages of deep-fake videos as well; in fact I might check out this very tool to see if I can use it for 'smiling more in the videos', since remembering to smile during videos is exhausting for me. There are other ethical usages, like limiting the physical effort needed to produce video content for those with disabilities (like myself) [2].
[1] https://needgap.com/problems/21-deep-fake-video-detection-fa...
[2] https://needgap.com/problems/20-deep-fake-video-generating-s...
I mistyped my SSN this year and I wound up doing something similar: I had to take off my glasses and hold my face exactly in the center of the camera, while repeatedly squinting (hopelessly) to try and read the error messages as they alternated between "TOO CLOSE" and "TOO FAR AWAY". I gave up and, luckily, a few hours later found the mistake.
I'm sick and tired of seeing big companies and orgs (Google is the most recent) publish an amazing application of ML but refuse to release the trained model because the model is biased and may be used in a bad way.
I suspect that's mostly an excuse and they just want to keep it to themselves for commercial reasons. I mean I'm sure they are happy not to have to deal with any ethical issues by keeping it private but that's probably a secondary motivation.
It's not like they release their state of the art ML stuff when there aren't any ethical issues anyway, e.g. for voice recognition.
To those who ask about the ethics of releasing something like this, I'd say that this technology already exists, and bad actors probably already can get access if they really want to and are sophisticated enough. Making this available to the general public will spread awareness of the existence of such tools, and can then possibly have a preventive effect.
As someone with a stalker, I can't emphasize this enough. A stalker will go to all sorts of lengths to do bizarre shit. People don't believe it. I would guess governments will do some equivalent thereof.
Democratizing access to things -- including bad things -- has a preventative effect:
1) I can guard against things I know about
2) People take me seriously if something has been democratized
The worst-case scenario is if my stalker got her hands on something like deep fake technology before the police / prosecutor / jury knew it existed. I'd probably be in jail by now if something like that had ever happened. She's tried to frame me twice before. Fortunately, they were transparent. She'll try again.
Best case scenario is that no one has access to this stuff.
Worst case scenario is that only a select group has access, and most people don't know about it.
Universal access is somewhere in between.
I want to second this, as there are so many people that just simply can't believe how much time and energy some people will put into destroying someone else's life.
And when you ask for help, people think you are the insane one because they simply can't believe your story about the insanity of someone else.
I hope you find relief sometime from your stalker. I found it (not a stalker exactly) from letting the person burn themselves with their behavior so many times without me doing or saying anything in return to them (my strategy of non-direct conflict, and it worked for me), that eventually they ran out of people to manipulate and fool.
Your "worst case" depends greatly on who the select group is. Is it movie studios making 9 figure budgets, or is it any obsessed person who can figure out how to find and install software.
Obviously, it's hard to imagine many situations like that, but you can imagine a process that required an 8-figure quantum supercomputer.
not to be too pedantic, but this is not "democratizing access", as that would involve print-outs/usb sticks/discs of the code distributed to people that can't access the internet, accessibility issues, bias considerations etc etc. as such, this is just "access".
I agree with everything you said, but we shouldn't deny that opportunistic bad actors exist. Or it might get on their radar and be exploited. Open source tools also tend to be better maintained, documented and reliable, so the bad guys will have a better tool.
That being said, bringing it to light also has benefits like you said. If the tool is out in the open and state of the art techniques are used, technology to detect its use will also benefit.
I'm reminded of Firesheep - https://en.wikipedia.org/wiki/Firesheep - which came out in 2010. It wrapped session hijacking on WiFi in an easily usable interface. The technique and the vulnerability wasn't anything new, but the extension raised awareness in a big way and really sparked a big push for getting SSL deployed, enabled, and defaulted everywhere.
It only buys time, but that can provide the time needed to create countermeasures and ideally make those very accessible—somewhat similar to responsible vulnerability disclosure.
This piece goes into more detail: https://aviv.medium.com/the-path-to-deepfake-harm-da4effb541... (excerpt from a working paper, part of which was presented at NeurIPS).
The entirety of deep fake technology was developed mostly in mainstream academia using "raising awareness" as an excuse. Paper after paper, model after model, repository after repository. Every single time the excuse was "if we don't do it, someone else will". This was going on for years and the explanation is absolutely laughable. Without countless human-hours put into this by academia, it's pretty obvious that this technology would be nowhere near its current state. Maybe some select military research agencies could develop something analogous. Currently this is accessible to literally every crook and prankster with internet access.
Also, the notion that "raising awareness" is going to prevent deep fakes from being used in practice shows complete and utter disconnect from reality. Most people who are skeptical are already aware of how eminently fakeable all the media really is. Most people who are still unaware will remain so, no matter how many GitHub repositories some dipshits publish.
I agree that raising awareness that tools like this are possible is important and that sufficiently advanced actors can do this anyway; however, I don't think that in this case releasing pre-trained weights to the general public is responsible. This could probably be used to help bypass crypto exchange KYC for money laundering purposes. I'm not sure what the best access model is - email us with a good reason to get access to the weights, perhaps - but what alarms me is there seems to be no consideration for misuse or responsible release at all.
Even without deepfakes, any kind of system relying on a person (or computer) not being tricked by webcam video seems quite questionable. People could still be tricked with spliced video fragments of the real person, or makeup, especially if the set of face expressions used during the "liveness check" is known ahead of time.
I try to imagine how society will deal with this. What if deep fakes are so perfect that anyone can generate real-time footage of anyone else doing anything? As a society we’d need to move out of the virtual and back into the physical. Would that be such a bad thing?
I suspect this perfect deep fake technology might be a real boon to society.
That's like saying that nuclear weapons exist, and bad actors can potentially get them, so let's lower the bar so that anyone can.
Making such tools accessible is reprehensible. It will lead to more bad actors, to less trust in media and in any objective reality, and more erosion of our institutions and society.
There is absolutely no reason whatsoever for this. It's unethical and frankly downright evil.
You’re being absurd.
The technology exists, and it’ll get better. Pretending that it doesn’t exist or banning it won’t make it go away — it’ll just be used by the least scrupulous and most powerful.
Disruptive efforts like this are most upsetting to anxiety-ridden people who think that if they could just control things firmly enough, everyone and everything will be safe.
That kind of thinking doesn’t actually work, though, and it produces a stiflingly rigid, oppressive society that deserves to be upset occasionally.
Institutions that are corrupt and serve themselves (not the public, despite their lip service) need to go.
Trust in media is already very low, and in fact should go lower. Deepfake tech exists, and the fact that it does, and is broadly available to bad actors, should be widely known.
Obviously the best case by far is that such weapons (in this case disinformation weapons) do not exist, but the worst case is them existing but hidden — THAT is the recipe for fooling people in the greatest numbers.
These tools existing, with widespread knowledge of their existence is sort of the least-worst case for the real world in which we live.
Yes, this does have the potential to kill pretty much anything related to video and photography (everything from art to news to documentation), but the same was true when spam was a literal threat to the existence of email. Unless we manage it, video and photography will be trusted for nothing but boring amusement; but better that than mass deception.
Well, genius move by this guy. Create the threat, then sell the cure. The old-school business model we know from anti-virus software.
"I am Cofounder and CEO at Sensity, formerly called Deeptrace, an AI security startup detecting and monitoring online visual threats such as “deepfakes”." (one of the contributors of this repo)
I'm really excited to see what could be done with this! I think the primary benefits of this being released are twofold:
1) It will give security researchers more freely available technology to work with in order to try and fight the malicious use of deepfakes. (I saw some interesting comments in this thread about TPM. It'd be interesting to see what other solutions are out there.)
2) It would raise the overall awareness of the general population about the existence of deepfake technology and the advancements it has made. I would argue that only a small subset of the overall population knows what the term "deepfake" means, and even fewer are aware of how far it has progressed in only a few short years. (I'm not super well versed in the topic myself, I just know that I've heard a lot of progress has been made.)
I think that since this tech is already actively being used by bad actors, the best course of action that we can take until at least a somewhat good counter to it has been adopted (and then quickly defeated) is to make as many people as possible aware that this is something that could affect them, or their families. That this is something that could be used to get someone fired, or hurt, or killed. I think that the more that people are aware of its existence, the less impactful the overall effect of deepfakes becomes. People learn to look twice before making a call on something, because of how easy it has become to fake audio and video.
The inventor of knives? The knife's manufacturer? The store who sold me the knife? I would say the responsibility lies 100% with myself.
I do not think it makes sense to pursue inventors for what happens to their creations, unless they actively encourage misuse.
https://www.npr.org/2022/06/03/1102755195/uvalde-special-ed-...
Sorry can't resist
I don't agree, but people will never reach consensus on this moral topic. So please don't call the other side "at best naive".
We can do that. We do it all the time, assigning varying levels of blame to different parties for different things.
Determining between proximal and ultimate causes, or assigning more weight to one cause or another, is not some impossible burden.
Don't shoot the messenger.
No. You don't.
irrelevant
Make the tools available to everyone. As Elon Musk says, sunlight is the best disinfectant.
https://youtu.be/CpAdOi1Vo5s?t=3786