Peritract · 4 years ago
> Authors and contributing developers assume no liability and are not responsible for any misuse or damage caused by the use of this program.

Anything that can be created, will be created. However, that doesn't free you from all moral culpability. If you create something, make it freely accessible and easy to use, then I think you are partly responsible for its misuse.

I'm not saying that they shouldn't have created this, or that they don't have the right to release it. But to create it, release it, and then pretend that any misuse was entirely separate to you is at best naive.

barnabee · 4 years ago
I’d argue there is a moral imperative to create and release tools like this as free and open source software so that anyone has access to them and can choose whether to use them, rather than only sophisticated and well resourced adversaries.

IMO the creators should feel good about their actions, even if they feel bad or apprehensive about the direction of the world because this technology exists at all.

Peritract · 4 years ago
> there is a moral imperative to create and release tools like this as free and open source software so that _anyone_ has access to them

That's an argument I have a lot of time for, but it needs to be made, rather than - as at present - having the whole issue sidestepped. With any tool like this, I think there's a need for ethical due diligence as much as there is for engineering, and there's no evidence that this aspect has been considered at all.

ethanbond · 4 years ago
Does this apply to nuclear weapons? To biological weapons?

“Tool is expensive” is a real, effective deterrent to frivolous use of a tool. There’s a reason we pool resources into governments, which is explicitly so we (via a government) have capabilities that we (as individuals) don’t have.

thrwawy283 · 4 years ago
I'm not saying it's a good idea to give everyone a gun, but I do like the argument that the disadvantaged have the same opportunity to pull a trigger. I hope humanity learns wisdom as quickly as we innovate. We've made it this far...
formerkrogemp · 4 years ago
We open sourced 3D-printed AK-47s. We disclaim any responsibility for the ensuing mass shootings.

We produced the software to run the drones. We didn't personally deploy them to kill those children.

We built software to guide the rockets. We didn't personally fire them at those hospitals.

We chose to start up our company providing profiles of suspects, built from scraped, publicly available data, to any government agency. What they do with the information isn't our problem.

We wrote the software, but it's not our fault or problem what happens because of who uses it and how they use it. Our intent was good. The market and demand were there. What's wrong with providing a supply? How many software creators feel culpable? Vanishingly few. Who cares, when comp is high?

hackernewds · 4 years ago
This sounds like it's making the gun control argument regularly made in America, and I disagree with it vehemently.
IfOnlyYouKnew · 4 years ago
I can’t see a way this technology could be used defensively. Wide access just leads to more abuse. There’s no principle by which it serves some sort of justice to make some crime more accessible.

It’s surprising how often this argument is made compared to, say, equal-opportunity access to fancy cars or good housing.

bko · 4 years ago
Honest question, do you apply the same reasoning to gun control?

krageon · 4 years ago
This has a clear benign use. Of course it sucks that you can also use it in a hostile manner - but the fact that this tool is publicly available rather than hidden in the pocket of some unscrupulous blackhat means that every space that uses verification with these methods can now incorporate this type of testing. That's a net benefit for society.

I do think disclaimers like this are a little juvenile (it reeks of a US-ian litigation mentality), but you can easily imagine why they put it there. Perhaps instead of the author being less naive, you need to be more empathetic.

tgsovlerkhgsel · 4 years ago
More generically, I think there's a big difference between releasing proofs of concept and releasing fully weaponized tools. While the latter are also usable by red teams, they also give attackers (who often wouldn't have the resources to build the same) the weapons they need.

Personally, I like it when people release proofs of concept, and hate it when they release weaponized tools. Especially when I inevitably end up reading reports where APT groups are using those tools.

Peritract · 4 years ago
This is a tool with a clear benign use, along with a bunch of malign ones. If you create something with significant potential for harm, then you should at least think about that, the potential consequences, and possible mitigations.

I wouldn't term it a failure of empathy to ask that people consider the impact of their actions on others. I totally get why they have the disclaimer, and I wouldn't ask them to remove it, but I don't think it's enough. Given the clear potential for harm here, the potential uses and misuses of this tool should have been addressed directly.

RektBoy · 4 years ago
What do you do? IMHO, the only person naive here is you.

People should be happy that there are still white-hats reporting exploits, even for no profit. I personally switched to the gray market; I can't take the shit we get from companies anymore.

This tool was released publicly only to bring awareness to the topic. Everybody else who needs to exploit this has these tools developed in private.

ramblerman · 4 years ago
> If you create something, make it freely accessible and easy to use, then I think you are partly responsible for its misuse.

That's a dangerous precedent. Would you apply the same logic to a kitchen knife? Or, if for some reason only freemium products count (not sure why), to a pentesting tool?

I understand the underlying point you are trying to make, but what are you proposing as an alternative, exactly? Who gets to decide which products fall into a gray zone while others are only for bad use? We already see this kind of shoddy thinking leading to keeping DALL-E 2 out of the public's hands (or at least that is their claim).

Peritract · 4 years ago
A knife is a multi-purpose tool that can be used as a weapon, and one that has so many important non-weapon uses that not having it would cause far more harm than having it would. This is intended primarily as a weapon, even when used defensively. There's an important qualitative difference there, though it's tempting to gallop down the slippery slope.

Regardless, I'm not proposing to ban either knives or this tool - I've been very clear about that. I do think that - with anything that has potential for harm - it's important for people to consider the possible consequences and to actually engage in the discussion about usage, rather than either washing their hands of the issue or declaring that any consideration will soon lead to arresting everyone for everything.

This is a path we've trodden countless times. With some things, we've collectively decided that no controls are necessary. With other things - poisons, nuclear weapons, a host of things depending on location - society has enacted controls of various efficacy and validity.

The responsible choice, regardless of whether you end up being for or against any given restriction on [thing], is to spend time thinking about it, discussing it, and - particularly when you've chosen to release something - acknowledging the potential issues.

Francesco_deep · 4 years ago
I sell airplanes. Am I to feel responsible for 9/11?
solsane · 4 years ago
I disagree. The moral responsibility really rests on the person who uses the tool.

For instance, I once cheated by using gcc/godbolt to generate assembly output for a class from C code. By this logic, Richard Stallman should be blamed for my misconduct.

There are any number of reasons for superimposing another face onto your own, many of which are simply good fun. If you choose to use this for scamming or perverted reasons, so be it.

Moral posturing aside, perhaps there could be an invisible watermark or something included by default to easily identify less technically inclined actors as users of this tool.

Hendrikto · 4 years ago
If I stab someone with a knife, who is responsible?

The inventor of knives? The knife's manufacturer? The store that sold me the knife? I would say the responsibility lies 100% with me.

I do not think it makes sense to pursue inventors for what happens to their creations, unless they actively encourage misuse.

jumpkick · 4 years ago
Parents of one of the victims of the recent elementary school shooting in Uvalde, Texas are attempting, or preparing, to sue the manufacturer of the semiautomatic rifle used in that event. Will they sue, will they win? I don't know.

Do guns kill people or do people kill people? Could be a relevant analogy.

https://www.npr.org/2022/06/03/1102755195/uvalde-special-ed-...

thingification · 4 years ago
burrows · 4 years ago
Responsibility is a ghost. Like God, Love, Honor, etc. You’re not actually talking about anything.
FollowingTheDao · 4 years ago
They arrest people for making and selling meth every day. You have a bias because making deep fakes is not illegal yet. Knives are used for cooking; what alternative good do deep fakes provide?
yieldcrv · 4 years ago
All I learned from this response was to just release tools like this on Darknet Marketplaces and maybe Telegram and just forget the disclaimer
ospray · 4 years ago
The main value in releasing tools like this is to demonstrate weakness in our current security controls. A key weakness of biometrics is that there is no secret data. Open source tooling like this helps people understand that.
polartx · 4 years ago
I wish I didn’t have to scroll this far to find your rational perspective. Let's take the famous LockPickingLawyer of YouTube: is he responsible for every crime where the thief defeats a lock that he has demonstrated the weaknesses of? I would say “no!”. Exposing a weakness puts the onus of securing said weaknesses on those that sell technology/devices/services that market themselves as “secure”.
wdroz · 4 years ago
> If you create something, make it freely accessible and easy to use, then I think you are partly responsible for its misuse.

I don't agree, but people will never reach consensus on this moral topic. So please don't call the other side "at best naive".

Peritract · 4 years ago
If there's no duty to consider potential harm when releasing tools, then why would there be a duty to avoid criticising people who release them?
ls15 · 4 years ago
Unless we cut off the chain of responsibility somewhere, the creators of the programming language, the computer and its components, as well as the people who have designed the computer and those who mined the required minerals are responsible as well.
Peritract · 4 years ago
> Unless we cut off the chain of responsibility somewhere

We can do that. We do it all the time, assigning varying levels of blame to different parties for different things.

Distinguishing between proximal and ultimate causes, or assigning more weight to one cause or another, is not some impossible burden.

alexjurkiewicz · 4 years ago
Good thing nobody is suggesting we *checks notes* jail silica miners for building the Deepfake Offensive Toolkit.
stef25 · 4 years ago
What's the difference between this and Metasploit, sqlmap, ... ? Not saying you're wrong, it's just that while these tools have valid and legal uses (pentesting) they're also used in black hat scenarios and one could say the same about DOT.
jalk · 4 years ago
Which is why Metasploit triggered the same discussion.
carnitine · 4 years ago
That’s a legal disclaimer, not a moral one.
Peritract · 4 years ago
I think that's part of what makes me uneasy about this - it's a solely legal disclaimer for something with a moral dimension, and it reads to me like the abdication of responsibility, rather than just a shield against litigation.
jacquesm · 4 years ago
It is an attempt at a legal disclaimer. It may not hold up in court.
forgingahead · 4 years ago
We're actually super fortunate that the development of deepfake technologies has happened relatively out in the open, with source code, concepts, and pre-trained models often being readily shared. This allows for a broader-based understanding of what is possible, and hopefully lets folks develop ways to inoculate themselves, or at least build some societal-level resistance to being hoodwinked. If this tech were only developed in secret, and used in a targeted manner, who knows what kind of large-scale manipulations would be underway.
tehbeard · 4 years ago
So, if we take your naive position to its logical conclusion: should the Apache Foundation be considered partly responsible for all the malware distributed via their HTTP server software?
IfOnlyYouKnew · 4 years ago
That’s not a naive take but a stupid one, and it wasn’t made by OP. Slippery slope is a fallacy, not an argument.
Peritract · 4 years ago
There is a conceptual difference between releasing a weapon (even if for research/defensive purposes) and releasing tools which could later be adapted negatively. That's not to say that weapons should never be created/released, but that there is an extra onus on the creators to at least consider the harm they are enabling and possible mitigations.

The Apache Foundation - great example, thank you - despite not creating weapons, has clearly put thought into how their work interacts with wider society and how to ensure positive outcomes as far as possible [1]. People absolutely don't have to agree on moral issues, but it is irresponsible not to have considered them.

[1] https://apache.org/theapacheway/index.html

jlg23 · 4 years ago
> pretend that any misuse was entirely separate to you is at best naive.

No, it is naive to pretend facial recognition is worth anything when creating tools to defeat it is a mere academic exercise in reading papers and implementing the algorithms they describe.

Don't shoot the messenger.

belter · 4 years ago
Should Ron Rivest, Adi Shamir, and Leonard Adleman have responsibility if criminals use their discovery?
disintegore · 4 years ago
Is this the first pen test tool you've seen? No, they aren't responsible, morally or legally or however you want to slice it. At all. Not one bit.

wiz21c · 4 years ago
Reminds me of Oppenheimer: "I am become Death, the shatterer of worlds."
throwaway71271 · 4 years ago
you can hear Oppenheimer himself: https://www.youtube.com/watch?v=lb13ynu3Iac
rockemsockem · 4 years ago
Do you ask for the same level of culpability from a hammer manufacturer? Should hammer makers have trouble sleeping at night because someone bludgeoned another person to death with a hammer made by them?

No. You don't.

A4ET8a8uTh0 · 4 years ago
Hmm. It is a tough one.

I am closer to your line of thinking than not, but at the same time, from where I sit, it seems that data monitoring has already gone too far, and the regular user has to have a way to circumvent it.

For that reason alone, this tool, even if it results in some abuse, just levels the playing field.

hegzploit · 4 years ago
The same applies to a lot of security-oriented tools, most notably Metasploit, which faced these same accusations when it came out long ago. Nowadays it's no big deal, as exploitation frameworks are more accessible. The same applies to a lot of niche technologies that can be misused.
bufferoverflow · 4 years ago
Do you think knife manufacturers are responsible for stabbings and knife attacks?
golergka · 4 years ago
You are partly a cause of the misuse, but you are not morally responsible for it. A causal relationship doesn't necessarily imply a moral one.
_andrei_ · 4 years ago
> then I think you are partly responsible for its misuse

irrelevant

sedatk · 4 years ago
Exposing these tools to broader use will accelerate the development of mitigations, which is a net win for regular users IMHO.
systemvoltage · 4 years ago
This is morally corrupt, dangerous and would lead to an oppressive, violent society. A knife maker killed for murders, a watch maker killed for the tyranny of time.

We should do the exact opposite. Science and reason would cease to exist otherwise. Individuals wouldn't need to be held accountable, innovators/engineers/scientists/entrepreneurs would be. End of a free society.

Have you thought of the chilling effect this might bring?

Peritract · 4 years ago
This is not a good faith argument.

I've been extremely clear throughout this comment section - I don't want censorship, I don't think this should be banned, and nowhere have I called for legal consequences for releasing this. I want people to think about their actions, and to discuss the ethical issues arising from them.

And yet you've jumped to calling me morally corrupt because I want to murder craftsmen. That's not a reasonable reading of my comments, or in any way a proportionate response.

If you want to talk about chilling effects and the importance of science and reason, how would you describe your comments? You're shouting down discussion with wild accusations.

trashtester · 4 years ago
What the authors _could_ do is add some kind of secret watermark that would only be shared with select government agencies, and perhaps software companies that could be trusted to keep it secret.

That way, the software could be used for pen testing, but it could cause a silent trigger to go off.

That could even be a way to monetize the software....
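
The simplest version of that idea is a fixed secret pattern hidden in pixel least-significant bits. Here's a minimal sketch in Python (purely illustrative: the payload and function names are made up, and an LSB mark like this would not survive h.264 re-encoding — a robust scheme would embed something like a spread-spectrum mark in the frequency domain, which is part of why keeping such a scheme both effective and secret is hard):

    import numpy as np

    SECRET = b"dot-mark"  # hypothetical payload, shared only with trusted verifiers

    def embed(frame: np.ndarray, payload: bytes = SECRET) -> np.ndarray:
        # Overwrite the least-significant bit of the first len(payload)*8
        # pixel values; visually imperceptible in 8-bit imagery.
        bits = np.unpackbits(np.frombuffer(payload, dtype=np.uint8))
        out = frame.copy().ravel()
        out[:bits.size] = (out[:bits.size] & 0xFE) | bits
        return out.reshape(frame.shape)

    def detect(frame: np.ndarray, payload: bytes = SECRET) -> bool:
        # Check whether the expected bit pattern is present in the LSBs.
        bits = np.unpackbits(np.frombuffer(payload, dtype=np.uint8))
        return bool(np.array_equal(frame.ravel()[:bits.size] & 1, bits))

    frame = np.random.randint(0, 256, (480, 640, 3), dtype=np.uint8)
    print(detect(embed(frame)), detect(frame))  # True False (almost surely)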

kobalsky · 4 years ago
This authentication mechanism is flawed and needs to go ASAP. These guys are doing something positive by speeding up the process.
nurettin · 4 years ago
It is not a moral statement, just lawbabble.
bongoman37 · 4 years ago
By this argument Linus Torvalds would be responsible for everything from child porn to drug marketplaces to nuclear weapons.

davidguetta · 4 years ago
Having morals when the other side doesn't is a weakness.

Make the tools available to everyone. As Elon Musk says, sunlight is the best disinfectant.

Peritract · 4 years ago
Having morals when the other side doesn't is the best possible reason to be on different sides.
throw9871928 · 4 years ago
I work at Axis, which makes surveillance cameras.[0] This comment is my own, and is not on behalf of the company. I'm using a throwaway account because I'd rather not be identified (and because surveillance is quite controversial here).

Axis has developed a way to cryptographically sign video, using TPMs (Trusted Platform Module) built into the cameras and embedding the signature into the h.264 stream.[1] The video can be verified on playback using an open-source video player.[2]

I hope this sort of video signing will be mainstream in all cameras in the future (i.e. cellphones etc), as it will pretty much solve the trust issues deep fakes are causing.

[0] https://www.axis.com/

[1] https://www.axis.com/newsroom/article/trust-signed-video

[2] https://www.axis.com/en-gb/newsroom/press-release/axis-commu...
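
For a sense of how such a scheme hangs together, here is a minimal sketch in Python (illustrative only, not Axis's actual design: in the real system the private key would be generated and sealed inside the camera's TPM and the signatures embedded as metadata in the h.264 stream itself; all names below are made up):

    import hashlib
    from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

    signing_key = Ed25519PrivateKey.generate()  # in reality: sealed inside the camera's TPM
    public_key = signing_key.public_key()       # published/attested by the manufacturer

    def sign_chunk(chunk: bytes, prev_digest: bytes):
        # Chain each chunk's digest to its predecessor so chunks can't be
        # dropped, reordered, or replaced without breaking verification.
        digest = hashlib.sha256(prev_digest + chunk).digest()
        return digest, signing_key.sign(digest)

    def verify_chunk(chunk: bytes, prev_digest: bytes, signature: bytes) -> bytes:
        digest = hashlib.sha256(prev_digest + chunk).digest()
        public_key.verify(signature, digest)  # raises InvalidSignature on tampering
        return digest

    # Camera side: sign each group of frames as it is encoded.
    stream = [b"frame-group-1", b"frame-group-2", b"frame-group-3"]
    digest, signed = b"\x00" * 32, []
    for chunk in stream:
        digest, sig = sign_chunk(chunk, digest)
        signed.append((chunk, sig))

    # Player side: re-derive the chain and check every signature.
    digest = b"\x00" * 32
    for chunk, sig in signed:
        digest = verify_chunk(chunk, digest, sig)
    print("stream verified")

The hash chaining is the important design choice: signing each chunk in isolation would let an attacker splice signed chunks together, while chaining ties every chunk to everything that came before it.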

jonathanstrange · 4 years ago
It shouldn't be too hard to film a deepfake movie from a screen or projection in a way that doesn't make it obvious it was filmed. That way, the cryptographic signature will even lend extra authenticity to the deepfake!
tablespoon · 4 years ago
>> I hope this sort of video signing will be mainstream in all cameras in the future (i.e. cellphones etc), as it will pretty much solve the trust issues deep fakes are causing.

> It shouldn't be too hard to film a deepfake movie from a screen or projection that don't make it obvious it was filmed. That way, the cryptographic signature will even lend extra authenticity to the deepfake!

Would you even have to go that far? Couldn't you just figure out how to embed a cryptographically valid signature in the right format, and call it good?

Say you wanted to take down a politician, so you deepfake a video of him slapping his wife on a street corner, and embed a signature indicating it's from some rando cell phone camera with serial XYZ. Claim you verified it, but the source wants to remain anonymous.

I don't think this idea addresses the problems caused by deepfakes, unless anonymous video somehow ceases to be a thing.

Similarly, it could have serious negative consequences, such as people being reluctant to share real video because they don't want to be identified and subject to reprisals (e.g. are in an authoritarian country and have video of human rights abuses).

galangalalgol · 4 years ago
A screen with double the resolution and twice the framerate should be indistinguishable. Moreover, if you pop the case on the camera and replace the sensor with something fed by DisplayPort (you'd probably need an FPGA to convert DisplayPort to LVDS, SPI, I2C, or whatever those sensors use, at speed), that should work too.
yjftsjthsd-h · 4 years ago
Isn't it fun how the analog hole works both ways? :)
jacquesm · 4 years ago
That will just move the hack one level further down and will create even more confusion because then you'll have a 'properly signed video stream' as a kind of certificate that the video wasn't manipulated. But you don't really know that, because the sensor input itself could be computer generated and I can think off the bat of at least two ways in which I could do just that.
IshKebab · 4 years ago
"That will just make it more difficult to make fakes."

Yes, that's kind of the point. Plus I'm sure they could put the whole camera in a tamper-resistant case. They could make it very difficult to access the sensor.

Including focus data should make "record a screen" a bit harder too. I guess recording a projection would be pretty hard to detect, but how likely is it that people would go to those lengths, vs using a simple deep fake tool?

pbhjpbhj · 4 years ago
Isn't the point to prove physical access to the camera? As in, this stream originated at this TPM, which was sold as part of this camera.

So the best you get is showing whether or not a stream was produced with access to a particular camera. Then impersonating a YouTuber, say, requires access to their physical camera.

Tepix · 4 years ago
> it will pretty much solve the trust issues

I'm not so sure. How will you verify a signature if you see a video on TV or on social media? Do you believe these devices are 100% secure and the keys will never be extracted?

throw9871928 · 4 years ago
Nothing is 100% secure, but the point of using a TPM is to prevent extraction of the keys.
Workaccount2 · 4 years ago
It will be 6 months before hardware bypass devices show up on alibaba.
driverdan · 4 years ago
I don't see how that would be useful here. I'm not giving out info about my webcam to anyone. There would be no way to verify the signature.
rockemsockem · 4 years ago
Manufacturer signature, not your signature.
kube-system · 4 years ago
That will help someone determine that a fake video didn't come from their own camera.

But most videos where we're worried about deep fakes are videos that come from other people's cameras, where we don't know which signature should be valid, nor whether it should have a signature.

rockemsockem · 4 years ago
I believe they're saying that the manufacturer will sign the video, not the filmer. Those signatures can then be validated by platforms that the video is uploaded to. The signature isn't supposed to say "this video came from person X" it's supposed to say "this video in fact came from the real world".
npteljes · 4 years ago
>it will pretty much solve the trust issues deep fakes are causing.

It's a nice piece of tech, I can see it being used in court for example, to strengthen the claim that a video is not a deepfake.

However, that's not "the" problem with deepfakes. Propaganda of all sorts has demonstrated that "Falsehood flies, and the Truth comes limping after it". As in, with the proper deepfakes, you can do massive damage via social media, for example. People re-share without validation all the time, and the existence of deepfakes adds fuel to this fire. And I think that we can't do anything about either.

rockemsockem · 4 years ago
That's really cool! I've been waiting for tech like this to finally come to light. Honestly, I expected either Google or Apple to lead the way on it. Have you all worked with the Content Authenticity Initiative at all? It seems like they're looking at ways to develop standards around tech like this to ensure interoperability in the future.

https://contentauthenticity.org/

tinus_hn · 4 years ago
The surveillance state's wet dream, where you can just look up who took that ‘unfair’ video of your agent breaking the rules.
colejohnson66 · 4 years ago
You're being uncharitable. I read it as a way for a person to prove they took the video, not as a database of person<->signing key. In other words, the government would only be able to see that video 123 was signed by signature 456. It would be up to the poster to prove they own the camera that produced signature 456.
culopatin · 4 years ago
Hey, off topic, but I love your cameras and the attention to detail you put into them, especially the details that help the techs who install them.
yosito · 4 years ago
A few months ago, the IRS made me verify my identity using some janky video conferencing software where I had to hold up a copy of my passport. The software was so hard to use that I can't believe average people manage to do it. Now, real-time deep fakes are literally easier to create than using the video verification software itself. This will have interesting societal implications.
Abishek_Muthian · 4 years ago
In India, Digital Signature issuing companies use webcam video to authenticate the applicant as well (I don't think even holding a document is required); that digital signature is then used everywhere, from signing tax filings to paying taxes.

I hope deep-fake detection software can compete with deep-fake generation software; I've been tracking this need-gap on my problem validation forum for a while now[1].

That said, there are ethical usages of deep-fake videos as well; in fact, I might check out this very tool to see if I can use it for 'smiling more in the videos', since remembering to smile during videos is exhausting for me. There are other ethical usages, like limiting the physical effort needed to produce video content for those with disabilities (like myself)[2].

[1] https://needgap.com/problems/21-deep-fake-video-detection-fa...

[2] https://needgap.com/problems/20-deep-fake-video-generating-s...

aimor · 4 years ago
I mistyped my SSN this year and I wound up doing something similar: I had to take off my glasses and hold my face exactly in the center of the camera, while repeatedly squinting (hopelessly) to try and read the error messages as they alternated between "TOO CLOSE" and "TOO FAR AWAY". I gave up and, luckily, a few hours later found the mistake.

mustyoshi · 4 years ago
I'm glad they released this.

I'm sick and tired of seeing big companies and orgs (Google is the most recent) publish an amazing application of ML but refuse to release the trained model because the model is biased and may be used in a bad way.

IshKebab · 4 years ago
I suspect that's mostly an excuse and that they just want to keep it to themselves for commercial reasons. I mean, I'm sure they're happy not to have to deal with any ethical issues by keeping it private, but that's probably a secondary motivation.

It's not like they release their state of the art ML stuff when there aren't any ethical issues anyway, e.g. for voice recognition.

sva_ · 4 years ago
To those who ask about the ethics of releasing something like this, I'd say that this technology already exists, and bad actors probably already can get access if they really want to and are sophisticated enough. Making this available to the general public will spread awareness of the existence of such tools, and can then possibly have a preventive effect.
blagie · 4 years ago
As someone with a stalker, I can't emphasize this enough. A stalker will go to all sorts of lengths to do bizarre shit. People don't believe it. I would guess governments will do some equivalent thereof.

Democratizing access to things -- including bad things -- has a preventative effect:

1) I can guard against things I know about

2) People take me seriously if something has been democratized

The worst-case scenario is if my stalker got her hands on something like deep fake technology before the police / prosecutor / jury knew it existed. I'd probably be in jail by now if something like that had ever happened. She's tried to frame me twice before. Fortunately, those attempts were transparent. She'll try again.

Best case scenario is that no one has access to this stuff.

Worst case scenario is only a select group have access, and most people don't know about it.

Universal access is somewhere in between.

RobertRoberts · 4 years ago
I want to second this, as there are so many people that just simply can't believe how much time and energy some people will put into destroying someone else's life.

And when you ask for help, people think you are the insane one because they simply can't believe your story about the insanity of someone else.

I hope you find relief from your stalker sometime. I found it (not a stalker exactly, in my case) by letting the person burn themselves with their behavior so many times, without me doing or saying anything in return (my strategy of non-direct conflict, and it worked for me), that eventually they ran out of people to manipulate and fool.

HWR_14 · 4 years ago
Your "worst case" depends greatly on who the select group is. Is it movie studios making 9 figure budgets, or is it any obsessed person who can figure out how to find and install software.

Obviously, it's hard to imagine many situations like that, but you can imagine a process that requires an 8-figure quantum supercomputer.

kmlx · 4 years ago
> Democratizing access to things

not to be too pedantic, but this is not "democratizing access", as that would involve print-outs/usb sticks/discs of the code distributed to people that can't access the internet, accessibility issues, bias considerations etc etc. as such, this is just "access".

bko · 4 years ago
I agree with everything you said, but we shouldn't deny that opportunistic bad actors exist. Or it might get on their radar and be exploited. Open source tools also tend to be better maintained, documented, and reliable, so the bad guys will have a better tool.

That being said, bringing it to light also has benefits like you said. If the tool is out in the open and state of the art techniques are used, technology to detect its use will also benefit.

belter · 4 years ago
You are right. I saw some guy that looked like Matt Damon trying to sell me some crypto coins...
karmakurtisaani · 4 years ago
It would be so ridiculously crazy that an established A-list actor would willingly promote those crypto scams. Must have been a deep fake!
HWR_14 · 4 years ago
For what it's worth, actors seem to be securing themselves against this using IP rights.
roughly · 4 years ago
I'm reminded of Firesheep - https://en.wikipedia.org/wiki/Firesheep - which came out in 2010. It wrapped session hijacking on WiFi in an easily usable interface. The technique and the vulnerability wasn't anything new, but the extension raised awareness in a big way and really sparked a big push for getting SSL deployed, enabled, and defaulted everywhere.
avivo · 4 years ago
Ease of access does matter.

It only buys time, but that can provide the time needed to create countermeasures and ideally make those very accessible—somewhat similar to responsible vulnerability disclosure.

This piece goes into more detail: https://aviv.medium.com/the-path-to-deepfake-harm-da4effb541... (excerpt from a working paper, part of which was presented at NeurIPS).

gambler · 4 years ago
The entirety of deep fake technology was developed mostly in mainstream academia using "raising awareness" as an excuse. Paper after paper, model after model, repository after repository. Every single time the excuse was "if we don't do it, someone else will". This was going on for years and the explanation is absolutely laughable. Without countless human-hours put into this by academia, it's pretty obvious that this technology would be nowhere near its current state. Maybe some select military research agencies could develop something analogous. Currently this is accessible to literally every crook and prankster with internet access.

Also, the notion that "raising awareness" is going to prevent deep fakes from being used in practice shows a complete and utter disconnect from reality. Most people who are skeptical are already aware of how eminently fakeable all the media really is. Most people who are still unaware will remain so, no matter how many GitHub repositories some dipshits publish.

oliver910 · 4 years ago
I agree that raising awareness that tools like this are possible is important, and that sufficiently advanced actors can do this anyway; however, I don't think that in this case releasing pre-trained weights to the general public is responsible. This could probably be used to help bypass crypto exchange KYC for money-laundering purposes. I'm not sure what the best access model is - email us with a good reason to get access to the weights, perhaps - but what alarms me is that there seems to have been no consideration of misuse or responsible release at all.
Karliss · 4 years ago
Even without deepfakes, any kind of system relying on a person (or computer) not being tricked by webcam video seems quite questionable. People could still be tricked with spliced video fragments of the real person, or with makeup, especially if the set of facial expressions used during the "liveness check" is known ahead of time.
wussboy · 4 years ago
I try to imagine how society will deal with this. What if deep fakes are so perfect that anyone can generate real-time footage of anyone else doing anything? As a society we’d need to move out of the virtual and back into the physical. Would that be such a bad thing?

I suspect this perfect deep fake technology might be a real boon to society.

verisimi · 4 years ago
This technology certainly already exists and has probably been around for a long time.

https://youtu.be/CpAdOi1Vo5s?t=3786

light_hue_1 · 4 years ago
That's like saying that nuclear weapons exist, and bad actors can potentially get them, so let's lower the bar so that anyone can.

Making such tools accessible is reprehensible. It will lead to more bad actors, to less trust in media and in any objective reality, and more erosion of our institutions and society.

There is absolutely no reason whatsoever for this. It's unethical and frankly downright evil.

teakettle42 · 4 years ago
On par with nuclear weapons? Downright evil?

You’re being absurd.

The technology exists, and it’ll get better. Pretending that it doesn’t exist or banning it won’t make it go away — it’ll just be used by the least scrupulous and most powerful.

Disruptive efforts like this are most upsetting to anxiety-ridden people who think that if they could just control things firmly enough, everyone and everything will be safe.

That kind of thinking doesn’t actually work, though, and it produces a stiflingly rigid, oppressive society that deserves to be upset occasionally.

verisimi · 4 years ago
But who are the actors worse than the mafia that is already in control?

Institutions that are corrupt and serve themselves (not the public, despite their lipservice) need to go.

toss1 · 4 years ago
>>It will lead to . . . less trust in media

Trust in media is already very low, and in fact should go lower. Deepfake tech exists, and the fact that it does, and is broadly available to bad actors, should be widely known.

Obviously the best case by far is that such weapons (in this case disinformation weapons) do not exist, but the worst case is them existing but hidden — THAT is the recipe for fooling people in the greatest numbers.

These tools existing, with widespread knowledge of their existence is sort of the least-worst case for the real world in which we live.

Yes, this does have the potential to kill pretty much anything related to video and photography (everything from art to news to documentation), but the same was true when spam was a literal threat to the existence of email. Unless we manage it, video and photography will be trusted for nothing but boring amusement; but better that than mass deception.

giorgiop · 4 years ago
Dot was used to perform vulnerability assessments on many biometric KYC vendors in 2022. The Verge covered this study in this article: https://www.theverge.com/2022/5/18/23092964/deepfake-attack-...
pragmatick · 4 years ago
That article is linked in the second paragraph of the readme.
wafriedemann · 4 years ago
Well, genius move by this guy. Create the threat, then sell the cure. The old-school business model we know from anti-virus software.

"I am Cofounder and CEO at Sensity, formerly called Deeptrace, an AI security startup detecting and monitoring online visual threats such as “deepfakes”." (one of the contributors of this repo)

nicce · 4 years ago
Well, this kind of threat was just a matter of time, if it did not exist already, and public knowledge is for the greater good.
registeredcorn · 4 years ago
I'm really excited to see what could be done with this! I think the primary benefits of this being released are twofold:

1) It will give security researchers more freely available technology to work with in order to try and fight the malicious use of deepfakes. (I saw some interesting comments in this thread about TPM. It'd be interesting to see what other solutions are out there.)

2) It would raise the overall awareness of the general population about the existence and advancement of deepfake technology. I would argue that only a small subset of the overall population knows what the term "deepfake" means, and even fewer are aware of how far it has progressed in only a few short years. (I'm not super well versed in the topic myself; I just know that I've heard a lot of progress has been made.)

I think that since this tech is already actively being used by bad actors, the best course of action we can take, at least until a somewhat good counter to it has been adopted (and then quickly defeated), is to make as many people as possible aware that this is something that could affect them or their families. That this is something that could be used to get someone fired, or hurt, or killed. I think that the more people are aware of its existence, the less impactful the overall effect of deepfakes becomes. People learn to look twice before making a call on something, because of how easy it has become to fake audio and video.