3fe9a03ccd14ca5 · 6 years ago
I find it amusing that Congress has suddenly become so interested in the dangers of deep fakes. When they started going on and on about the dangers I thought “why is this dude so worried?”.

Then of course the Epstein stuff becomes news[1] months later and the cynical part of me can’t help but think they’re looking for a defense if they ever partied at his island or ranch.

1. https://nypost.com/2019/11/18/jeffrey-epstein-accuser-claims...

ISL · 6 years ago
Deepfakes are readily understandable as a problem for politicians. Politicians are experts in public messaging; this technology has the ability to alter previously-immutable messages.
jobigoud · 6 years ago
I think the bigger problem is it provides deniability and removes accountability. The "I never said that" kind.
noelsusman · 6 years ago
We've had the ability to make convincing fake videos for a long time. There are whole industries devoted to the task.
TearsInTheRain · 6 years ago
Is there any benefit to deepfake technology besides maybe entertainment?
bduerst · 6 years ago
Why are deepfakes now a huge problem, or any worse than photoshop? This technology has existed for decades.
ouid · 6 years ago
Of all the places where you could find "epstein news", you chose to link the New York Post.
malvosenior · 6 years ago
The New York Post has actually been one of the most consistent places for breaking Epstein news. Most major news outlets have employees implicated with Epstein and have reported (or not) accordingly.

It's quite telling when the corruption and scandal are so large that only "fringe" outlets are able to report on it. Cf. Ronan Farrow and Weinstein.

3fe9a03ccd14ca5 · 6 years ago
I would honestly love to link to a trusted MSM report, but these outlets have been burying Epstein stories[1]. It's awful.

1. https://www.newsweek.com/abc-jeffrey-epstein-story-amy-robac...

rriepe · 6 years ago
Same. The cynical part of me was looking to see if this project came out of MIT Media Lab (it didn't, just MIT)
malvosenior · 6 years ago
The President of MIT (Rafael Reif), while not directly implicated in fraternizing with Epstein, did work to actively cover up the Joi Ito scandal and has essentially side-stepped all responsibility. There are still students protesting and calling for his resignation.
canada_dry · 6 years ago
Deep fakes are yet another prime example of how the writing is on the wall (i.e. it's clear that this technology can/will be used for nefarious purposes - esp. by our enemies) and yet mitigations - through regulation and enhanced standards - won't be pursued until long after the horses have left the barn.

I don't know how/what form this should take, but an older analogy might be how colour photocopier manufacturers embed microdots into each reproduction so counterfeit bills can be traced to the equipment that produced them.

gojomo · 6 years ago
The sorts of bad actors or 'enemies' who'd deploy deep fakes for advantage aren't typically discouraged by 'regulations'.

On the other hand, there is a reasonable amount of active research on both detecting current faking-techniques, and methods of adding cryptographic attestation from point-of-recording.

I don't believe "detection" can win in the end, as widespread detection-technology can generally be used to tune better fabrications.

So ultimately we'll have to rely on: "do we trust the specific chain-of-people-and-sensors-and-relays that brought this evidence to our purview?" And various kinds of constantly-applied cryptographic signing & timestamping can help with that, though interpreting the challenging cases will require a lot of abstract expertise. (So again, for most people, it may reduce to: "who do you choose to trust?")
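To make the signing-and-timestamping idea concrete, here's a minimal sketch (not from the comment; all names are hypothetical). It chains each frame's hash to the previous one and attests to the chain with an HMAC; a real point-of-recording design would use asymmetric signatures (e.g. Ed25519) so verifiers never hold the signing key.

```python
import hashlib
import hmac
import json

# Hypothetical: a secret key embedded in the camera at manufacture time.
DEVICE_KEY = b"secret-key-burned-into-camera-hardware"

def attest(frame_bytes: bytes, prev_digest: str) -> dict:
    """Chain this frame's hash to the previous record and sign the result."""
    digest = hashlib.sha256(prev_digest.encode() + frame_bytes).hexdigest()
    record = {"digest": digest, "prev": prev_digest}
    sig = hmac.new(DEVICE_KEY, json.dumps(record, sort_keys=True).encode(),
                   hashlib.sha256).hexdigest()
    return {**record, "sig": sig}

def verify(record: dict) -> bool:
    """Recompute the signature over everything except the sig field."""
    claimed = dict(record)
    sig = claimed.pop("sig")
    expected = hmac.new(DEVICE_KEY, json.dumps(claimed, sort_keys=True).encode(),
                        hashlib.sha256).hexdigest()
    return hmac.compare_digest(sig, expected)

r = attest(b"frame-0-pixels", prev_digest="genesis")
assert verify(r)                            # untouched record checks out
assert not verify({**r, "digest": "0" * 64})  # tampered record fails
```

The chaining means an attacker can't splice, drop, or reorder frames without breaking every later record, which is the "chain-of-people-and-sensors-and-relays" property in miniature.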

throwaway5752 · 6 years ago
Yes, we should not underestimate plain old social solutions. I remember when you used to answer the phone every time because caller id didn't exist. Who answers their phone now? This just mainstreams exceptional validation workflows that exist in finance for establishing identity and preventing phishing and other forms of identity attack.

We just won't be able to trust anyone is actually saying anything except after confirmation via non-repudiable channels.

jfengel · 6 years ago
I don't think detection would really help even if we had it. Even now, most fake-news stories are obviously fake. They have no subtlety; they don't fool anybody paying even the faintest shred of attention.

They'll be used on people who are already perfectly happy to believe fake printed quotes from obviously unreliable sources. When challenged, they'll refer to conventional partisan media citing those sources as proof. Ordinarily showing video would increase their certainty, but they're already absolutely certain. It'll just be a more entertaining way of delivering it to them.

Detection, no matter how definitive, can't deter that. They're already perfectly happy with their trusted sources.

extrapickles · 6 years ago
I wonder how long it will be before someone stuffs a crypto module into the same package as a PDM microphone to provide a digitally signed audio stream.
krick · 6 years ago
This comment kinda scares me, because it is exactly that panicky reaction that makes all these stupid regulations come to life, and you are reminding me that it's not really "just stupid politicians", but in fact scared, easy-to-manipulate crowds behind them.
undersuit · 6 years ago
What if we put the regulations and enhanced standards on to the people who we are most worried will abuse our trust? We'll never be able to stop a malicious actor from Deepfaking our leaders and spreading disinformation, but at the very least we should have something in place to make sure our leaders aren't the malicious actors using Deepfakes and react accordingly with punishments when they are.
umvi · 6 years ago
So you think it would be better if politicians have a knee jerk emotional reaction like with 3d printed guns and start shutting down research and making it a felony to publish deep fake videos?
canada_dry · 6 years ago
Another recent example where regulations were (relatively) quick to address some new tech/gadget: drones.

It was recognized by law makers that drones could be used for nefarious purposes and needed some form of regulation.

I'm not saying that drone registration thwarts illegal use, but at least something was done by regulators.

jrockway · 6 years ago
People have been faking videos for as long as video has existed. Deepfakes is just a scary new word for what people would call "CGI" or "VFX".
jdietrich · 6 years ago
> Deepfakes is just a scary new word for what people would call "CGI" or "VFX".

"Ryzen 3950x" is just a scary new word for "z80" or "6502".

Deepfakes don't provide any fundamentally new capability, but they do reduce the cost and time required to create a convincing fake video by multiple orders of magnitude. That completely changes the threat model, from "our propaganda rival occasionally releases a fake video that we have to debunk" to "our propaganda rival is producing thousands of fake videos every day and we can't even keep track of them".

philwelch · 6 years ago
We’ve had photographic deep fakes for decades now. How does this fundamentally change anything more than Photoshop already did?
Bartweiss · 6 years ago
We've also had video since before convincing photo edits were available.

So people learned that we shouldn't trust a photo of Martians, or a sound clip of the President confessing to murder, but should hold out for video. When footage of Rob Ford doing crack cocaine shows up, we believe it's real. And when an investigative reporter wants a verifiable record of something, they resort to a high-quality video.

Even if video deepfakes aren't any more realistic than image or audio fakes have been, they're important because there's no fallback. With perfect-accuracy photo and audio fakes, we'd have to be more skeptical. But with perfect-accuracy photo, audio, and video fakes, we'd suddenly be back to the pre-photography era when there was essentially no way for an untrusted source to convince you that something had happened. Deepfakes aren't flawless, we're not at that point, but the specifics of video are less important than the impact of having some medium which can't be convincingly faked.

CamperBob2 · 6 years ago
"We've had computers for decades now. How does this fundamentally change anything more than the IBM 360 did?" -- investor being asked for funding in 1978 by a couple of hippies in Cupertino
alpaca128 · 6 years ago
Photoshop doesn't have a "put this person's face on this other person's shoulders"-button that anyone can use.

Creating convincing deepfake videos is still a lot of work, but a couple of years ago altering one's face in live streams also took much more effort than pressing a button in a free mobile app.

javanix · 6 years ago
What regulations / standards would actually stop these things from getting out though?
eyegor · 6 years ago
The only answer here is DRM-esque file signing. Any attempts at detecting fakes will only work while the tech is young. Ofc this requires a web of trust and verification techniques to be applied, since videos are often re-encoded.
pc86 · 6 years ago
This comment is so egregiously out of touch I hesitate to even respond at all, let alone to do so civilly, however in the interest of discussion I'm going to try to.

Which "enemy" do you envision would use deep fakes for a nefarious purpose, yet will stop short once they realize there are - gasp! - regulations around them? And as far as I'm aware, at least in the United States, there are no regulations requiring the use of microdots in printers or copiers, they're done of the manufacturers' own accord to aid law enforcement.

The idea that we should have ill-informed, knee-jerk regulations in reaction to every mildly upsetting technological fad is antithetical to everything the vast majority of technologists believe in and stand for.

briffle · 6 years ago
I get the point you're making, but I think you're reacting a bit harshly. We don't need one regulation that solves all our problems. We can have 'defense in depth'.

Current laws that put microdot detection in color copiers will not deter counterfeiting operations with huge resources, like state-backed ones. But they do prevent the local meth junkie down the road from making a bunch of fake 20s to get his next fix.

At the same time, the paper that we use is also highly regulated. It's not impossible to get something similar, but it's not easy (and not cheap).

The point of all these layers is to prevent the casual and common crimes. By doing that, you can spend your resources on the larger operations.

gambiting · 6 years ago
It just reminds me of my 85-year old grandfather who still in 2019 maintains that photocopiers should be banned for private ownership because people can make illegal copies of documents on them.
gonehome · 6 years ago
You said you'd try to respond civilly, but your comment is openly hostile and this takes away from your valid core point.
Bartweiss · 6 years ago
As far as technical responses, there are at least four different goals to pursue regarding fakes:

1. Origin tracing. This is the microdot example: it's not meant to catch deepfakes, but link nefarious instances to their source. But the existing technical options here seem to be a mix of privacy-eroding (printers are dumb compared to phones/computers, and you'd have to prevent or ban sharing non-marked media) and ineffective (bulk color copying requires physical access to a device that's hard to make or modify; deepfakes can be constructed on a server in another country, or have their identifiers scrubbed after the fact).

2. Reactive, general verification. This is just an arms race between fakers and observers, like catching art forgers. Existing fakes have tells like a modified 'halo' around faces. Right now we only see manual checks, but major content hosts like Youtube could flag "suspected deepfake" like they do "music copyright strike". (Or hopefully better than that...) But it depends on staying ahead of fakers, and once the tells are too subtle to simply watch for it will only work when hosts or viewers choose to validate content.

3. Proactive, general verification. This corresponds to hard-to-implement, easy-to-verify security features like UV watermarks on money or prescriptions. But those rely on controlled supply, and decades of DRM failures tell us that digital fakes are much easier to make and safer to pass than physical ones. I don't expect this to expand beyond closed groups like news orgs giving out auto-watermarking cameras.

4. Specific authentication: not eradicating fakes but proving certain videos are legitimate. This is the most plausible, interesting category. We can't even prevent manipulative photo edits today, so we're unlikely to prevent manipulative deepfakes, but today we can prove specific aspects of specific images are legitimate. People will still believe fakes and lack proof of some real events, but this prevents a more fundamental transition to a "post-truth" era; we'll still have known-good records of key events.

We've had low-tech authentication since the dawn of photography - think of hostage photos taken with a daily newspaper to prove "this image is newer than this day". As photo editing emerged, steganography developed to catch out altered elements. Cryptographic signing took that further, allowing us to prove that an image is unaltered from a specific keyholder. We even have the reverse of the old newspaper photo; publishing a hash or encrypted file lets us date an image back to a specific time without having to actually release it.
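The publish-a-hash trick described above is simple enough to sketch in a few lines (a toy illustration, not any particular service's API): you commit to the image by publishing only its digest, then reveal the image later.

```python
import hashlib

# Stand-in for the raw bytes of an original, unreleased photo.
photo = b"raw bytes of the original photo"

# Step 1: publish only the digest somewhere append-only and dated
# (a newspaper ad, a public timestamping service) without releasing the photo.
commitment = hashlib.sha256(photo).hexdigest()

# Step 2: later, on release, anyone can check the photo against the
# published digest and conclude it existed at commitment time.
assert hashlib.sha256(photo).hexdigest() == commitment

# Any altered version of the image fails the check.
assert hashlib.sha256(photo + b" edited").hexdigest() != commitment
```

Note this only proves the image is at least as old as the published digest; it says nothing about whether the image was genuine when committed.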

Proving that a file is authentic to the world, not just an owner, is trickier. But we already have some steps: deepfakes take time, so any livestreamed video is not being edited that way after the fact. Authenticating something time-specific like a Presidential speech would only require combining that rapid turnaround with proof that the video wasn't prepared in advance; until on-the-wire editing becomes convincing a newspaper in the background would suffice. Quite likely we'll see more complex arrangements eventually, like trusted hosts that issue random values and demand their use in rapid responses.
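The trusted-host arrangement at the end of that paragraph might look roughly like this sketch (all function names are hypothetical): the host issues an unpredictable nonce, the speaker incorporates it and publishes quickly, and the short turnaround is what rules out after-the-fact editing.

```python
import hashlib
import secrets

def issue_challenge() -> str:
    """Trusted host issues a random nonce; unpredictability is the point,
    since footage incorporating it cannot have been prepared in advance."""
    return secrets.token_hex(16)

def respond(nonce: str, video_bytes: bytes) -> str:
    """Speaker binds the footage to the nonce (e.g. it's read aloud or
    shown on screen) and publishes a receipt promptly."""
    return hashlib.sha256(nonce.encode() + video_bytes).hexdigest()

def check(nonce: str, video_bytes: bytes, receipt: str) -> bool:
    """Anyone can confirm the published footage matches the receipt."""
    return hashlib.sha256(nonce.encode() + video_bytes).hexdigest() == receipt

nonce = issue_challenge()
receipt = respond(nonce, b"footage of the speech")
assert check(nonce, b"footage of the speech", receipt)
assert not check(nonce, b"doctored footage", receipt)
```

In practice the binding would be physical (the nonce visibly in frame) rather than just a hash, and the verifier would also check the elapsed time between challenge and publication.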

None of this is going to stop people from believing fakes, but nothing ever has. What's more significant is whether we maintain the ability to create records which can be verified and trusted.

syshum · 6 years ago
Sadly they do not really need deep fake technology to spread fake news
csommers · 6 years ago
"can/will" - deception of this magnitude has already been seen: Adnan Hajj.
SimbaOnSteroids · 6 years ago
naive question, won't any deep faked video leave some pretty readily detectable artifacts in the underlying video file?
godelski · 6 years ago
If they are visible to the model, then these artifacts could also be generated.
make3 · 6 years ago
that's an issue with electing presidents and powerful people that are over 70 years old lol

2804t3qwp · 6 years ago
I'm curious to know why you feel like regulation and standards are the solution here. It seems fairly clear that the propaganda metagame heavily favors disinformation and is only getting better at creating disinformation products. If we accept that premise, then it seems like the real problem is that we're still trying to preserve the idea that the internet is a tool for learning. Certainly parts of the internet are, but if the bloggers vs. professional press are anything to go by then it seems like the most practical solution is to establish lists of trusted entities and the contexts in which they're trusted. The managing of lists is regulatory in a sense but the trust side of the equation seems more social than bureaucratic, people need to be much less trustful of the internet.

eutropia · 6 years ago
We should be less worried about overt propaganda like DeepFakes and more worried about the assumptions baked into the media of all imperialist nations, the misdirection, and the selection of "what" to report on. States have been lying to us since time immemorial, and they don't need fancy video evidence to do it.

But this is a cool demo, and since the Genie is out of the bottle, this can be a great tool. You could use it to force those who speak in public the most to be accountable for the things that they don't say, or equivocate on, by making videos of them saying it, and forcing them to go on the record denying the video, in contradiction to their established (unspoken) position.

uponcoffee · 6 years ago
> States have been lying to us since time immemorial, and they don't need fancy video evidence to do it.

True, but now they'll be able to drive any narrative with as much A/V 'evidence' as they want. Lies by omission or one sided reporting are dangerous in their own right, but challenging that is different from challenging fake evidence.

It's hard to say "It wasn't me" if you're on tape/film. You'd have to have experts argue over validity but the public doesn't have the attention span or trust to follow that. Deepfakes have the potential to be extremely damning/damaging to public image/reputation in a way that biased reporting never did.

larnmar · 6 years ago
What is a non-“imperialist” nation, and what makes you think their media is any better?

The rest of your comment I agree with, but attributing it to “the media of imperialist nations” instead of just being a property of media organisations in general, is wrong.

eutropia · 6 years ago
I mean states which are ruled democratically rather than by elites. This means that the implicit bias in a democratic media is that of serving the people and exposing truth rather than covering for corpo-fascists and state-sponsored terrorism. There's such a thing as genuinely good reporting and integrity in journalism (even in authoritarian states) - it just needs to also be free of coercion and aware of the frame of reference that it exists in.
TimMurnaghan · 6 years ago
It still looks wrong. There's still some of the floating/stuck-on head feeling about it. Or is that just because we already know it's fake from the headline? (... and the way that the flag moves is totally wrong .../s).
jedberg · 6 years ago
You’d have to be paying real close attention to notice that, which you are primed to do because you’ve been told it’s a deep fake. You’re probably also more of an expert than most.

But think of the general US electorate. How many of them, seeing this clip on the news as they prepare dinner, would know it’s fake?

jdminhbg · 6 years ago
> But think of the general US electorate. How many of them, seeing this clip on the news as they prepare dinner, would know it’s fake?

The technology to fool someone who has the TV on in the background has existed forever. You could do it with a stunt double. The problem is fooling everyone such that "this video is faked" doesn't become a bigger story than the fake video in the first place.

minikites · 6 years ago
>But think of the general US electorate. How many of them, seeing this clip on the news as they prepare dinner, would know it’s fake?

I'm not convinced deepfakes will be that big of a problem because people already believe things that aren't real (and don't believe things that are real) without the help of technology.

jacquesm · 6 years ago
Real life is not exactly preparing people for spotting fakes these days. The bizarre is real, the plausible will likely get a pass.
hanniabu · 6 years ago
> But think of the general US electorate. How many of them, seeing this clip on the news as they prepare dinner, would know it’s fake?

This won't change much compared to now. There are videos of Trump saying something, then in an interview he denies he ever said that. People believe that he never said those things and real proof is dismissed. What I'm saying is, people believe lies even when there's no proof and when there's video proving that what was said is a lie.

jstanley · 6 years ago
I don't agree. If I didn't know that was a deepfake, there is no way I would guess that it is. It's more than good enough to convince me.
Loughla · 6 years ago
I'm with you. If this article was titled, "unbroadcasted footage of Nixon speech found in archive" I would've bought that headline 100%.

This is too good.

This is going to cause problems. Many, many problems.

We already have issues with what is real and what isn't, when you can still trust video for the most part. I am not excited to see what this does to society.

It's one of those - just because we can, does that mean we should - sort of scenarios.

asdfman123 · 6 years ago
That's true, but you have to also realize that Nixon looks wrong on TV in general. Maybe they did too good of a deep-fake. ;)
proverbialbunny · 6 years ago
If you're not old enough to know what Nixon was really like, then you don't have that comparison. You wouldn't suspect it was a fake without being told.

However, there are some clear giveaways that may or may not stick around, which prime at least me to identify a deep fake: his movements do not line up with the way muscles work in the human body. In a single direction it works, but the way he bounces back would cause a lot of neck strain, so no one would do that.

friendlybus · 6 years ago
Yeah the acting is off. The mouth doesn't seem to carry the same gravitas and tension that his eyes do.

Also there's a line across his cheek in the closeup and the bottom left hand corner of the mouth fades into a blurry mess once or twice. The deepfake look generates blurry results for blending.

everdrive · 6 years ago
Don't forget the other side of the coin:

The President really does give a speech or make a comment, but 50% of the electorate is sure it's a deepfake.

dwoozle · 6 years ago
No one says that deep fakes are production-ready today... but given the speed of progress, they're going to be in FAR less time than it takes for our social and political systems to mount a response, so there are going to be some fucked up consequences of them.
Vysero · 6 years ago
I agree, I could tell. The shadowing was wrong, he looks like a bobble head. That being said, it was too close for my comfort... it won't take long before they can fool anyone.
DoofusOfDeath · 6 years ago
Both the head-motion and the voice seemed a little jumpy/choppy to me, reminiscent of Max Headroom.

Do authentic TV recordings of Nixon have that same quality?

vernie · 6 years ago
The disparity between his speech and head movement makes it look like he's suffering from Parkinson's disease.
Swizec · 6 years ago
Honestly, even knowing it was fake I had a hard time convincing myself it was fake. My only clue was that the way his chin intersected with the collar looked a little off.

But I’ve seen things look off like that in real life under certain lighting conditions so I would’ve waved it away.

driverdan · 6 years ago
The wider shot definitely had an uncanny valley feeling for me. His head movement seemed unnatural.

The tight shot, however, was really good. I'm not sure I'd think it was fake if I wasn't primed for it.

lefstathiou · 6 years ago
Part of me feels that the widespread proliferation of deep fakes can potentially be positive. Seeing will no longer be believing and society will be forced to look at things with a more critical and skeptical eye or with a higher level of diligence. Value should also shift back to more legitimate sources. The transition will certainly suck though.
netwanderer3 · 6 years ago
Make-A-Wish Foundation can really benefit from this technology by sending these "heartfelt" deepfake videos of famous celebrities and idols to dying kids, hoping to cheer them up. Is it an ethical thing to do? You be the judge!
jobigoud · 6 years ago
This is fantastically evil. I think you win the dangerous idea of the day award.

It reminds me of the market for answering machine messages, but here you will have a large catalog of celebrities and you can create a personalized video message of them talking about you or someone you know.

A con artist could also buy that to pretend they know a certain person.

So the website would work like this: you pick a celebrity, you pick a theme/context/environment, like Skype call, handheld phone video, at home webcam, etc. and then you pick the message.

You can have actors with a similar build to the target celeb acting specific scenes, to make the scenes more unique but still reusable. Payment options differ based on the exclusivity of the acted scene.

Ozumandias · 6 years ago
In a thread full of awful futures, I find this one to be the most depressing.
undersuit · 6 years ago
Hallmark shares surge on the introduction of the Custom Celebrity Holiday Greetings Digital Card Series.
umvi · 6 years ago
More likely it will further reinforce echo chambers so that people will truly only see what they want to believe and nothing more. Anything they disagree with or is inconvenient to their worldview is "deep fake"
tigershark · 6 years ago
What’s the difference with today “fake news” outrage for absolutely real news?
undersuit · 6 years ago
You're spot on. It's going to be hard.

The complete breakdown in privacy described in 'The Light of Other Days' really changed my worldview. I am and will remain a proponent of personal privacy, but the world isn't going to respect that. We need to seriously start considering taking back some of our power. Example: UK CCTV footage - that should be public domain. The knowledge of those that access the footage, public domain too. Then include public education about the dangers of performing activities in public. That someone might use the footage to steal your credit card shouldn't be a reason to hide this data. That someone could stalk another individual through the network isn't an excuse to hand over the data to the government.

krapp · 6 years ago
>Seeing will no longer be believing and society will be forced to look at things with a more critical and skeptical eye or with a higher level of diligence.

That's not what will happen. What will happen is that people will consider video which corroborates their biases to be legitimate, and video which contradicts their biases to be fraudulent. Bear in mind that skepticism is already on the rise - the web is rife with it, to the point that any news source whose reporting isn't sufficiently paranoid, cynical or piss-taking is dismissed out of hand as likely propaganda. Yet this widespread mistrust in just about everything hasn't resulted in a rise in critical thinking or due diligence - rather the opposite, because it's easy for people to live in self-perpetuating alternate universes, with multiple positive feedback loops provided through the web reinforcing their filter bubbles.

>Value should also shift back to more legitimate sources.

It might, if anyone believed that legitimate sources existed anymore outside of 4chan, Reddit and Comedy Central. Unfortunately it seems as if we as a society have decided to abandon the premise that objective truth exists, as the world around us is fed to us more and more as abstractions by untrustworthy arbiters. Deepfakes aren't going to help solve the problem of who can be trusted to "legitimize" truth in a "post-truth" world.

egdod · 6 years ago
That’s awfully optimistic. Even when seeing shouldn't be believing, people still will.
personjerry · 6 years ago
That's only the case if making the fakes becomes a very easy and common thing to do. I think more likely the case will be that certain already-powerful groups will have more ability to make these fakes and thus actually make the balance of information influence even worse.
tigershark · 6 years ago
There are already very good porn videos with celebrities, probably not as good as this, but it's just a matter of time. The barriers to entry are really low; you don't need the already-powerful groups.
axiomdata316 · 6 years ago
More likely everything, even legitimate things, will be 'poo-pooed' as fake. Believing in nothing is as dangerous as believing everything. Both are forms of credulity.
CamperBob2 · 6 years ago
Seeing will no longer be believing and society will be forced to look at things with a more critical and skeptical eye or with a higher level of diligence. Value should also shift back to more legitimate sources.

Not to drag the thread in a political direction, but that's what I thought when Trump won the Presidency. Rather than a boost to our cognitive immunity, though, we're only seeing stronger polarization.

The transition will certainly suck though.

Exactly, only there's no sign of an end to the 'transition.' Today, the common person is more empowered than ever before to ignore what they don't want to hear.

bem94 · 6 years ago
I find this as interesting as I do worrying. Given our tenuous grasp of shared facts and a common basis for (political) reality these days, making videos like this just seems irresponsible.

We don't need any more fuel for the fire: "They faked this Nixon Video, so they could have faked <X> too!"

ISL · 6 years ago
Well, in this case, they would be correct.

Finding a reliable way to exchange trusted information is a central problem of our time.

pjc50 · 6 years ago
There's already an outbreak of "fakes" by the simple technique of misleading editing:

https://www.independent.co.uk/news/uk/politics/corbyn-ira-vi...

https://www.mirror.co.uk/news/politics/tories-release-anothe...

mac01021 · 6 years ago
Is there any reason it should be a worse problem for video than it's been for print over the last 20 years?
keiferski · 6 years ago
The speculation on deepfakes in politics reminds me of a particular period in Russian history known as the Time of Troubles, when numerous people (called False Dmitrys) claimed to be the heir to the Russian throne. I believe a similar scenario happened a few times in the Ottoman Empire, but I'm having difficulty finding it.

https://en.wikipedia.org/wiki/False_Dmitry

makerofspoons · 6 years ago
I hadn't thought about how deep fakes could help us explore alternate history before. That's a chilling clip.