woadwarrior01 · 5 years ago
The article is likely a submarine[1] for the deep fake detection company that it mentions (Cyabra).

[1]: http://paulgraham.com/submarine.html

abathur · 5 years ago
Thanks for posting this. It's timely for me.

I've been stuck in a loop with my father for a while now where he forwards me a certain sort of propaganda. I feel like I've struggled to break through on a key point: when I say something smells like propaganda, it doesn't mean I think it's false--it means I think it's part of an organized campaign to build narratives and consensus.

In my case, the topic is highly charged. Reapproaching it with something low-stakes like this is at least worth a try.

sitkack · 5 years ago
Skilled propaganda isn't about making up stories; it's about amplifying and positioning existing truths. Nothing has to be a lie, and it affects the receiver as much as it impacts the whole information supply chain.

This piece might encourage someone else to research a different story, and thus it amplifies the goals of the original propagandist. Many folks in a psyop/propaganda campaign are unaware of any sort of coercion. If the receiver understands it as propaganda, the effort has failed.

It is an exciting time for sociologists; the disinformation war that is occurring right now is global in scope. If the Cold War was WW3, we are currently in WW4.

cortesoft · 5 years ago
This is what frustrates me so much about people who seem to think any article that doesn't lie is totally fine and can never be problematic. There seems to be a belief that you can't deceive unless you lie.

The very best propaganda doesn't lie; it is simply very selective about which truths it shares. By carefully choosing your facts, using loaded language, and properly framing your statements, you can make almost any argument without lying.

woadwarrior01 · 5 years ago
This resonates with me. I sometimes had similar arguments about forwarded messages with my late father, who was an electronics engineer and really sharp in his prime.

Intuitively, it'd seem that the greater dissemination of information through social media and messaging apps would make people smarter and promote critical thinking, but in reality they seem to have the opposite effect. I still wonder whether it was the environment or his age that was the primary factor. Do we become more gullible and less discerning as we grow older? Perhaps I'll be much worse when I'm as old as he was.

schwartzworld · 5 years ago
does your father also have the MSNBC brain cancer?
ChrisMarshallNY · 5 years ago
Makes sense. This old set of articles probably still applies:

https://www.crikey.com.au/topic/spinning-the-media/

Cyabra · 5 years ago
Cool website, didn't know this one...thanks for sharing!
drited · 5 years ago
Could you please expand on what you mean? From the context I think you're saying it's a type of guerrilla marketing, but the link doesn't seem to align with that. Nor does the article, since the academics really do seem to have been falsely accused by the fake student/journalist it mentions.
miseri · 5 years ago
There's something ironic about an article on planted pieces from a fake person itself being planted by a PR firm.
say_it_as_it_is · 5 years ago
Interesting that PG had help from Aaron Swartz in writing that article (2005).
rthomas6 · 5 years ago
That's not a deepfake; it's a computer-generated person. A deepfake is when someone superimposes one person's face onto a video of someone else.
werber · 5 years ago
I was thinking the same thing: is there a word for a computer-generated person? The whole article felt off to me because of what I interpreted as a misunderstanding of what a deepfake is.
phpnode · 5 years ago
> is there a word for a computer generated person?

Maybe "infomorph"?

basch · 5 years ago
The opposite will happen. I doubt the call for precision wins out.

Reuters and other media will stretch deepfake to mean more things, and it will become a category of related frauds.

I have to wonder when the meaning change occurred. It's hard to tell whether they got it from an interview or whether a misunderstanding introduced it into the article.

>A generative adversarial network is the name given to dueling computer programs that run through a process of trial and error, according to Hao Li, chief executive and cofounder of Pinscreen, a startup that builds AI avatars.

>One program, the generator, sequentially fires out millions of attempts at a face; the second program, the discriminator, tries to sniff out whether the first program’s face is a fake. If the discriminator can’t tell, Li said, a deepfake is produced.

https://graphics.reuters.com/CYBER-DEEPFAKE/ACTIVIST/nmovajg...
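
To make the quoted description concrete, here is a minimal sketch of that generator/discriminator duel in PyTorch. It's a toy illustration under my own assumptions (small dense networks, flattened toy-sized images), not how any real face generator such as StyleGAN2 is implemented:

  # Toy GAN training loop (illustrative only; real face generators are far larger).
  # Assumes PyTorch is installed.
  import torch
  import torch.nn as nn

  latent_dim, image_dim = 64, 784  # e.g. 28x28 grayscale images, flattened

  generator = nn.Sequential(
      nn.Linear(latent_dim, 256), nn.ReLU(),
      nn.Linear(256, image_dim), nn.Tanh(),
  )
  discriminator = nn.Sequential(
      nn.Linear(image_dim, 256), nn.LeakyReLU(0.2),
      nn.Linear(256, 1), nn.Sigmoid(),
  )

  g_opt = torch.optim.Adam(generator.parameters(), lr=2e-4)
  d_opt = torch.optim.Adam(discriminator.parameters(), lr=2e-4)
  loss_fn = nn.BCELoss()

  def train_step(real_images: torch.Tensor) -> None:
      # One round of the duel: real_images is a (batch, image_dim) tensor in [-1, 1].
      batch = real_images.size(0)
      real_labels = torch.ones(batch, 1)
      fake_labels = torch.zeros(batch, 1)

      # Discriminator: learn to tell real images from generated ones.
      fakes = generator(torch.randn(batch, latent_dim)).detach()
      d_loss = (loss_fn(discriminator(real_images), real_labels)
                + loss_fn(discriminator(fakes), fake_labels))
      d_opt.zero_grad()
      d_loss.backward()
      d_opt.step()

      # Generator: learn to produce images the discriminator scores as "real".
      g_loss = loss_fn(discriminator(generator(torch.randn(batch, latent_dim))), real_labels)
      g_opt.zero_grad()
      g_loss.backward()
      g_opt.step()
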

EamonnMR · 5 years ago
CGI?
WiseWeasel · 5 years ago
An avatar; placeholder for identity.
monadic2 · 5 years ago
Good luck convincing society of that before “deep fake” takes off as a term.
_5659 · 5 years ago
Media outlets are already using deepfake as a general term for photographic manipulation. You might recall that video of Joe Biden making lewd gestures on Trump's Twitter. That was reported as being a deepfake but in fact just used generic pinching technology.

To expand on the parent comment's point, this is most likely some variant of StyleGAN2, which has plenty of endpoints on the Internet that make it easy for someone to just download a face for free.

However, they all have this uncanny StyleGAN glare. And there are still artifacts. Quality's improving though. Scrubbing it of EXIF data doesn't really deter detection.
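
For what it's worth, here's a small sketch of what "scrubbing EXIF" amounts to, assuming Pillow and a placeholder filename; it shows why metadata removal is trivial and why detection has to rely on pixel-level artifacts instead:

  # Inspecting and stripping EXIF metadata (sketch; assumes Pillow,
  # and "avatar.jpg" is just a placeholder filename).
  from PIL import Image
  from PIL.ExifTags import TAGS

  def show_exif(path: str) -> None:
      # Print whatever EXIF tags the file carries (camera model, timestamps, ...).
      for tag_id, value in Image.open(path).getexif().items():
          print(f"{TAGS.get(tag_id, tag_id)}: {value}")

  def strip_exif(src: str, dst: str) -> None:
      # Naive strip: copy only the pixels into a fresh image and save that.
      img = Image.open(src)
      clean = Image.new(img.mode, img.size)
      clean.putdata(list(img.getdata()))
      clean.save(dst)

  show_exif("avatar.jpg")
  strip_exif("avatar.jpg", "avatar_clean.jpg")
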

Go ahead, try posting some StyleGAN avatars to Facebook or something. Your account will get flagged immediately for doing so. There's already a robust amount of interest in "deepfake" detection, and there are methods to fingerprint a model or poison its training data (an adversarial attack) so that its output carries a dead giveaway. Kind of like putting a dye pack in your bills.

What is NOT helping is how media outlets drum up cheap panic over the "end of truth" or insinuate that the barrier to running disinformation campaigns is dropping. Both claims can be valid, but I think we teach the wrong lesson by telling people they can't trust anything rather than helping them establish what trust means to them in the first place.

Cyabra · 5 years ago
While you're professionally right, the term has stuck to the image and video manipulation aspects, simplifying the terminology for consumers and readers over the last 24 months. And finding a cool name for the image frames generated through GANs (generative adversarial networks) is a great idea...

Probably something as simple, and catchy, as deepfakes (which was ultimately traced back to the original Reddit user, if I remember correctly).

yveys · 5 years ago
Oliver's face was probably created with https://thispersondoesnotexist.com/

Surprised that publications don't do more research, as anyone can create a social media profile that looks legit these days.

Trolls have been here for years and they'll continue to find new ways to exploit weaknesses.

tantalor · 5 years ago
That's not a deep fake, that's just a regular fake
hairofadog · 5 years ago
Right – I thought from the headline they had concocted a video of the activists doing or saying something repugnant, which does seem like a problem that’s fast approaching.

This, on the other hand, is a case of a fake online profile being used to say terrible things about the activist couple. The only notable thing is that the profile photo was computer-generated.

dwighttk · 5 years ago
I mean I bet they even just used thispersondoesnotexist.com until they got a picture that they thought would work.
raverbashing · 5 years ago
Yeah, I agree.

Similar to several accounts created on Twitter to spread misinformation.

xrd · 5 years ago
Has anyone ever experimented with totally sanitizing identities when reading news? For example, if there were a browser plugin that would cleanse pictures, associations, academic credentials, and especially names from news, would that make a difference in the way that we as humans process information?

Deepfakes create an "adjacent identity" to trick people into aligning their opinions with someone who appears to be in the same group. But if we were aware of this and stripped out the identity so as to review only the information, it might change the way that information is received.

This would only work if a majority did this, so it would never work. Never mind.
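
Still, a toy version of the "cleanse names" part is easy to sketch: run named-entity recognition over the article text and redact person names before reading. This assumes spaCy and its small English model are installed, and the example sentence just reuses names from the article:

  # Toy "identity sanitizer": redact person names from article text before reading.
  # Sketch only (pip install spacy && python -m spacy download en_core_web_sm).
  import spacy

  nlp = spacy.load("en_core_web_sm")

  def sanitize(text: str) -> str:
      doc = nlp(text)
      out = text
      # Replace entities right-to-left so earlier character offsets stay valid.
      for ent in reversed(doc.ents):
          if ent.label_ == "PERSON":
              out = out[:ent.start_char] + "[PERSON]" + out[ent.end_char:]
      return out

  print(sanitize("Oliver Taylor accused Mazen Masri of being a terrorist sympathizer."))
  # e.g. -> "[PERSON] accused [PERSON] of being a terrorist sympathizer."
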

dredmorbius · 5 years ago
Reputation matters.

There are differing views on how much to showcase authors (or reporters) names, and some organs (The Economist, Hacker News) intentionally hide or de-emphasize these.

Almost all questions of identity actually revolve around trust in one of its various guises: trust, credit, accountability, entitlement, and the making or receiving of payment. In creating or relating (mediating) information, the persistence of authorship attribution provides a bundling handle for trust. An author might file numerous stories per year, month, week, day, etc. Assessing and assigning trust to each story individually is expensive. Bylines, editors, and publications accrue positive or negative associations with time. No, trust is not fully consistent or transitive, but as an efficiency heuristic, rolling up and bundling reputation offers powerful gains, and numerous checks.

Even a pseudonymous source (you're reading one now) can accrue a certain reputation.

thephyber · 5 years ago
"News" is usually a hybrid of celebrity gossip, ephemeral sports/weather, PR / political narratives, and journalism. You seem to be under the impression that everyone wants to "read news" like they are investigative journalists.

Yes, the crowd on HN is more likely to do this, but I suspect it is very overwhelming for the average news reader to correlate so many data points.

sradman · 5 years ago
> “The distortion and inconsistencies in the background are a tell-tale sign of a synthesized image, as are a few glitches around his neck and collar,” said digital image forensics pioneer Hany Farid, who teaches at the University of California, Berkeley.

and:

> Artist Mario Klingemann, who regularly uses deepfakes in his work, said the photo “has all the hallmarks.”

As an amateur photographer who only* knows enough Photoshop to quickly enhance images, I find these two statements completely unconvincing. This type of he-said-she-said news article makes me uncomfortable. TTBD: Truth To Be Determined.

* Edit: I know nothing about deep fakes but I expect "fake faces" to have "hallmarks" in the face.

WhitneyLand · 5 years ago
What do you think is unconvincing about them and in what way do you believe that expertise in Photoshop has any relation to this technology?

At best, Photoshop can play a role in covering up evidence or artifacts of this much more sophisticated approach to faking identities.

It’s not to say the developers and scientists at Adobe are lesser; it’s that it’s not the same tool or problem that’s being solved.

Put crudely, Photoshop can let you draw a mustache on someone’s photo. This is about inventing a photo that never existed before.

interestica · 5 years ago
> in what way do you believe that expertise in Photoshop has any relation to this technology?

They weren't expressing expertise in Photoshop. They were saying that basic use of it is the extent of their expertise.

> What do you think is unconvincing about them

Them being the 'experts' - they did nothing to convince other than say "I'm an expert in this thing."

twak · 5 years ago
> It’s not to say the developers and scientists at Adobe are lesser

Adobe Research scientists are crazy-strong in the area of deep/neural graphics[1]. Perhaps we should disentangle Adobe Research from Photoshop?

[1] https://research.adobe.com/publications/

chrisoverzero · 5 years ago
The article continues into the link at the bottom, which goes into great detail: https://graphics.reuters.com/CYBER-DEEPFAKE/ACTIVIST/nmovajg...
sradman · 5 years ago
That is more helpful, thank you. I'm still unconvinced since the discrepancies described are easily explained with optical depth-of-field as well as resizing and sharpening algorithms; the basic photography kind of stuff I'm familiar with.
Cthulhu_ · 5 years ago
I'm also not convinced; the fake is convincing enough for most people, especially given that it's unlikely a picture like that will ever be displayed at more than 100px.
slim · 5 years ago
The lede is buried: the "deepfake" wrote for multiple newspapers:

  In an article in U.S. Jewish newspaper The Algemeiner, Taylor had accused Masri and his wife, Palestinian rights campaigner Ryvka Barnard, of being “known terrorist sympathizers.” 
So there's a real person behind the account who simply wants to stay anonymous; in this case, traditionally the newspaper bears the responsibility for the smear.

andrewflnr · 5 years ago
That doesn't follow. My first guess was that those articles were just part of the identity's cover and were written by the group that was running "Taylor".
ChrisMarshallNY · 5 years ago
The lower left (his right) section of the collar has a bit of "crunchy pixels." Otherwise, it looks pretty damn realistic.

This appears to be a "thispersondoesnotexist.com" image. Is that site considered "deepfake"?

I always assumed "deepfake" to mean altering existing images to add actual people's likenesses (like the classic "celebrity porn" images). In particular, applied to video.

This type of thing is likely to become de rigueur for astroturf and scam accounts. Disturbing, but not particularly newsworthy.

Before, people would just scrape images from stock photo sites, or even from some poor sap's Facebook photos. Doing it this way simply makes it less likely to be uncovered.

thephyber · 5 years ago
> This type of thing is likely to become de rigueur for astroturf and scam accounts. Disturbing, but not particularly newsworthy.

It's newsworthy if the fake account's posts are used to assassinate the character of real people and the fake persona is pointed out for being fake:

> Reuters was alerted to Taylor by London academic Mazen Masri, who drew international attention in late 2018 when he helped launch an Israeli lawsuit against the surveillance company NSO on behalf of alleged Mexican victims of the company’s phone hacking technology.

rthomas6 · 5 years ago
Look at his rightmost tooth.
itronitron · 5 years ago
Also the neck seems off, as if the person is tipped way back in a chair but facing forward and smiling.