jonathanmayer · 4 years ago
(Context: I teach computer security at Princeton and have a paper at this week's Usenix Security Symposium describing and analyzing a protocol that is similar to Apple's: https://www.usenix.org/conference/usenixsecurity21/presentat....)

The proposed attack on Apple's protocol doesn't work. The user's device adds randomness when generating an outer encryption key for the voucher. Even if an adversary obtains both the hash set and the blinding key, they're just in the same position as Apple—only able to decrypt if there's a hash match. The paper could do a better job explaining how the ECC blinding scheme works.
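
To make the blinding step concrete, here is a heavily simplified sketch of the idea. It is my own toy (modular exponentiation standing in for the elliptic-curve group, no threshold layer, no synthetic vouchers, invented names), not Apple's code: the point is that the outer key a device derives from the blinded hash table only agrees with the key the server can derive when the image's NeuralHash actually matches the database entry, and the device's fresh randomness keeps everything else opaque.

    import hashlib
    import secrets

    P = 2**127 - 1        # toy prime modulus; NOT a cryptographically sound group
    ORDER = P - 1

    def hash_to_group(h: bytes) -> int:
        return pow(5, int.from_bytes(hashlib.sha256(h).digest(), "big") % ORDER, P)

    def kdf(x: int) -> bytes:
        return hashlib.sha256(str(x).encode()).digest()

    # Server: blind the database NeuralHash with a secret exponent alpha.
    alpha = secrets.randbelow(ORDER - 1) + 1
    db_hash = b"neuralhash-of-a-known-image"
    blinded_entry = pow(hash_to_group(db_hash), alpha, P)    # what devices see

    # Device: pick fresh randomness beta per voucher, derive the outer key from
    # the blinded table entry, and send only H(image_hash)^beta alongside it.
    def device_voucher(image_hash: bytes):
        beta = secrets.randbelow(ORDER - 1) + 1
        outer_key = kdf(pow(blinded_entry, beta, P))          # H(db_hash)^(alpha*beta)
        sent_point = pow(hash_to_group(image_hash), beta, P)  # H(image_hash)^beta
        return outer_key, sent_point

    # Server (or anyone holding alpha and the table): re-derive a candidate key.
    def server_key(sent_point: int) -> bytes:
        return kdf(pow(sent_point, alpha, P))                 # H(image_hash)^(alpha*beta)

    k_match, p_match = device_voucher(db_hash)                # hash matches the database
    k_other, p_other = device_voucher(b"some-benign-photo")   # hash does not match
    print(server_key(p_match) == k_match)   # True: keys agree, outer layer decrypts
    print(server_key(p_other) == k_other)   # False: keys differ, voucher stays opaque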

jobigoud · 4 years ago
> only able to decrypt if there's a hash match

This is one of the concerns in the OP: have an AI generate millions of variations of a certain kind of image and check the hashes. In that case it boils down to how common NeuralHash false positives are.
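
To put a number on the worry (purely illustrative): a minimal sketch using a toy 64-bit average hash as a stand-in for NeuralHash, which is a neural network with a different matching pipeline, so these numbers say nothing about its real false-positive behaviour. It only shows how an attacker would probe a perceptual hash's tolerance by generating perturbed variants and checking for matches.

    import random

    def average_hash(pixels):      # pixels: 8x8 grid of grayscale values 0..255
        flat = [p for row in pixels for p in row]
        mean = sum(flat) / len(flat)
        return sum(1 << i for i, p in enumerate(flat) if p >= mean)

    def perturb(pixels, noise=8):  # small random change to every pixel
        return [[min(255, max(0, p + random.randint(-noise, noise))) for p in row]
                for row in pixels]

    random.seed(0)
    base = [[random.randint(0, 255) for _ in range(8)] for _ in range(8)]
    target = average_hash(base)

    trials = 10_000
    hits = sum(average_hash(perturb(base)) == target for _ in range(trials))
    print(f"{hits}/{trials} perturbed variants kept exactly the same hash")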

NTroy · 4 years ago
Yes, this ^^^^^^

> The proposed attack on Apple's protocol doesn't work.

With all due respect, I think you may have misunderstood the proposed attack @jonathanmayer, as what @jobigoud said is correct.

amelius · 4 years ago
There may be another attack.

Given some CP image, an attacker could perhaps morph it into an innocent looking image while maintaining the hash. Then spread this image on the web, and incriminate everybody.

GistNoesis · 4 years ago
Yes, perceptual hashes are not cryptographically secure, so you can probably generate collisions easily (i.e. a natural-looking image which has an attacker-specified hash).

Here is a proof of concept I just created on how to proceed: https://news.ycombinator.com/item?id=28105849
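
To give a feel for why that's plausible, here is the shape of the idea as a minimal sketch against a toy 64-bit average hash (not NeuralHash; an attack on the real system would have to target the actual network, and the PoC linked above takes its own approach). A naive hill-climb nudges pixels until the image's hash moves toward an attacker-chosen target:

    import random

    def average_hash(pixels):      # toy perceptual hash over an 8x8 grayscale image
        flat = [p for row in pixels for p in row]
        mean = sum(flat) / len(flat)
        return sum(1 << i for i, p in enumerate(flat) if p >= mean)

    def hamming(a, b):
        return bin(a ^ b).count("1")

    random.seed(1)
    image = [[random.randint(0, 255) for _ in range(8)] for _ in range(8)]
    target = random.getrandbits(64)          # the attacker-specified hash

    best = hamming(average_hash(image), target)
    for _ in range(200_000):
        if best == 0:
            break
        r, c = random.randrange(8), random.randrange(8)
        old = image[r][c]
        image[r][c] = min(255, max(0, old + random.choice((-5, 5))))
        d = hamming(average_hash(image), target)
        if d <= best:
            best = d                         # keep nudges that don't hurt
        else:
            image[r][c] = old                # revert nudges that do

    print("Hamming distance to the target hash after the search:", best)

Against a real perceptual hash you would additionally constrain the perturbation so the result still looks like a chosen innocent image, which is exactly the "morph it into an innocent looking image" attack described above.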

AwaAwa · 4 years ago
Sounds like a fantastic way for law enforcement to get into your phone with probable cause. Randomly message you a benign picture from some rando account with a matching hash. Immediate capture for CP, data-mine the phone, insert a rootkit, 'so sorry about the time and money you lost - toodles'.
cube00 · 4 years ago
It'd be interesting to see whether the way common images are reused (for example in memes, by only adding text) would be enough to change that hash. If it wasn't enough, it could spread very quickly.

Of course I'd dare not research or tinker with it lest I be added to a list somewhere; such is the chilling effect.

I guess in that case they'd delete that single hash from the database because they'd still have an endless (sadly) supply of other bad image hashes to use instead.

macintux · 4 years ago
> Then spread this image on the web, and incriminate everybody.

You'd still have to generate several images and persuade people to download several of them into their photo library. And as I understand it, there's yet another layer of Apple employees who review the photo metadata before it ever makes its way to law enforcement.

kfprt · 4 years ago
It won't be long until these types of systems are mandated. Combined with a hardware root of trust, it's not inconceivable that modifying your hardware not to report home will also be made a crime. It never stops with CSAM either; pretty soon it's terrorism and whatever vague new definition they use.

The focus on CSAM seems extremely hypocritical when authorities make so little effort to stop ongoing CSA. I would encourage everyone to research the Sophie Long case. Unless there is image or video evidence, the police make little effort to investigate CSA because it's resource-intensive.

stjohnswarts · 4 years ago
Total surveillance is definitely the end goal of policing forces. It's in the very nature of how they get their job done (what better way to catch criminals than a computer constantly scanning everyone's every move?), and it's why people always need to push back against the "think of the children" pretexts they use to get their foot in the door and gain more control.
zimpenfish · 4 years ago
> It never stops with CSAM either, pretty soon it's terrorism and whatever vague new definition they use.

But PhotoDNA has been scanning cloud photos (Google, Dropbox, Microsoft, etc.) to detect CSAM content for a decade now, and this "pretty soon it's terrorism" slippery slope hasn't yet manifested, has it?

If the slope was going to be slippery, wouldn't we have seen some evidence of that by now?

randomhodler84 · 4 years ago
Don’t be so naive. It took less than 3 years for DNS blocking to go from CSAM to copyright infringement. It always was about building censorship infra. I’ve been fighting internet censorship for over a decade and it only gets worse with every generation of technology. I want to throw all this government spyware away. Dystopia landed a long time ago.
joe_the_user · 4 years ago
Regardless of whether this attack works or not, you'd assume this scheme produces a wider attack surface against pictures in iCloud and against iCloud users. One attack I could imagine is a hacker uploading child porn to a hacked device to trigger immediate enforcement against a user (and sure, maybe there are more controls involved, but would you carry around a very well-protected, well-designed hand grenade in your wallet just on the promise that it only explodes if you're bad?).
selsta · 4 years ago
How is this iCloud specific? You could do the same with Google Photos or OneDrive.
joe_the_user · 4 years ago
"How is this iCloud specific?"

In case you didn't read the topic, what is specific (for now, for now...) to iCloud/Apple is the "we're scanning your photos on your device and maybe reporting them if they're bad" approach. So you get the local hashes on the supposedly encrypted files, and you get local files triggering global effects like the police swooping down and arresting you. So that's why this despicable and harebrained scheme in particular produces a greater "attack surface" in multiple ways.

And again, sure, Apple doing this quite possibly will set a precedent for Google et al., which answers the other meaning your ambiguous comment could carry.

nicce · 4 years ago
Literally the same for almost every other big cloud provider (Facebook, Instagram, Discord, Reddit, Twitter and so on), granted that you have access via the phone.
mnd999 · 4 years ago
Or even a hash collision with a banned image. Actually, if such collisions could be generated and widely distributed, this thing could fall apart pretty quickly.
jl6 · 4 years ago
For some reason, after reading the initial reporting on this system, I thought it was running against any photos on your iPhone, but now that I've read the actual paper, it seems like it only applies to photos destined to be uploaded to iCloud? So users can opt out by not using iCloud?
foerbert · 4 years ago
Much of the discussion is about how trivial it would be for Apple to start scanning any photos on the phone at a later date.

Right now they are able to bill this as doing what they currently do server side, but client side. Later, they can say they are simply applying the same "protections" to all photos instead of merely the ones being uploaded to iCloud.

nicce · 4 years ago
They can do that already. The system is a complete black box, and all we have is their word. So saying that adding something might enable something else is not a strong argument.
tandav · 4 years ago
Friendly reminder: as long as the iOS source code is closed, all privacy claims are backed only by trust. They can easily do whatever they want if you're not compiling from source.
cmsj · 4 years ago
This isn't really true in a world where it's trivial to reverse engineer and decompile binaries.

For example, we already have a tool for generating NeuralHash hashes for arbitrary images, thanks to KhaosT:

https://github.com/khaost/nhcalc

RegnisGnaw · 4 years ago
Also, don’t upload to MS, Google, or Dropbox, as they scan for CSAM too.
NTroy · 4 years ago
If Apple is to keep their word about guaranteeing the privacy of non-CSAM photos (which this whole discussion is about them not doing a very good job of), then they would only be able to do that for photos stored in iCloud, because of how the identification process is technically specified. That said, other photos across your device are still monitored in a different way. For example, Apple will scan photos that you send or receive via iMessage to automatically detect if they're nudes, and if you're underage, they will block them/send a notification to your parents.
zimpenfish · 4 years ago
> Apple will scan photos that you send or receive via iMessage to automatically detect if they're nudes

Only if they're being sent to or from a minor, I thought?

sharikone · 4 years ago
Have you ever turned some setting off, only to find it was "accidentally" turned on again after some update/reboot?
dathinab · 4 years ago
As far as I know, Apple plans to put up two systems: one focused on phones of people under 13, which filters "more or less" any photo and uses AI to detect explicit photos, and one which looks for known child pornographic photos and for now seems not to necessarily apply to all photos.

But I haven't looked too closely into it.

bengale · 4 years ago
Yeah this is basically it.

They have a system that checks image hashes against a database to try and find specific CSAM when images are uploaded to iCloud; this already happens server-side but is now moving on device. When explaining this I've used the analogy that here they are looking for specific images of a cat, not all images that may contain a cat. When multiple images are detected (some threshold, not defined), it triggers an internal check at Apple of details about the hashes and may then involve law enforcement (see the sketch below for how such a threshold can be enforced cryptographically).

The other one is for children 12 and under that are inside a family group. The parents are able to set it up to show a popup when it detects adult content. In this case they are looking for cats in any image, rather than a specific cat image. The popup lets the child know it may be an image not suitable for kids, that it's not their fault, and that they can choose to ignore it. It also lets them know that if they choose to open it anyway, their parents will get notified and be able to see what they've seen.

This is a good rundown: https://www.apple.com/child-safety/
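
On the threshold part mentioned above: Apple's technical summary describes threshold secret sharing, where each voucher carries one share of an account-level key and the server can only reconstruct that key (and inspect anything) once it has collected enough shares from matching images. A textbook Shamir sketch of that mechanism (toy parameters and my own code, not Apple's):

    import secrets

    PRIME = 2**127 - 1          # toy field; large enough for a 120-bit key

    def make_shares(secret: int, t: int, n: int):
        # Random polynomial of degree t-1 with the secret as its constant term.
        coeffs = [secret] + [secrets.randbelow(PRIME) for _ in range(t - 1)]
        def poly(x):
            return sum(c * pow(x, i, PRIME) for i, c in enumerate(coeffs)) % PRIME
        return [(x, poly(x)) for x in range(1, n + 1)]

    def reconstruct(shares):    # Lagrange interpolation at x = 0
        secret = 0
        for j, (xj, yj) in enumerate(shares):
            num = den = 1
            for m, (xm, _) in enumerate(shares):
                if m != j:
                    num = num * -xm % PRIME
                    den = den * (xj - xm) % PRIME
            secret = (secret + yj * num * pow(den, -1, PRIME)) % PRIME
        return secret

    account_key = secrets.randbits(120)              # per-account inner encryption key
    shares = make_shares(account_key, t=10, n=100)   # one share goes in each voucher

    print(reconstruct(shares[:10]) == account_key)   # threshold reached: True
    print(reconstruct(shares[:9]) == account_key)    # below threshold: no key recovered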

avianlyric · 4 years ago
Yeah, pretty much. Another way of thinking about it is that to upload an image to iCloud, your phone must provide a cryptographic safety voucher to prove the image isn't CSAM.
shuckles · 4 years ago
The question presumes the database leak also comes with the server-side secret for blinding the CSAM database, which is unlikely (that's not how HSMs work) and would be a general catastrophe (it would leak the NeuralHashes of photos in the NCMEC database, which are supposed to remain secret).
gorgonzolachz · 4 years ago
Yeah, I've worked with HSMs in the past and to say that it's a challenge to get key material out of them is an understatement. That said, a lot of this depends on the architecture surrounding the HSM - if the key material leaves the HSM at any point, you've basically increased your attack surface from an incredibly secure box to whatever your surrounding interfaces are. At Apple's scale, I have to imagine it's more economical to have some kind of envelope encryption - maybe this is the right attack vector for a malicious actor to hit?
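
For readers unfamiliar with the term, a generic envelope-encryption sketch (illustrative only, not Apple's architecture; it uses the third-party `cryptography` package, and a plain Python object stands in for the HSM boundary):

    from cryptography.fernet import Fernet

    class FakeHSM:
        """Stand-in for an HSM: the master key never leaves this object."""
        def __init__(self):
            self._master = Fernet(Fernet.generate_key())

        def wrap(self, data_key: bytes) -> bytes:
            return self._master.encrypt(data_key)

        def unwrap(self, wrapped: bytes) -> bytes:
            return self._master.decrypt(wrapped)

    hsm = FakeHSM()

    # Encrypt a record under a fresh data key, then keep only the wrapped data key.
    data_key = Fernet.generate_key()
    ciphertext = Fernet(data_key).encrypt(b"some record")
    wrapped_key = hsm.wrap(data_key)
    del data_key                  # plaintext data key is discarded after use

    # Later, anything that can call the HSM can unwrap the key and decrypt, which is
    # the widened attack surface the surrounding interfaces represent.
    print(Fernet(hsm.unwrap(wrapped_key)).decrypt(ciphertext))
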
NTroy · 4 years ago
The question doesn't presume that, as the secret for blinding the CSAM database would only be helpful if a third party were also looking to see which accounts contained CSAM.

In this case, the question assumes that an attacker would more or less be creating their own database of hashes and derived keys (to search for and decrypt known photos and associate them with user accounts, or to bruteforce unknown photos), and would therefore have no need to worry about acquiring the key used for blinding the CSAM hash database.

shuckles · 4 years ago
> What's to stop an attacker from generating a NeuralHash of popular memes, deriving a key, then bruteforcing the leaked data until it successfully decrypts an entry, thus verifying the contents within a specific user's cloud photo library, and degrading their level of privacy?

Decrypting vouchers requires the server blinding key and the NeuralHash derived metadata of the input image (technical summary page 10, Bellare Fig. 1 line 18). This attacker only has the latter.

ashneo76 · 4 years ago
Pretty soon, hosting your own infra and not using the mandated government phone could be made a crime.

But think of the children and the security of society. Couple that with constant monitoring of your car, and you can be monitored anywhere.

kaba0 · 4 years ago
It already is. You are only allowed to use specific wavelengths, and basically every modem is proprietary.
kook_throwaway · 4 years ago
Barely related, but is CSAM a new acronym? I hadn't heard it until this fiasco.
floatingatoll · 4 years ago
No.
Teever · 4 years ago
What is the difference between CP and CSAM and why is everyone suddenly using the term CSAM instead of CP?
whatever1 · 4 years ago
Why does Apple even bother with encryption? They should just skip all of the warrant requirements etc., use their iCloud keys to unlock our content, and store it unencrypted at rest.

Maybe they can also build an API so that governments can easily search for dissidents without the delays that the due process of law causes.

laurent92 · 4 years ago
Funny. The way I imagine NSA’s and FBI’s secret cooperation with Google is exactly this: Provide a search API that gives access to anything.
x2r · 4 years ago
They already have that: https://en.wikipedia.org/wiki/PRISM_(surveillance_program). But the 'security' services want access to what's on people's phones too.
gorgonzolachz · 4 years ago
Facetious as this is, I can't imagine this is anything other than Apple's endgame here.

The best of both worlds: keep advertising their privacy chops to the masses, while also allowing any and every government agency a programmatic way to hash-verify the data passing through their systems in real-time.