I think the problem Apple ran into was that there was no confusion at all. Apple announced they were going to scan users' devices after years of marketing themselves as a "privacy-focused" company. Shockingly, customers were pretty mad about the whole thing.
There is no way Apple released their initial PR piece without thinking it through and deliberately fusing all those new features together as one big unassailable initiative. It was typical my way or the highway.
Which also makes it funny now that they attempt to distinguish between them and fall into the same hole they dug for other people.
I don't know what you and tharne are talking about here. There was definitely confusion. HN is a tech forum and I still saw plenty of people here worried about how they would get in trouble for having innocent photos of their own children on their phone. You are allowed to be against Apple's plan while still recognizing that many people didn't understand what exactly was part of that plan.
>There is no way Apple released their initial PR piece without thinking it through and deliberately fusing all those new features together as one big unassailable initiative.
Something I bet wouldn't have happened when Katie Cotton was in charge. But yeah, Tim Cook thought he needed a new PR direction, and that is what we got: the new Apple PR machine since 2014.
This is inaccurate by definition, of course. "My way or the highway" implies there is no alternative.
But in this case, of course, if you're an adult, the Messages part of this doesn't apply to you at all, and the photos part can be completely avoided by not using iCloud Photos.
They are gaslighting people who are upset about what is really happening, and what will happen in 5-10 years, and portraying them as confused and ignorant instead. It’s the standard “you’re holding it wrong” Apple play.
Well, there is a huge difference from AntennaGate; the two aren't really comparable. Not to mention Apple did in a way admit to the mistake and gave out a bumper within weeks of the complaints.
Compare that to their keyboard, which took nearly three years before they had a programme for free repair.
I didn't realize at first how much this was going to change my view of Apple. Used to be when I saw the "Verifying xyz..." popup on my laptop, I felt a little more secure. It popped up tonight, and I found myself wondering, "Am I in trouble?"
I guess what I'm saying is, at least for me, this backlash is blurring their entire privacy and security pitch. Apple built a walled garden, told me it was for my own protection, then come to find out the cameras are all pointing to the inside.
Agreed - even if Apple doesn’t back down, giving them hell would make other companies less likely to follow suit. This is a very important line in the sand that they have crossed.
The "confusion" is splitting hairs. Federighi is trying to draw an artificial distinction between client-side scanning and in-transit scanning where the code performing that in-transit scanning merely happens to be running... on the client.
I was definitely confused, for the record. I had the impression that Apple would scan all photos on device, but that is not true. I was also confused because several changes were announced at once, and the conversations sometimes blended them.
"On 5 August, the company revealed new image detection software that can alert Apple if known illegal images are uploaded to its iCloud storage."
People assumed this opens the door for Apple to be alerted of any known file uploaded to its iCloud storage.
IOW, they assumed Apple can check what someone is uploading,[1] despite alleged "end-to-end encryption" and a gazillion promises of "privacy".
No one except the people managing the "detection software" knows what files the hashes represent.
There's no way for the owner of an Apple computer to verify what files Apple is actually checking for.
Is this confusion? It sounds more like a lack of trust.
[1] Mind you, for a majority of computer owners the uploading is likely occurring by default, automatically, outside of the owner's awareness, as opposed to the owner consciously deciding to upload a particular file to a computer in an Apple datacenter. Tech companies know that users rarely change defaults.
They knew they were being hypocritical, so they were reluctant to even divulge the fact that other cloud providers have already been doing it; they wanted to position themselves as the pioneer.
I can't imagine how they thought this would go well.
It's another example of Apple being stuck in an echo chamber and not being able to objectively assess how their actions will be perceived.
How many times have they made product and PR blunders like this?
If Craig tells me I’m misunderstanding this, I distrust them further, because I completely understand the full arena of possibilities, not just the narrow intent.
Because of this interview I’ve added Craig to my mental list of executives who need to lose their job over this little stunt of theirs.
Apple has spent billions in engineering and marketing to establish themselves as the privacy leader, all wiped away by this idiotic system so full of holes you could serve it on crackers.
Vote with your money. If you own AAPL stock, sell it. And loudly refuse to buy Apple devices. That is the only language organizations like this will understand.
This is limited to users of iCloud photos. If you want to store your photos on Apple servers, shouldn’t they have the right to exclude CSAM content? Apple owns those servers and is legally liable. Why is this such a big issue?
> If you want to store your photos on Apple servers, shouldn’t they have the right to exclude CSAM content?
This seems worded to get a Yes answer. So, yes.
It's a big deal because it's unprecedented (to my knowledge) outside of the domain of malware*. Other cloud providers run checks of their own property, on their own property. This runs a check of your property, on your property. That's why people care now. The fact that this occurs because of an intention to upload to their server doesn't really change the problem, not unless you're only looking at this like an architectural diagram. Which I fear many people are.
A techie might look at this and see a simple architectural choice. Client-side code instead of server-side. Ok, neat. A more sophisticated techie might see a master plan to pave the way for E2EE. A net-win for privacy. Cool. But the problem doesn't go away. My phone, in my pocket, is now checking itself for evidence of a heinous crime.
*I hope the comparison isn't too extra. I was thinking, the idea of code running on my device, that I don't want to run, that can gather criminal evidence against me, and report it over the internet... yeah I can't get around it, that really reminds me of malware. Not from society's perspective. From society's perspective maybe it's verygoodware. But from the traditional user's perspective, code that runs on your device, that hurts you, is at least vigilante malware, even if you are terrible.
Personally I don't see on device scanning as significantly different than cloud scanning. I think the widespread acceptance of scanning personal data stored on the cloud is a serious mistake. Cloud storage services are acting as agents of the user and so should not be doing any scanning or interpreting of data not explicitly for providing the service to the end user. Scanning/interpreting should only happen when data is shared or disseminated, as that is a non-personal action.
If I own my data, someone processing this data on my behalf has no right or obligation to scan it for illegal content. The fact that this data sometimes sits on hard drives owned by another party just isn't a relevant factor. Presumably I still own my car when it sits in the garage at the shop. They have no right or obligation to rummage around looking for evidence of a crime. I don't see abstract data as any different.
Because if the content is entirely encrypted (like Apple says it is), they aren't legally liable, and it's entirely voluntary that they do this.
Also, no one (well, most people) has any issue with photos being scanned in iCloud. Photos in Google Photos have been scanned for years and no one cares. The problem is that Apple said that photos are encrypted on your device and in the cloud, but now your phone will scan the pictures, and if they fail some magical test that you can't inspect, your pictures will be sent unencrypted for verification without telling you. So you think you're sending pictures to secure storage, but nope, their algorithm decided that the picture is dodgy in some way, so in fact it's sent for viewing by some unknown person. But hey, don't worry, you can trust Apple; they will definitely only verify it and do nothing else. Because a big American corporation is totally trustworthy.
The issue is that the scanning happens on your device just before upload. So now your own device is scanning for illegal activity _on_ your phone, not the servers.
The second issue is that it will alert authorities.
In regards to CSAM content, those issues may not sound terrible. But the second it is expanded to texts, things you say, websites you visit, or apps you use, it's a lot scarier. And what if, instead of CSAM content, it is extended to alert authorities for _any_ activity deemed undesirable by your government?
I’d expect a secure and privacy focused cloud data storage provider to not know what I’m storing _at all_.
Let’s not beat about the bush, if someone wants to store information in a form that can’t be decrypted by Apple, they can. This is a stupid dragnet policy that won’t catch anyone sophisticated.
Apple has spent the last several years pitching itself as the tech giant that actually cares about privacy. They seemed to be consciously building this image.
To now implement scanning of private information, and to try and sell this obvious 180-degree turnaround onto a slippery slope with the most weasel-worded "but think of the children" trope, is an insult to the customers' intelligence.
I was a keen Apple consumer because I felt that even if their motivation was profit, this was a company who focused on privacy. It was a distinct selling point.
I certainly won’t be buying more Apple products.
For me, Apple lost the main reason to buy their stuff. If they are going to do the same thing everyone else is doing, I refuse to pay the premium they charge.
If they were scanning images that were uploaded to iCloud on Apple's servers, no one would care. iCloud is not encrypted and Apple provides governments access to iCloud data, everyone knows that, and other cloud providers already scan content for CSAM material. The difference is that Apple is doing this scanning on your phone/computer. Right now, they say that only images that are uploaded to iCloud will be scanned, but what's to stop them from scanning other files too? There's been a lot of pushback because this is essentially a back door into the device that governments can abuse.
While Apple owns the servers, they shouldn't be legally liable, no more than a self-storage facility is liable for the items individuals store in their units.
They are scanning images on iPhones and iPads prior to uploading those images to iCloud. If you're not uploading images to iCloud, your photos won't be scanned -- but if you are using iCloud, Apple will absolutely check images on your device.
From Apple's Child Safety page (https://www.apple.com/child-safety/):
> Apple’s method of detecting known CSAM is designed with user privacy in mind. Instead of scanning images in the cloud, the system performs on-device matching using a database of known CSAM image hashes provided by NCMEC and other child safety organizations. Apple further transforms this database into an unreadable set of hashes that is securely stored on users’ devices.
> Before an image is stored in iCloud Photos, an on-device matching process is performed for that image against the known CSAM hashes. This matching process is powered by a cryptographic technology called private set intersection, which determines if there is a match without revealing the result. The device creates a cryptographic safety voucher that encodes the match result along with additional encrypted data about the image. This voucher is uploaded to iCloud Photos along with the image.
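For the curious, here is roughly what that flow looks like in code. This is a minimal sketch under stated assumptions, not Apple's implementation: all names are made up, SHA-256 stands in for NeuralHash (a real perceptual hash matches near-duplicates, SHA-256 does not), and a plain set lookup stands in for private set intersection, so this toy lacks the PSI property that the device itself cannot learn the match result.

```python
# Toy sketch of the client-side flow Apple describes: hash the photo,
# test it against an on-device database, attach a "safety voucher" to
# the upload. Illustrative only; see caveats above.
import hashlib
import secrets
from dataclasses import dataclass

# Stand-in for the on-device database of blinded CSAM hashes.
KNOWN_HASHES = {hashlib.sha256(b"known-image-%d" % i).hexdigest() for i in range(3)}

@dataclass
class SafetyVoucher:
    image_id: str
    matched: bool   # hidden from the device in the real design
    payload: bytes  # stand-in for the encrypted image derivative

def perceptual_hash(image_bytes: bytes) -> str:
    return hashlib.sha256(image_bytes).hexdigest()  # placeholder hash

def make_voucher(image_id: str, image_bytes: bytes) -> SafetyVoucher:
    digest = perceptual_hash(image_bytes)
    return SafetyVoucher(image_id, digest in KNOWN_HASHES, secrets.token_bytes(32))

# A voucher is attached to every photo queued for iCloud upload.
print(make_voucher("IMG_0001", b"holiday photo").matched)  # False
```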
You're splitting hairs unnecessarily. Apple is scanning users' photos on their devices. To say that they are not "scanning devices" because they are (currently) only targeting photos and not every single other part of the phone is unhelpful at best, and detracts from the point that this is a massive violation of their users' privacy. The exact wording here really doesn't matter as much as you think it does.
This is one of those failed apologies that just makes us dislike them even more.
I have no idea why they haven't done a 180 yet; this is a bigger failure than the butterfly keyboard. They are letting themselves become the symbol of technological dystopia in the public consciousness. Even an acquaintance who does construction was venting to me about how bad Apple's policy is and why she is getting a Pixel.
After entirely removing that feature and making a commitment to fight against that kind of future I feel like they owe two more apologies to get on my good side - one for screwing up this bad in the first place and one for insulting my intelligence with their handling of the outcry. This isn't 1990, you don't handwave a mistake this big.
> Even an acquaintance who does construction was venting to me about how bad apple's policy is
I overheard a group of women on the marketing team at my company talking about how creepy it is, and I’ve started having a lot of people ask me about it. It doesn’t seem contained to just techies at this point, but it is concentrated there. I do think it will continue to grow, though; Apple has lost control of the narrative around their brand.
On TikTok there are plenty of videos now going around saying “Apple is scanning your phone to report you to the authorities”, with little to no nuance.
Bingo. They're not deliberately pushing this, they're just the public face on the initiative. You can complain about Apple all you like, but you're not given the choice to boycott the CIA.
The only reason we were even told this was being introduced in the first place is that it's being run on edge hardware (i.e., phones). One talk at DEFCON on weird resource/energy spikes on Apple devices and its existence leaks into the public domain, which is even worse PR. The only difference is that historically such government-level analysis has been conducted behind data-center black boxes.
I speculate their hand is being forced by one or more governments, and rather than admit that, they tried to sell it as best they could. Just speculation.
The stock isn't a like/dislike button. Apparently, the stockholders think that, regardless of what happens, Apple will still be here tomorrow. And to be fair, it's not like a significant portion of Apple customers will throw away their phones.
I think the "confusion" was 100% intentional. That the two features (iMessage scanning & on-device spying pre-upload to iCloud) were intentionally released at the same time to make the whole thing harder to criticize in a soundbite.
Confusion is the best-case scenario for Apple because people will tune it out. If they had released just the on-device spying, public outcry and backlash would have been laser targeted on a single issue.
Fanatics also have a tendency to try to latch onto whatever details may offer a respite from the narrative. The core problem here is that Apple is effectively putting code designed to inform the government of criminal activity on the device. It’s a bad precedent.
Apple gave its legendary fan base a fair few facts to latch onto; the first being that it’s a measure against child abuse, which can be used to equate detractors to pedophile apologists or simply pedophiles (these days, more likely directly to the latter). Thankfully this seems cliché enough not to have become a dominant take.

Then there’s the fact that, right now, it only runs in certain situations where the data would currently be unencrypted anyway. This is extremely interesting, because if they start using E2EE for these things in the future it will basically be uncharted territory; what they’re doing now is merely lining up the capability to do that, not actually doing it. Not to mention, these features have a tendency to expand in scope over the longer term. I wouldn’t call it a slippery slope; it’s more like an Overton window of how much surveillance state people are OK with. I’d say Americans on the whole are actually pretty strongly averse to this, despite everything, and it seems like this was too creepy for many people.

Then there’s definitely the confusion; because of course, Apple isn’t doing anything wrong; everyone is just confused about what these features do and their long-term implications.
Here’s where I think it backfired: because it runs on the device, psychologically it feels like the phone doesn't trust you. And because of that, using anti-CSAM measures as a starting point was a terrible misfire, because to users it just feels like your phone is constantly assuming you could be a pedophile and need to be monitored. It feels much more impersonal when a cloud service does it off in the distance for all content.
In practice, the current short-term outcome doesn’t matter so much as the precedent of what can be done with features like this. And it feels like pure hypocrisy coming from a company whose CEO once claimed they couldn’t build surveillance features into their phones because of the pressure there would be to abuse them. That was only around 5 years ago. Did something change?
I feel like it is really important to Apple that their employees and fans believe they are actually a principled company that makes tough decisions with disregard for “haters” and Luddites. In reality, though, I think it’s only fair to recognize that this is just too idealistic. Between this, the situation with iCloud in China, and the juxtaposition of their fight with the U.S. government, one can only conclude that Apple is, after all, just another company, though one whose direction and public relations resonated with a lot of consumers.
A PR misfire from Apple of this size is rare, but I think what it means for Apple is big, as it shatters even some of the company’s most faithful. For Google, this kind of misfire would’ve just been another Tuesday. And I gotta say, between this and Safari, I’m definitely not planning on my next phone being from Cupertino.
> I’d say Americans on the whole are actually pretty strongly averse to this, despite everything, and it seems like this was too creepy for many people.
You mean the country that doesn't give a damn about privacy because all those fancy corporations give them toys to play with? You know, the companies whose business model is feeding on the world's population's data? The country that puts a camera on the front door to film the neighbourhood 24/7? The country whose homes are full of listening devices in useless gadgets?
You have to be joking, or the scale you're applying here is useless.
This whole thing will blow over fast, and there won't be much damage on the sales side. Apple is the luxury brand. People don't buy it for privacy. Most of the customers probably won't even understand the problem here.
The only thing we might be rid of is the songs of praise in technical circles.
> The core problem here is that Apple is effectively putting code designed to inform the government of criminal activity on the device. It’s a bad precedent.
This is wildly disingenuous.
Apple is putting code on the device which generates a hash, compares hashes, and creates a token out of that comparison. That is 100% of what happens on the device.
Once the images and tokens are uploaded to iCloud Photos, and 30+ of those security tokens show a match, Apple's team is alerted and gets access to only those 30+ photos. They manually review those photos, and if they then discover that you are indeed hoarding known child pornography, they report you to the authorities.
Thus, it would be more accurate to say that Apple is putting on your device code which can detect known child pornographic images.
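A sketch of the server-side half of that description, with a caveat: in Apple's design the 30-voucher threshold is enforced cryptographically (threshold secret sharing), not by an if-statement an operator could skip. The threshold value comes from the public description; the names and everything else here are assumptions.

```python
# Server-side threshold gate as described above. In the real system the
# vouchers' payloads are unreadable until 30+ match, so Apple could not
# peek below the threshold even if it wanted to; this sketch models
# only the logic, not that cryptographic guarantee.
THRESHOLD = 30

def escalate_for_review(account_vouchers: list) -> list:
    """Return the matched vouchers only if the account crosses the threshold."""
    matched = [v for v in account_vouchers if v.matched]
    if len(matched) >= THRESHOLD:
        return matched  # human reviewers see only these photos
    return []           # below threshold: nothing is learned about the account
```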
> And it feels like pure hypocrisy coming from a company whose CEO once claimed they couldn’t build surveillance features into their phones because of pressures for it to be abused.
This isn't a surveillance feature. If you don't like it, disable iCloud Photos. Yes, it could theoretically be abused if Apple went to the dark side, but we'll have to see what this 'auditability' that he was talking about is all about.
Honestly, with all of the hoops that Apple has jumped through to promote privacy, and to call out people who are violating privacy, it feels as though we should give Apple the benefit of the doubt at least until we have all the facts. At the moment, we have very few of the facts.
It's a feature that only applies to kids under 18 who're in a family group, whose parents turn it on. It warns the kid before letting them see an image which machine-learning thinks is nudity. If the kid is 12 or under, their parents can be notified if they choose to see it. It apparently does no reporting to anyone apart from that parental notification.
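As a reading aid, the decision logic in that paragraph can be written out directly. This is a paraphrase of the public description with made-up function and outcome names, not Apple's code.

```python
# Decision logic of the Messages child-safety feature as described
# above: bypassable warning, parental notification only for kids 12 and
# under, and only if the kid chooses to view after being told.
def handle_flagged_image(age: int, in_family_group: bool,
                         feature_enabled: bool, chooses_to_view: bool) -> str:
    if age >= 18 or not in_family_group or not feature_enabled:
        return "show image"                    # adults and opted-out kids: no warning
    if not chooses_to_view:
        return "image hidden"                  # warning shown, kid declines
    if age <= 12:
        return "show image + notify parents"   # kid was told parents get notified
    return "show image"                        # 13-17: bypassable warning only

assert handle_flagged_image(10, True, True, True) == "show image + notify parents"
assert handle_flagged_image(15, True, True, True) == "show image"
```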
"The system could only match "exact fingerprints" of specific known child sexual abuse images, he said."
This disinfo really angers me. That is the exact opposite of what I've read up till now: people talking about "NeuralHash" and it being able to detect if the image is cropped/edited/"similar". So what is the truth?
He carefully avoided saying that the image itself is the same. The exact fingerprint is the same, yes, but the fingerprint is just a hash of the actual image. Disinformation indeed!
The whole point of the system is that you get a matching hash after mirroring/rotating/distorting/cropping/compressing/transforming/watermarking the source image. The system would be pretty useless if it couldn't match an image after someone, say, added a watermark. And if the algorithm was public, it would be easy to bypass.
The concern, of course, is that all of this many-to-one hashing might also cause another unrelated image to generate the same fingerprint, and thereby throw an innocent person to an unyielding blankface bureaucracy who believes their black-box system without question.
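To see how many-to-one a perceptual hash is, here is a toy "average hash" over an 8x8 grayscale grid. NeuralHash is a neural network, not this, but it is many-to-one in exactly the same sense: a global edit that changes every pixel can leave the fingerprint untouched, and by the same token an unrelated image can land on, or near, a target fingerprint.

```python
# Toy perceptual hash: one bit per cell of an 8x8 grayscale grid,
# set when the cell is at or above the grid's mean brightness.
def average_hash(grid: list) -> int:
    pixels = [p for row in grid for p in row]
    mean = sum(pixels) / len(pixels)
    bits = 0
    for p in pixels:
        bits = (bits << 1) | (p >= mean)
    return bits

def hamming(a: int, b: int) -> int:
    return bin(a ^ b).count("1")  # matchers accept small distances, not just 0

original   = [[10 * (r + c) for c in range(8)] for r in range(8)]
brightened = [[p + 40 for p in row] for row in original]  # every pixel changed

# The edit alters all 64 pixels, yet the "exact fingerprint" is identical:
assert average_hash(original) == average_hash(brightened)
```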
It simply means that they can use whatever the hell kind of method they want to identify specific images, and the scary part is: there IS an error margin built in, because otherwise, as you said, this tech would be pretty useless.
"Find all images and tag them if they look like this fingerprint" doesn't mean that. It means: "Find all images and tag them if they look 80% like this fingerprint".
Which also means that it will allow governments to upload photographs of people's faces and say: "Tag anyone who looks like this".
Worse, this will allow China to track down more Uyghurs, find people based on guides in the form of images that are spread around to stay safe from the Chinese government, and countries like Saudi Arabia can start looking for phones with a significant amount of atheist-related images, tracking down atheists, and killing them. Because that's what that country does.
These perceptual hashes do have a high number of false positives. That's why they employ AI to discard images that don't have certain features from the pool, to minimise the risk. But that method, without an actual human checking manually, is in general a recipe for disaster.
> Apple decided to implement a similar process, but said it would do the image-matching on a user's iPhone or iPad, before it was uploaded to iCloud.
Is this list of hashes already public? If not, it seems like adding it to every iPhone and iPad will make it public. I get the "privacy" angle of doing the checks client-side, but it's a little like verifying your password client-side. I guess they aren't concerned about the bogeymen knowing with certainty which images will escape detection.
> The main purpose of the hash is to ensure that identical and visually similar images result in the same hash, and images that are different from one another result in different hashes. For example, an image that has been slightly cropped or resized should be considered identical to its original and have the same hash. The system generates NeuralHash in two steps. First, an image is passed into a convolutional neural network to generate an N-dimensional, floating-point descriptor. Second, the descriptor is passed through a hashing scheme to convert the N floating-point numbers to M bits. Here, M is much smaller than the number of bits needed to represent the N floating-point numbers. NeuralHash achieves this level of compression and preserves sufficient information about the image so that matches and lookups on image sets are still successful, and the compression meets the storage and transmission requirements
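The second step in that quote is a locality-sensitive hashing step. Apple does not publish the exact scheme, but the textbook version of "N floats in, M bits out, similar inputs collide" is sign-of-random-projection LSH, sketched here with made-up dimensions.

```python
# Random-projection LSH: project the N-dimensional descriptor onto M
# fixed random hyperplanes and keep the signs. Nearby descriptors fall
# on the same side of most hyperplanes, so their M-bit hashes agree.
# N, M, and the scheme itself are assumptions; Apple's is not public.
import random

N, M = 128, 96
random.seed(0)  # the hyperplanes are fixed and ship with the model
PLANES = [[random.gauss(0, 1) for _ in range(N)] for _ in range(M)]

def lsh_bits(descriptor: list) -> int:
    bits = 0
    for plane in PLANES:
        dot = sum(d * w for d, w in zip(descriptor, plane))
        bits = (bits << 1) | (dot >= 0)
    return bits

a = [random.gauss(0, 1) for _ in range(N)]
b = [x + random.gauss(0, 0.01) for x in a]  # e.g. original vs. recompressed
print(bin(lsh_bits(a) ^ lsh_bits(b)).count("1"))  # usually 0-2 of 96 bits differ
```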
Just like a human fingerprint is a lower-dimensional representation of all the atoms in your body that's invariant to how old you are or the exact stance you're in when you're fingerprinted... technically Federighi is being accurate about the "exact fingerprint" part. The thing that has me and others concerned isn't necessarily the hash algorithm per se, but rather: how can Apple promise to the world that the data source for "specific known child sexual abuse images" will actually be just that over time?
There are two attacks of note:
(1) a sophisticated actor compromising the hash-list handoff from NCMEC to Apple to insert hashes of non-CSAM material (something Apple cannot independently verify, as it does not have access to the raw images), which at minimum could enable a denial-of-service attack, e.g. causing journalists' or dissidents' accounts to be frozen temporarily by Apple's systems pending appeal
(2) Apple no longer being able to have a "we don't think we can do this technically due to our encryption" leg to stand on when asked by foreign governments "hey we have a list of hashes, just create a CSAM-like system for us"
That Apple must have considered these possibilities and built this system anyways is a tremendously significant breach of trust.
That "exact fingerprint" is, in my opinion, intentionally confusing.
This DaringFireball[0] article states the goal of the system is to "generate the same fingerprint identifier if the same image is cropped, resized, or even changed from color to grayscale."
So while the fingerprint may be "exact", it's still capable of detecting images which have been altered in some way.
An exact match of a perceptual hash is basically deliberately misleading. The entire point of a perceptual hash is that there are an almost unlimited number of images which it will match "exactly."
But hey, I'm just one of the screeching voices of the minority.
The truth is they're rephrasing what was already known. They're going to match fingerprints of pictures against a database. Every picture. This was widely reported. What confusion they're referring to, I don't know, because they're saying exactly what has been reported.
The simple summary is: NeuralHash is not a cryptographic hash function. It's a private neural network trained on some images. We have no guarantees of its difficulty to reverse, find collisions for, etc. The naming of it as a 'hash' has confused people (John Gruber's post comes to mind) into thinking this is a cryptographic hash. It simply is not.
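The distinction is easy to demonstrate. A cryptographic hash is designed so that any change to the input scrambles the output completely (the avalanche effect), which is precisely the property a perceptual hash must not have:

```python
import hashlib

# Cryptographic hash: a one-character edit changes the digest entirely.
print(hashlib.sha256(b"image bytes").hexdigest())
print(hashlib.sha256(b"image byteS").hexdigest())
# A perceptual hash like NeuralHash is built for the opposite behavior:
# a resized or recompressed image should map to the SAME digest, which
# is why the collision-resistance arguments for SHA-2 do not carry over.
```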
Just wait till you download a meme image to upload to your reaction-meme folder on iCloud, and it turns out some troll kept the background of a CSAM image and edited meme text over the bits that might have made you aware of its origins. Will that match?
Who cares? The NCMEC database is certainly full of unreviewed material, given that even their employees can't automatically access it. For any dystopian state, the goal is to have as many false positives as possible in the NCMEC database, to be able to legitimately have your photos uploaded to their headquarters.
Does it make a difference? My iPhone shouldn’t do anything to or with my photos unless and until I direct it to. Scanning, hashing, whatevering — Apple doesn’t get to decide to do any of it. I do. And only I do.
How so? It would require passing the threshold to get human review, so actual material + false flags > threshold. That should probably result in the person getting in trouble. The case of false flags alone > threshold should not result in any trouble, since it would then go through human review.
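For a sense of scale on that threshold argument, here is a back-of-envelope tail calculation. The per-image false-positive rates are made-up numbers, and the independence assumption is generous (perceptual-hash false positives may cluster, e.g. many frames of one video), so treat this as the shape of the argument, not Apple's actual figures.

```python
# P(an innocent library of n photos trips >= k flags), modelling flags
# as independent with per-image false-positive rate p (Poisson approx).
from math import exp, factorial

def tail(n: int, k: int, p: float, terms: int = 60) -> float:
    lam = n * p
    return sum(exp(-lam) * lam**i / factorial(i) for i in range(k, k + terms))

print(tail(20_000, 30, 1e-6))  # ~4e-84: astronomically small at this rate
print(tail(20_000, 30, 1e-3))  # ~2%: the same threshold with a weaker hash
```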
WTF is he talking about? There is no “confusion” they are scanning your phone for data they have decided is bad. Yes today it is allegedly CP, tomorrow it is anything.
“The system could only match "exact fingerprints" of specific known child sexual abuse images, he said.”
Or like whatever they, the US govt, or any govt where they want to make money (such as China) wants. Is anyone auditing the blacklist? Is it publicly reviewable? (Since it contains CP of course not)
And when a government wants to scan for “illegal” images, they will just fall back on the argument that it’s the law there. It’s a terribly slippery slope.
What's worse than child pornography in, say, Saudi Arabia? Atheism is. They can force Apple to tag accounts that have images that are popular in atheist circles (memes, information, etc.) and track these people down. The penalty for that in Saudi Arabia is death.
China can start finding Uyghurs based on the images they tend to share. If we're unlucky (as a world), they can even start searching for particular individuals.
"Save the children" is just the classic political ploy to get a ruling through that's just a precursor for evil things to come.
I don't see how this is any different from what Apple could already have been forced to do. If the argument is that they're going to knuckle under to an abusive request involving this system, then they'd presumably have done so under the prior status quo which was no more secure.
They already were storing the photos unencrypted (or with keys available, at least) on their servers, so any government that was able to push them to add a hash to this scanning system could have gotten them to scan for something in iCloud.
China, in particular, could definitely already be doing that, since China made Apple host all iCloud data for Chinese users on servers inside China that're operated by a Chinese company. See: https://support.apple.com/en-us/HT208351
It's worth bearing in mind that the human review step does mean that a government can't just slip stuff in without securing Apple's cooperation (including training their review staff about all the political content they have to look for). Otherwise the reviewers would presumably just go "huh, that's weird, this Winnie the Pooh meme definitely isn't child porn" and move on.
Can a government secure Apple's cooperation in that? I have no idea. But it does make a useful subversion of the hash database a more complicated thing to accomplish.
In some ways I think human review is even creepier. I don’t want an algorithm looking at my private photos, but I _definitely_ don’t want some rando “reviewer” looking at them! But I guess it all comes down to the same thing: I don’t want anyone looking at my photos unless I’ve deliberately shared my photos with them.
It’s been a while since this was discussed, but aren't there laws that give government special agencies the power to force private entities to cooperate, with potentially a ban on officially recognizing that the request ever happened in the first place?
And if memory serves well, it doesn’t need to be Apple as a whole cooperating; a single employee with the right access could be enough.
Unless some exec loses their job over this, this entire sequence of events was already playbooked by Apple. They knew to wrap the feature with CSAM to hopefully quell the protests, and also to add two features at the same time, so they could backpedal in case pushback was strong, and then they could blame “misunderstanding”. Even though they are being purposefully obtuse about the “misunderstanding” because there is none.
It’s a perfectly planned PR response but no one except the biggest sheep is buying it.
The other feature they're packaging with this (nudity warnings for children/teenagers) should be relatively uncontroversial. It seems well designed and respects the user's privacy: It shows a bypassable warning on the device, only sends a warning to parents for children up to 12 years old, and only if the child chooses to view it, and only after disclosing that the parents will be notified. I don't think there is much criticism they'd catch for that, no protests to quell.
On the other hand, the proposal that they're (rightfully) under fire for now is something that they can't easily back out of (they will immediately be accused of supporting pedophiles), and it's basically a "do or don't" proposal, not something that they can partially back out of. The press is also incredibly damaging to the "iPhones respect your privacy" mantra that's at the core of their current PR campaign.
I don't think they expected this level of pushback.
> the proposal that they're (rightfully) under fire for now is something that they can't easily back out of
They could just say “Following the unexpected pushback, we will scan iCloud content on server like all other major cloud providers. Unfortunately this also forces us to shelve plans for end to end encryption for photos”.
>The other feature they're packaging with this (nudity warnings for children/teenagers) should be relatively uncontroversial.
To me that's the worst of the two. From what I understand, it uses AI trained to detect nude photos. This is much more likely to produce false positives than a hash compare.
Not only is it less accurate by nature, it's a backdoor that can be later used to scan the entire device for whatever photos it's trained to find and report to whoever it's coded to report to.
The issue with things like this is that it's often a tradeoff between making the public happy and making the government happy. If the initiative is partisan, it might make sense to make the public happy. If it's bipartisan, you make the government happy, and if you're lucky the government/political complex will eventually alter public opinion until you're not really fighting the public anymore.
>The issue with things like this is that it's often a tradeoff between making the public happy and making the government happy.
The company should only be concerned with following the law, not earning brownie points for extralegal behavior. Making the government happy shouldn't be a thing in a country ruled by law.
They announced the whole thing back on Monday, though. If they were trying to hide it, the initial announcement would have been buried. Burying the "huh, we didn't expect this backlash" comment makes no sense.
So many people in this thread are convinced this whole thing is intentionally malicious, that Apple is doing this because they want to enable government spying, and they are intentionally using child sex abuse as a way of trying to make it palatable in a PR battle.
I don't think that is the most likely situation at all.
Apple has been, as part of their privacy initiatives, trying to do as much as possible on the device. That's how they have been defining privacy to themselves internally. Then someone said "can we do something about CSAM" and they came up with a pretty good technical solution that operates on device and therefore, to them, seemed like it would not be particularly controversial. They've been talking about doing ML and photo scanning object recognition on device for years, they're moving much of Siri to on-device in iOS 15, all as part of their privacy initiatives.
It seems to have backfired in that people actually seem to prefer scanning in the cloud to on-device scanning for things like this, because it feels less like a violation of your ownership of the device.
I think the security arguments about how this system can be misused are compelling and it's a fine position to be strongly against this, but I don't know that there's good justification that Apple has some ulterior motive and is faking caring about privacy. I think they were operating with a particular set of assumptions about how people view privacy that turned out to be wrong and they are genuinely surprised by the blowback.
That's one of the few good aspects of a legendary fuck-up like this: you learn about people defending it.
People defending CSAM should go to hell, fast. But are we already done picking all the low-hanging fruit? Did we stop Jimmy Savile? Did we put all the clerical actors behind bars? Did we extinguish the child porn network behind Marc Dutroux (https://en.wikipedia.org/wiki/Marc_Dutroux)?
And even if we did, would that be enough of an excuse to implicitly accuse everyone? My spouse's family (well-off, so using iDevices) took photos of their young kids playing, partially naked, at the sea. They are now frightened that their photos could be stolen by someone and marketed as child porn.
Doesn't matter how technically well done this is. I do not want my device poking through my files and sending them to AI software to make a decision. It makes me uncomfortable.
This is malicious. I do not want them to touch my photos or anything personal. I paid for this device, now it is doing things against my will.
I don't think Apple has insidious reasons, but their implementation can come across as a betrayal.
Unlike an Android phone, an iPhone will know beforehand that you are uploading a potentially problematic image. Yet it will not alert you or block you from doing so.
> because it feels less like a violation of your ownership of the device.
It doesn't just feel like a violation, it is one. This opens doors for a number of future abuses of power, and abuses of rights.
The average user has no way of knowing what's in the database of hashes being compared against, and we have no way of controlling what goes into those databases. If some party in the US government decided that they didn't like the Quran, and put a million pictures of it in the database, and served the Apple reviewers with a gag order or some other nonsense, we'd have an '80s dystopian blockbuster as everyday life in our country.
My device, my property, my rules. Apple doesn't get to do a stop and frisk, even if it's "for the good of the children".
"My device, my property, my rules" is pretty rich talking about a platform where you can’t run your own apps without Apple’s approval, you have no control over how the OS behaves, and there is built-in DRM.
Your phone has been Apple’s device since the moment you bought it and turned it on.
Actually, I'm with you in that I think Apple's motives are different than people think, but I think yours are incorrect as well.
> Apple has been, as part of their privacy initiatives, trying to do as much as possible on the device. That's how they have been defining privacy to themselves internally. Then someone said "can we do something about CSAM" [...]
As a separate issue, many people in the company certainly do care about privacy, and that may go all the way up to Tim Cook. Who knows.
What is much more important to Apple the company, though, is making money. Governments have been hounding them for years about letting them spy on users. And they have painted themselves into a bit of a corner, by having the most secure phones.
Now the government comes to them with an offer they can't refuse, cloaked in child porn motivations. I believe many (most?) of the people involved are sincere. It's clear they have tried to make the least invasive system that still does what the government wants.
But that's not good enough in the crazy connected cyber-world we find ourselves in today.
Apple doesn't have a motivation to do this themselves. But they will do what they calculate they need to do.
Apple has a huge potential profit motive. Once they roll this out for US users, they can sell the exact same capabilities to authoritarian regimes for detecting subversive images, images of warcrimes, etc.
China would happily pay billions of dollars per year for this capability.
> because it feels less like a violation of your ownership of the device.
Agreed that this was not intended to be malicious. Apple has always been pretty clear that they should decide what happens on their devices. This sort of on-device scanning that doesn't serve the user is just the latest example of it, and one that people who would never be affected by the code-signing restrictions can relate to.
It may not help fight child abuse much if it does not detect newly taken child pornography. Fast-forward a couple of years and machine learning will be scanning all the photos you take and view. A little further forward and it will be forbidden to take photos of many things, and there will be no-photo places. Apple and governments will dictate what is right and wrong to photograph and view. They are doing it for the safety of everyone, right?
I doubt most people think this is malicious, but Apple did preach "what happens on your iPhone stays on your iPhone", which aged terribly given this feature.
Apple has also benefited from the perception that Google (their smartphone rival) spies on people by collecting location information, whereas I am sure they also crowdsource iPhone location to improve their Maps and road-traffic data.
I agree with you. I’m all for privacy but have no issues whatsoever with companies scanning for CSAM. I do not feel this is an invasion of my privacy, because I know how hashing works and I know I do not have CSAM on my device.
Do you know how NeuralHash works? NeuralHash is not a cryptographic hash. Unless you're an Apple employee, you can't - the model is private and not inspectable.
I think some people here have similar feelings, but as other replies point out, this particular hash is not cryptographic and we have to trust Apple’s word for it that false positives are extremely rare.
Also worth mentioning that the original corpus of data used to build the hashes isn’t inspectable, so we don’t know how much garbage exists on the input side.
A recent blog post suggests that the corpus of CSAM isn’t even very comprehensive, so the value of this system is questionable.
Exactly! There has been so much FUD, conspiracy theory, and fear-mongering.
None of the usual anti-regulation apologists have pointed out that Apple shouldn’t be forced to download and host potentially illegal material just to determine whether or not it’s actually illegal.
This whole program is their intelligent solution to protecting as much user privacy as possible while still being compliant with the law. On-device hashing is actually pro privacy compared to in-the-cloud scanning (which all other cloud hosting providers are also required to do).
I believe I understand the distinction between the two features perfectly well.
Personally, I don't have a problem with the parental control feature in Messages (it's pretty clear what it does and the user can decide whether to use it or not, or the parent for younger kids -- that's exactly as it should be).
I do have a problem with the feature where they scan images on my phone to match against a database of images. To be clear, here's a list of things that don't make me feel better about it: that it scans only a certain subset of images on my phone; that the technology parts of it are probably good; that NCMEC maintains the database of images (is there any particular reason to believe the database is near perfect and has all appropriate quality controls in place to ensure it remains so?)
There are several issues about this that Apple does not address. A big one for me is the indignity and humiliation of them force-scanning my phone for CP.
Here's a hypothetical for Craig Federighi and Tim Cook to consider:
Suppose we know there are people who smuggle drugs on airplanes, on their person, for some terrible purpose, like addicting children or poisoning people. If I run an airport, I could say: to stop this, I'm going to subject everyone who flies out of my airport to a body-cavity search. Tim and Craig, are you OK with this? If I can say, "Don't worry! We have created these great robots that ensure the body-cavity searches are gentle and the minimum needed to check for illegal drugs," does that really change anything or make it more acceptable to you?
> There are several issues about this that Apple does not address. A big one for me is the indignity and humiliation of them force scanning my phone for CP.
Are you okay (conceptually, assuming a perfect database and hashing function) with them scanning pictures uploaded to iCloud for this material if the scanning happens on their servers? Or is this a complete "these pictures should never be scanned, regardless of where it happens" position?
If the former, I personally don't feel a distinction between "a photo is scanned immediately before upload" and "a photo is scanned immediately after upload" is very meaningful. I'd be more concerned if there wasn't a clear way to opt-out. I acknowledge that there's room to disagree on this, and maybe I'm unusual in drawing my boundaries where I do.
If the latter... I think that ship has sailed. Near as I can tell, all the major cloud platforms are scanning for this stuff post-upload, and Apple was a bit of an outlier in how little they were doing before this.
If Congress has to obey the Constitution, then they cannot create an organization which they control, and then push for that organization to execute functions they cannot perform by getting in cahoots with industry.
Apple:
- Isn't going to be remote work friendly.
- Shut down internal polls on compensation.
- Bows to the FBI, CIA, FSB, CCP.
- Treats its customers as criminals.
- Treats its employees as criminals.
- (Spies on both!)
- Doesn't let customers repair their devices or use them as they'd like.
- Closes up (not opens up) the world of computing. Great synergy with the spy dragnet.
Take your time and talent elsewhere. This bloated whale is bad for the world. There are a lot of good jobs out there that pay well and help society.
Industry is learning from omnibus bills
Five years from now it'll be: "you're wrong"
HN will as usual agree and take pride in being wrong.
We drank the Apple Privacy Kool-aid, and now we are holding them to it.
This is totally a battle worth fighting!
Are you sure? My local Apple store is just as crowded as it was two weeks ago.
Apple promising not to use the scanner is a weak promise they know they can’t keep (NSLs)
Remember how Apple had zero accountability:
https://www.newscientist.com/article/dn26133-jennifer-lawren...
https://arstechnica.com/information-technology/2014/09/what-...
Ricky Gervais' advice made sense. Wonder why he deleted it.
Me (Last month): "Apple is taking privacy very seriously. I'm going to vote with my dollars and switch from Android."
Me (This month): "..."
I sold every share the day they announced this.
And that's the crux of the problem.
Limiting the scanner to iCloud is a policy decision one NSL away from changing.
This is really, really bad for their brand.
Pressure from governments.
Look at the stock. It barely moved (up).
Expect everyone else to follow suit and not apologize for it because "Apple is doing it".
Confusion is the best-case scenario for Apple because people will tune it out. If they had released just the on-device spying, public outcry and backlash would have been laser targeted on a single issue.
Apple gave its legendary fan base a fair few facts to latch onto. The first is that it's a measure against child abuse, which can be used to equate detractors with pedophile apologists or simply pedophiles (these days, more likely the latter directly); thankfully this take seems cliché enough not to have become dominant. Then there's the fact that, right now, it only runs in situations where the data would currently be unencrypted anyway. This is extremely interesting, because if they start using E2EE for these things in the future, it will basically be uncharted territory; what they're doing now merely lines up the capability without exercising it. Not to mention, features like these have a tendency to expand in scope over the long term. I wouldn't call it a slippery slope; it's more like an Overton window for how much of a surveillance state people are OK with. I'd say Americans on the whole are actually pretty strongly averse to this, despite everything, and it seems like this was too creepy for many people. And then there's definitely the confusion, because of course Apple isn't doing anything wrong; everyone is just confusing what these features do with their long-term implications.
Here's where I think it backfired: because it runs on the device, it psychologically feels like your own phone doesn't trust you. And because of that, using anti-CSAM measures as a starting point was a terrible misfire, because to users it feels like your phone is constantly assuming you could be a pedophile and need to be monitored. It feels much more impersonal when a cloud service does it off in the distance for all content.
In practice, the current short-term outcome doesn't matter so much as the precedent of what can be done with features like this. And it feels like pure hypocrisy coming from a company whose CEO once claimed they couldn't build surveillance features into their phones because of the pressure there would be to abuse them. That was only around 5 years ago. Did something change?
I feel like it is really important to Apple that their employees and fans believe they are a principled company that makes tough decisions without regard for "haters" and luddites. In reality, though, I think it's only fair to recognize that this is just too idealistic. Between this, the situation with iCloud in China, and the contrast with their earlier fight against the U.S. government, one can only conclude that Apple is, after all, just another company, though one whose direction and public relations resonated with a lot of consumers.
A PR misfire from Apple of this size is rare, but I think what it means for Apple is big, as it shakes the faith of even some of the company's most loyal. For Google, this kind of misfire would've just been another Tuesday. And I gotta say, between this and Safari, I'm definitely not planning on my next phone being from Cupertino.
You mean the country that doesn't give a damn about privacy at all, because all those fancy corporations keep giving them toys to play with? You know, the companies whose business model is feeding on the world's data. The country where people put a camera on their front door that films the neighbourhood 24/7? The country where people fill their homes with useless gadgets that double as listening devices?
You have to be joking, or the scale you're applying here is useless.
This whole thing will blow over fast, and there won't be much damage on the sales side. Apple is a luxury brand. People don't buy it for privacy. Most customers probably won't even understand the problem here.
The only thing we might be rid of are the songs of praise in technical circles.
This is wildly disingenuous.
Apple is putting code on the device which generates a hash, compares hashes, and creates a token out of that comparison. That is 100% of what happens on the device.
Once the images and tokens are uploaded to iCloud Photos, if 30+ of those tokens show a match, Apple's review team is alerted and gains access to only those 30+ photos. They will manually review them, and if they then discover that you are indeed hoarding known child pornography, they report you to the authorities.
Thus, it would be more accurate to say that Apple is putting code on your device which can detect known child-pornography images.
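Mechanically, the server-side half of what you describe looks something like this toy sketch (again illustrative: in the real protocol each per-photo result is a ciphertext that only becomes decryptable once roughly 30 of them match):

    MATCH_THRESHOLD = 30

    def review_account(match_results: list[bool]) -> str:
        matches = sum(match_results)
        if matches < MATCH_THRESHOLD:
            # Below the threshold, Apple learns nothing about any photo.
            return "no action"
        # Only the matching photos become visible to the human review
        # team, which decides whether to report the account.
        return f"manual review of {matches} matching photos"

    print(review_account([True] * 31 + [False] * 500))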
> And it feels like pure hypocrisy coming from a company whose CEO once claimed they couldn’t build surveillance features into their phones because of pressures for it to be abused.
This isn't a surveillance feature. If you don't like it, disable iCloud Photos. Yes, it could theoretically be abused if Apple went to the dark side, but we'll have to see what this 'auditability' he was talking about amounts to.
Honestly, with all of the hoops that Apple has jumped through to promote privacy, and to call out people who are violating privacy, it feels as though we should give Apple the benefit of the doubt at least until we have all the facts. At the moment, we have very few of the facts.
Check the section "WHAT IS APPLE DOING WITH MESSAGES?" in this article: https://www.theverge.com/2021/8/10/22613225/apple-csam-scann...
This disinfo really angers me. That is the exact opposite of what I've read up till now. People are talking about "NeuralHash" and its being able to detect if the image is cropped/edited/"similar". So what is the truth?
The whole point of the system is that you get a matching hash after mirroring/rotating/distorting/cropping/compressing/transforming/watermarking the source image. The system would be pretty useless if it couldn't match an image after someone, say, added a watermark. And if the algorithm was public, it would be easy to bypass.
The concern, of course, is that all of this many-to-one hashing might also cause another unrelated image to generate the same fingerprint, and thereby throw an innocent person to an unyielding blankface bureaucracy who believes their black-box system without question.
"Find all images and tag them if they look like this fingerprint" doesn't mean that. It means: "Find all images and tag them if they look 80% like this fingerprint".
Which also means that it will allow governments to upload photographs of people's faces and say: "Tag anyone who looks like this".
Worse, this will allow China to track down more Uyghurs, and to find people who share the image guides that circulate to help others stay safe from the Chinese government. And countries like Saudi Arabia can start looking for phones with a significant number of atheism-related images, tracking down atheists and killing them. Because that's what that country does.
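For what it's worth, threshold matching on perceptual hashes is usually framed as a maximum Hamming distance rather than a percentage. Here is a toy version (the 80% figure is from this comment, not from Apple, and whether the deployed system does an exact lookup on the NeuralHash or a distance comparison isn't public in that much detail):

    def hamming_distance(a: int, b: int) -> int:
        return bin(a ^ b).count("1")

    def is_match(h1: int, h2: int, n_bits: int = 96,
                 min_similarity: float = 0.80) -> bool:
        # "80% similar" means at most 20% of the hash bits may differ.
        max_flips = int(n_bits * (1 - min_similarity))
        return hamming_distance(h1, h2) <= max_flips

    assert is_match(0, (1 << 10) - 1)      # 10 of 96 bits flipped: match
    assert not is_match(0, (1 << 25) - 1)  # 25 of 96 bits flipped: no match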
Is this list of hashes already public? If not, it seems like adding it to every iPhone and iPad will make it public. I get the "privacy" angle of doing the checks client-side, but it's a little like verifying your password client-side. I guess they aren't concerned about the bogeymen knowing with certainty which images will escape detection.
> The main purpose of the hash is to ensure that identical and visually similar images result in the same hash, and images that are different from one another result in different hashes. For example, an image that has been slightly cropped or resized should be considered identical to its original and have the same hash. The system generates NeuralHash in two steps. First, an image is passed into a convolutional neural network to generate an N-dimensional, floating-point descriptor. Second, the descriptor is passed through a hashing scheme to convert the N floating-point numbers to M bits. Here, M is much smaller than the number of bits needed to represent the N floating-point numbers. NeuralHash achieves this level of compression and preserves sufficient information about the image so that matches and lookups on image sets are still successful, and the compression meets the storage and transmission requirements
Just like a human fingerprint is a lower-dimensional representation of all the atoms in your body that's invariant to how old you are or the exact stance you're in when you're fingerprinted... technically Federighi is being accurate about the "exact fingerprint" part. The thing that has me and others concerned isn't necessarily the hash algorithm per se, but rather: how can Apple promise to the world that the data source for "specific known child sexual abuse images" will actually be just that over time? (A toy sketch of the quoted two-step pipeline follows at the end of this comment.)
There are two attacks of note:
(1) a sophisticated actor compromising the hash-list handoff from NCMEC to Apple to insert hashes of non-CSAM material, which Apple cannot independently verify since it does not have access to the raw images; at minimum, this enables a denial-of-service attack, e.g. causing journalists' or dissidents' accounts to be frozen temporarily by Apple's systems pending appeal
(2) Apple no longer having a "we don't think we can do this technically, due to our encryption" leg to stand on when foreign governments ask, "hey, we have a list of hashes, just create a CSAM-like system for us"
That Apple must have considered these possibilities and built this system anyways is a tremendously significant breach of trust.
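To make the quoted two-step pipeline concrete, here is a toy version of it: a stand-in for the convolutional network producing N floats, followed by a random-hyperplane locality-sensitive hash down to M bits. The network, the projection, and the sizes below are made up for illustration; only the two-step shape comes from Apple's description.

    import numpy as np

    rng = np.random.default_rng(0)
    N, M = 128, 96                             # descriptor floats -> hash bits
    HYPERPLANES = rng.standard_normal((M, N))  # fixed random projection

    def descriptor(image: np.ndarray) -> np.ndarray:
        # Stand-in for the CNN: anything mapping visually similar images
        # to nearby N-dimensional float vectors would do here.
        return np.resize(image.astype(float).ravel(), N)

    def toy_neural_hash(image: np.ndarray) -> int:
        bits = (HYPERPLANES @ descriptor(image)) > 0  # floats -> M bits
        return int("".join("1" if b else "0" for b in bits), 2)

    img = rng.integers(0, 256, size=(64, 64))
    print(f"{toy_neural_hash(img):024x}")      # 96 bits as 24 hex digits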
http://www.fmwconcepts.com/misc_tests/perceptual_hash_test_r...
This DaringFireball[0] article states the goal of the system is to "generate the same fingerprint identifier if the same image is cropped, resized, or even changed from color to grayscale."
So while the fingerprint may be "exact", it's still capable of detecting images which have been altered in some way.
[0] https://daringfireball.net/2021/08/apple_child_safety_initia...
But hey, I'm just one of the screeching voices of the minority.
This is the confusion: it's only photos being uploaded to iCloud.
Of course, that's just a nasty way to imply that the images match exactly.
“The system could only match "exact fingerprints" of specific known child sexual abuse images, he said.”
Or whatever they, the US government, or any government in whose market they want to make money (such as China) wants. Is anyone auditing the blacklist? Is it publicly reviewable? (Since it contains CP, of course not.)
China can start finding Uyghurs based on the images they tend to share. If we're unlucky (as a world), they can even start searching for particular individuals.
"Save the children" is just the classic political ploy to get a ruling through that's just a precursor for evil things to come.
I'm absolutely disgusted by Apple.
They already were storing the photos unencrypted (or with keys available, at least) on their servers, so any government that was able to push them to add a hash to this scanning system could have gotten them to scan for something in iCloud.
China, in particular, could definitely already be doing that, since China made Apple host all iCloud data for Chinese users on servers inside China that are operated by a Chinese company. See: https://support.apple.com/en-us/HT208351
Can a government secure Apple's cooperation in that? I have no idea. But it does make a useful subversion of the hash database a more complicated thing to accomplish.
And if memory serves, it doesn't need to be Apple as a whole cooperating; a single employee with oversight power could be enough.
Their approval/report rate will make a FISA court judge blush.
It’s a perfectly planned PR response but no one except the biggest sheep is buying it.
The other feature they're packaging with this (nudity warnings for children/teenagers) should be relatively uncontroversial. It seems well designed and respects the user's privacy: it shows a bypassable warning on the device, and it only sends a notification to parents for children up to 12 years old, only if the child chooses to view the image, and only after disclosing to the child that the parents will be notified (a toy sketch of this flow follows at the end of this comment). I don't think there is much criticism they'd catch for that, no protests to quell.
On the other hand, the proposal that they're (rightfully) under fire for now is something they can't easily back out of (they will immediately be accused of supporting pedophiles), and it's basically a "do or don't" proposal, not something they can partially walk back. The press coverage is also incredibly damaging to the "iPhones respect your privacy" mantra that's at the core of their current PR campaign.
I don't think they expected this level of pushback.
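The decision flow of the Messages feature, as publicly described, is simple enough to sketch. This is a toy based solely on Apple's announcement; the prompts, names, and structure below are mine, not any real API:

    PARENT_NOTIFY_MAX_AGE = 12

    def handle_flagged_image(child_age: int) -> None:
        notify_parents = child_age <= PARENT_NOTIFY_MAX_AGE
        prompt = "This image may be sensitive."
        if notify_parents:
            # Disclosure happens before the child decides to view.
            prompt += " If you view it, your parents will be notified."
        if not wants_to_view(prompt):   # the warning is bypassable
            return                      # declined: nothing is sent anywhere
        if notify_parents:
            send_parent_notification()  # only for children 12 and under

    def wants_to_view(prompt: str) -> bool:
        return input(prompt + " View anyway? [y/N] ").strip().lower() == "y"

    def send_parent_notification() -> None:
        print("(parents notified)")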
They could just say: "Following the unexpected pushback, we will scan iCloud content on the server like all other major cloud providers. Unfortunately, this also forces us to shelve plans for end-to-end encryption for photos."
To me, that's the worse of the two. From what I understand, it uses AI trained to detect nude photos. That is much more likely to produce false positives than a hash comparison.
Not only is it less accurate by nature, it's a backdoor that can be later used to scan the entire device for whatever photos it's trained to find and report to whoever it's coded to report to.
The PR show is kind of beside the point.
The company should only be concerned with following the law, not earning brownie points for extralegal behavior. Making the government happy shouldn't be a thing in a country ruled by law.
I don't think that is the most likely situation at all.
Apple has been, as part of their privacy initiatives, trying to do as much as possible on the device. That's how they have been defining privacy to themselves internally. Then someone said "can we do something about CSAM" and they came up with a pretty good technical solution that operates on device and therefore, to them, seemed like it would not be particularly controversial. They've been talking about doing ML and photo scanning object recognition on device for years, they're moving much of Siri to on-device in iOS 15, all as part of their privacy initiatives.
It seems to have backfired in that people actually seem to prefer scanning in the cloud to on-device scanning for things like this, because it feels less like a violation of your ownership of the device.
I think the security arguments about how this system can be misused are compelling and it's a fine position to be strongly against this, but I don't know that there's good justification that Apple has some ulterior motive and is faking caring about privacy. I think they were operating with a particular set of assumptions about how people view privacy that turned out to be wrong and they are genuinely surprised by the blowback.
People defending CSAM should go to hell, fast. But are we already done picking all the low-hanging fruit? Did we stop Jimmy Savile? Did we put all the clerical actors behind bars? Did we extinguish the child-porn network behind Marc Dutroux (https://en.wikipedia.org/wiki/Marc_Dutroux)?
And even if we did, would that be enough of an excuse to implicitly accuse everyone? My spouse's family (well-off, so using iDevices) took photos of their young kids playing, partially naked, at the sea. They are now frightened that their photos could be stolen by someone and marketed as child porn.
So unbelievable.
That sounds bad, someone high up at Apple should do an interview clarifying that that's not what's happening!
This is malicious. I do not want them touching my photos or anything personal. I paid for this device, and now it is doing things against my will.
Simply disable iCloud Photo Library, and nothing gets scanned.
However, I also think this is the poster child for the proverb that the road to hell is paved with good intentions.
Now Apple needs to cancel this misguided initiative and never speak of it again, if they want to salvage some of their reputation.
Unlike an Android phone, the iPhone will know beforehand that you are uploading a potentially problematic image. Yet it will not alert you or block you from doing so.
It doesn't just feel like a violation, it is one. This opens doors for a number of future abuses of power, and abuses of rights.
The average user has no way of knowing what's in the database of hashes being compared against, and we have no way of controlling what goes into those databases. If some party in the US government decided that they didn't like the Quran, put a million pictures of it in the database, and served the Apple reviewers with a gag order or some other nonsense, we'd have an '80s dystopian blockbuster as everyday life in our country.
My device, my property, my rules. Apple doesn't get to do a stop and frisk, even if it's "for the good of the children".
Your phone has been Apple’s device since the moment you bought it and turned it on.
> Apple has been, as part of their privacy initiatives, trying to do as much as possible on the device. That's how they have been defining privacy to themselves internally. Then someone said "can we do something about CSAM" [...]
As a separate issue, many people in the company certainly do care about privacy, and that may go all the way up to Tim Cook. Who knows.
What is much more important to Apple the company, though, is making money. Governments have been hounding them for years about letting them spy on users. And they have painted themselves into a bit of a corner, by having the most secure phones.
Now the government comes to them with an offer they can't refuse, cloaked in child porn motivations. I believe many (most?) of the people involved are sincere. It's clear they have tried to make the least invasive system that still does what the government wants.
But that's not good enough in the crazy connected cyber-world we find ourselves in today.
Apple doesn't have a motivation to do this themselves. But they will do what they calculate they need to do.
China would happily pay billions of dollars per year for this capability.
Agreed that this was not intended to be malicious. Apple has always been pretty clear that they should decide what happens on their devices. This sort of on-device scanning that doesn't serve the user is just the latest example of it, and one that even people who would never be affected by the code-signing restrictions can relate to.
Apple has also benefited from the perception that Google (their smartphone rival) spies on people by collecting location information, whereas I am sure Apple also crowdsources iPhone locations to improve their Maps and road-traffic data.
Also worth mentioning that the original corpus of data used to build the hashes isn’t inspectable, so we don’t know how much garbage exists on the input side.
A recent blog post suggests that the corpus of CSAM isn’t even very comprehensive, so the value of this system is questionable.
That's an oxymoron.
None of the usual anti-regulation apologists have pointed out that Apple shouldn't be forced to download and host potentially illegal material just to determine whether or not it's actually illegal.
This whole program is their intelligent solution to protecting as much user privacy as possible while still being compliant with the law. On-device hashing is actually pro privacy compared to in-the-cloud scanning (which all other cloud hosting providers are also required to do).
Personally, I don't have a problem with the parental control feature in Messages (it's pretty clear what it does and the user can decide whether to use it or not, or the parent for younger kids -- that's exactly as it should be).
I do have a problem with the feature where they scan images on my phone to match against a database of images. To be clear, here's a list of things that don't make me feel better about it: that it scans only a certain subset of images on my phone; that the technology parts of it are probably good; that NCMEC maintains the database of images (is there any particular reason to believe the database is near perfect and has all appropriate quality controls in place to ensure it remains so?)
There are several issues here that Apple does not address. A big one for me is the indignity and humiliation of them force-scanning my phone for CP.
Here's a hypothetical for Craig Federighi and Tim Cook to consider:
Suppose we know there are people who smuggle drugs on airplanes, on their person, for some terrible purpose, like addicting children or poisoning people. If I ran an airport, I could say: to stop this, I'm going to subject everyone who flies out of my airport to a body-cavity search. Tim, Craig, are you OK with this? If I can say, "Don't worry! We have created these great robots that ensure the body-cavity searches are gentle and the minimum needed to check for illegal drugs," does that really change anything or make it more acceptable to you?
Are you okay (conceptually, assuming a perfect database and hashing function) with them scanning pictures uploaded to iCloud for this material if the scanning happens on their servers? Or is this a complete "these pictures should never be scanned, regardless of where it happens" position?
If the former, I personally don't feel a distinction between "a photo is scanned immediately before upload" and "a photo is scanned immediately after upload" is very meaningful. I'd be more concerned if there wasn't a clear way to opt-out. I acknowledge that there's room to disagree on this, and maybe I'm unusual in drawing my boundaries where I do.
If the latter... I think that ship has sailed. Near as I can tell, all the major cloud platforms are scanning for this stuff post-upload, and Apple was a bit of an outlier in how little they were doing before this.
If Congress has to obey the Constitution, then it cannot create an organization it controls and then, by getting in cahoots with industry, push that organization to execute functions Congress itself cannot constitutionally perform.