The last time this was proposed, there was quite a bit of concern about how uploads weren't viewable by anyone and could/would be abused by 3rd parties as an anonymous way to take down content without any recourse: to go after a rival OnlyFans producer, to cause headaches for a FB friend, etc.
It looks like this system, as opposed to NCMEC, is opt-in, so maybe some of these issues have been resolved?
It seems you could solve this by allowing anonymous, privacy-preserving blocking of images only if that particular image is nowhere on the internet already.
If the image is already published, then a review process is necessary to verify the identity of the person requesting removal.
>To use Take It Down, anyone—minors, parents, concerned parties, or adults concerned about their own underage images being posted online—can anonymously access the platform on NCMEC’s site. Take It Down will then generate a hash that represents images or videos reported by users as sexualizing minors, including images with nudity, partial nudity, or sexualized poses. From there, any online platform that has partnered with the initiative will automatically block uploads or remove content matching that hash.
This sounds impressive if you don't know how file hashing works. If a malicious actor wants to get around this, all they would have to do is change a single pixel and/or re-export as a different format.
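For what it's worth, this is easy to demonstrate with a plain cryptographic file hash (toy bytes standing in for a real image file):

```python
import hashlib

# Stand-in bytes for an image file (assumption: any real file behaves the same).
original = b"\x89PNG\r\n\x1a\nfake image payload"
tweaked = bytearray(original)
tweaked[-1] ^= 0x01  # flip one bit, i.e. "change a single pixel" or re-export

h1 = hashlib.sha256(original).hexdigest()
h2 = hashlib.sha256(bytes(tweaked)).hexdigest()
print(h1 == h2)  # False: the two digests are completely unrelated
```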
Not necessarily. There are much newer technologies than simple file hashes now: content-aware (perceptual) image hashing algorithms that are highly resistant to manipulation techniques such as re-encoding, resizing, and even rotation or blur. These algorithms are of course tunable; the more you want to catch, the higher the false-positive rate. But you can already do much better today than a simple file hash.
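As a rough sketch of the idea (a toy average-hash, not PhotoDNA or any production algorithm): shrink the image to a tiny grayscale grid, record for each cell whether it is brighter than the mean, and judge matches by Hamming distance instead of exact equality, so small edits only flip a few bits:

```python
def average_hash(pixels):
    """pixels: flat list of grayscale values for a tiny (e.g. 8x8 or 4x4) image."""
    mean = sum(pixels) / len(pixels)
    return [1 if p > mean else 0 for p in pixels]

def hamming(a, b):
    """Number of differing bits between two hashes."""
    return sum(x != y for x, y in zip(a, b))

# Toy 4x4 "image" and a lightly edited copy (one pixel brightened slightly,
# akin to re-encoding noise or a single-pixel tweak).
img = [10, 200, 30, 220,
       15, 210, 25, 230,
       12, 205, 35, 225,
       18, 215, 28, 235]
edited = list(img)
edited[0] += 5

h1, h2 = average_hash(img), average_hash(edited)
print(hamming(h1, h2))  # 0: the perceptual hashes still match despite the edit
```

A file hash of the two would differ completely; the perceptual hashes are identical (or within a small threshold), which is the whole point.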
I think it's definitely more useful, especially long term, in a more controlled system where the government agency handling the actual CSAM simply submits hashes of the content for the company (Microsoft, Apple, or whoever else) to add to their database, which they can then use to flag/review suspicious content.
However, the system described in the article is open to the public, and simultaneously privacy/anonymity oriented. I see this as a double-edged sword. While it does protect the identity of legitimate users, it also opens the system up to nefarious actors flooding it with images/videos taken from legitimate content creators on OnlyFans or other sites, potentially getting those creators' content flagged/removed. Even if this simply triggers a manual review, you could feasibly spam the system with so many reports that it grinds to a halt.
Your own links talk about how perceptual hashing hasn't been proven to be robust enough for this use case, and how it introduces a new problem: hash collisions, such that you can generate images that hash to the same perceptual hash as an illicit image.
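To illustrate the collision point with the same kind of toy average-hash (real systems are more elaborate, and real attacks craft an image that matches a specific target hash, but the principle is the same: many different images can share one hash):

```python
def average_hash(pixels):
    """Toy average-hash: one bit per pixel, set if brighter than the mean."""
    mean = sum(pixels) / len(pixels)
    return tuple(1 if p > mean else 0 for p in pixels)

# Two visually different images -- one dark, one light -- that collide,
# because in a uniform image no pixel exceeds the mean.
dark_gray = [40] * 16
light_gray = [200] * 16

print(average_hash(dark_gray) == average_hash(light_gray))  # True: a collision
```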
Only semi-related, but can someone explain to me how the hashing process works?
Of course I get the process of hashing passwords and such, but how do you hash a photo so that the hash isn’t invalidated as soon as I convert it to a different format or trim a single pixel off of it or add a scrolling banner that “this video was uploaded to…”?
>It looks like this system, as opposed to NCMEC, is opt-in, so maybe some of these issues have been resolved?
>If the image is already published, then a review process is necessary to verify the identity of the person requesting removal.
This system will just be abused until it's taken down.
You really just need actual enforcement of clear criminal cases to seriously manage the problem.
>This sounds impressive if you don't know how file hashing works. If a malicious actor wants to get around this, all they would have to do is change a single pixel and/or re-export as a different format.
Look at https://www.microsoft.com/en-us/photodna and https://openbase.com/python/ImageHash/documentation
Edit: here's a source https://www.anishathalye.com/2021/12/20/inverting-photodna/
>However, the system described in the article is open to the public, and simultaneously privacy/anonymity oriented. I see this as a double-edged sword. While it does protect the identity of legitimate users, it also opens the system up to nefarious actors flooding it with images/videos taken from legitimate content creators on OnlyFans or other sites, potentially getting those creators' content flagged/removed. Even if this simply triggers a manual review, you could feasibly spam the system with so many reports that it grinds to a halt.
This sounds plausible if you don't know how perceptual hashing works:
https://en.wikipedia.org/wiki/Perceptual_hashing
https://arxiv.org/pdf/2111.06628.pdf
But why? Is there any need to log the file name?