Apple tried and made good progress. They had bugs that could have been resolved, but your insistence that it couldn't be done caused too much of an uproar.
You can have a system that flags illicit content with some confidence level and have a human review that content. You can require that any model or heuristic used be publicly logged and audited. You can anonymously flag that content to reviewers, and when it is deemed actually illicit by a human, the hash or some other signature of the content can be published globally to reveal the devices and owners of those devices. You can presume innocence (such as a parent taking a pic of their kids bathing) and question suspects discreetly without an arrest. You can require cops to build multiple sufficient points of independently corroborated evidence before arresting people.
These are just some of the things that are possible that I came up with in the last minute of typing this post. Better, more thoroughly considered solutions can be developed if the problem is taken seriously and funded well.
However, your response of "Yes." is materially false, and lawmakers will catch on to that and discredit anything the privacy community has been advocating. Even simple heuristics that don't use ML models can have a higher "true positive" rate for identifying criminal activity than eyewitness testimony, which is used to convict people of serious crimes. And I suspect you meant security, not privacy, because as I mentioned, for privacy, humans can review before a decision is made to search for the confirmed content across devices.
I understand that you seem to think that adding systems like this will placate governments around the world but that is not the case. We have already conceded far more than we ever should have to government surveillance for a false sense of security.
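The flag-then-review-then-publish pipeline described above could be sketched roughly as follows. This is an illustrative assumption, not a real system: `ReviewQueue`, the 0.9 confidence threshold, and the use of SHA-256 as the content signature are all placeholders (a deployed system would use a perceptual hash and a publicly audited model, as the comment proposes).

```python
import hashlib
from dataclasses import dataclass, field

@dataclass
class ReviewQueue:
    """Hypothetical sketch: model flags content, a human confirms,
    and only confirmed content has its signature published."""
    pending: list = field(default_factory=list)   # anonymized items awaiting human review
    published: set = field(default_factory=set)   # signatures confirmed illicit by a reviewer

    def flag(self, content: bytes, confidence: float, threshold: float = 0.9) -> None:
        # Only queue content for anonymous human review above a confidence threshold.
        if confidence >= threshold:
            self.pending.append(content)

    def review(self, content: bytes, reviewer_confirms: bool) -> None:
        # A human decision gates publication; presumed-innocent content is dropped.
        self.pending.remove(content)
        if reviewer_confirms:
            self.published.add(hashlib.sha256(content).hexdigest())

queue = ReviewQueue()
queue.flag(b"family-photo-bytes", confidence=0.95)        # flagged by the model
queue.review(b"family-photo-bytes", reviewer_confirms=False)  # human: presumed innocent
print(len(queue.published))  # 0: nothing is published without human confirmation
```

The key property the comment argues for is visible in the sketch: the model alone can never publish a signature; a human confirmation sits between detection and any device-wide search.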
> You can have a system that flags illicit content with some confidence level and have a human review that content. You can require that any model or heuristic used be publicly logged and audited. You can anonymously flag that content to reviewers, and when it is deemed actually illicit by a human, the hash or some other signature of the content can be published globally to reveal the devices and owners of those devices. You can presume innocence (such as a parent taking a pic of their kids bathing) and question suspects discreetly without an arrest. You can require cops to build multiple sufficient points of independently corroborated evidence before arresting people.
What about this is privacy preserving?
> However, your response of "Yes." is materially false, and lawmakers will catch on to that and discredit anything the privacy community has been advocating. Even simple heuristics that don't use ML models can have a higher "true positive" rate for identifying criminal activity than eyewitness testimony, which is used to convict people of serious crimes. And I suspect you meant security, not privacy, because as I mentioned, for privacy, humans can review before a decision is made to search for the confirmed content across devices.
It's not "materially false." Bringing a human into the picture doesn't do anything to preserve privacy. If, as in your example, a parent's family photos of their children trip the system, you have already violated that person's privacy without just cause, regardless of whether the people reviewing it can identify the person or not.
You cannot have a system that is scanning everyone's stuff indiscriminately and have it not be a violation of privacy. There is a reason why there is a process where law enforcement must get permission from the courts to search and/or surveil suspects - it is supposed to be a protection against abuse.
It's a noble fight trying to make E2EE compatible with the law, but I think some perspective is due for privacy advocates. People don't want freedom and privacy at the cost of their own security. We shouldn't have to choose, but if nothing else, the government's single most important role is not safeguarding freedoms but ensuring the safety of its people.
No government, no matter how free or wealthy, can abdicate its role in securing its people. There must be a solution for fighting harmful (not necessarily illegal) content incorporated into secure messaging solutions. I'm not arguing for backdoors in this post, but even things like Apple's CSAM scanning approach are met with fierce resistance from the privacy advocate community.
This stance that "No, we can't have any solutions, leave E2EE alone" is not a practical stance.
Speaking purely as a citizen, if you're telling me "you will lose civil liberties and democracy if you let governments reduce CSAM", my response would be "what's the hold-up?", even if governments are just using that as an excuse. As someone slightly familiar with the topic, of course I wouldn't want to trade away my liberties and freedoms, but is anyone working on a solution? Are there working groups? Why did Apple get so much resistance, yet there are no open-source solutions?
There are solutions for anonymous payments built on cryptographic techniques like zero-knowledge proofs; things like Zcash and Monero exist. But you're telling me privacy-preserving solutions to combat illicit content are impossible? My problem is with the "impossible" part. Are there researchers working to make this happen using differential privacy or some other approach? How can I help? Let's talk about solutions.
If your position is that governments (who represent us, the voters) should accept the status quo and just let their people suffer injustice, I don't think I can support that.
Mullvad is also in for a rude awakening. If criminals use Tor or VPNs, those will also face a ban. We need to give governments solutions that let them do what they claim they want to do (protect the public from victimization) while preserving privacy, to avoid a very real dystopia.
Freedoms and liberties must not come at the cost of injustice. And as I argued elsewhere on HN, in the end, ignoring ongoing injustice will result in even fewer freedoms and liberties. If there were a pluralistic referendum in the EU over chat control, I would be surprised if the result weren't a law far worse than chat control.
EDIT: Here is one idea I had: sign images/video with hardware-secured chips (camera sensor or GPU?) so that they are traceable to the device. When images are further processed/edited, they would then be subject to differential-privacy scanning. This could also combat deepfakes, if image authenticity can be proven by the device that took the image.
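A minimal sketch of the capture-time signing idea, with loud caveats: the per-device secret key and HMAC-SHA256 below are stand-ins for a hardware-secured signing chip. Real hardware would use asymmetric attestation keys so that verifiers never hold the secret; everything here is illustrative.

```python
import hashlib
import hmac

# Hypothetical: a secret burned into the camera chip at manufacture.
# A real design would use an asymmetric key pair with an attestation
# certificate, not a shared secret.
DEVICE_KEY = b"secret-burned-into-camera-chip"

def sign_capture(image_bytes: bytes) -> str:
    """Signature produced at capture time, traceable to this device."""
    return hmac.new(DEVICE_KEY, image_bytes, hashlib.sha256).hexdigest()

def verify_capture(image_bytes: bytes, signature: str) -> bool:
    """Check that the bytes are exactly what the device captured."""
    return hmac.compare_digest(sign_capture(image_bytes), signature)

original = b"raw-sensor-data"
sig = sign_capture(original)
print(verify_capture(original, sig))            # True: untouched capture verifies
print(verify_capture(original + b"edit", sig))  # False: any edit breaks the signature
```

The deepfake angle follows from the last line: any post-capture edit invalidates the signature, which is exactly what would route edited images into the separate scanning path the comment proposes.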
Yes. You cannot have a system that positively associates illicit content with an owner while preserving privacy.
Through my attempts, I've been told they don't really do adult ADHD diagnoses without documentation of issues as a kid. I was recommended Wellbutrin to deal with symptoms in 2017 and got onto Adderall when I moved health insurance in 2021. Back at Kaiser in 2024, I was routed to the same psychiatrist, who once again wouldn't budge on Adderall and once again recommended Wellbutrin.
I used an online clinic to get my assessment (which I understand isn't taken seriously), which is what she cited. I asked what aspect of the assessment documentation she thought left me unqualified, and she cited marijuana use in 2016. I asked her how she squares the facts that I'm an adult professional who makes comparable money to her, that I have experience using both Wellbutrin and Adderall and see the former doing nothing and the latter helping, and that there's hundreds of times more evidence for Adderall's efficacy versus the flaky data on Wellbutrin... She responded with something like: "I believe in my heart of hearts that what I am doing is right".
I thought the entire situation was kind of insane. Further research into the person makes me think they're a bit of a loon.
I'm now on a PPO plan and have been using Vyvanse for over a year. It's led to a dramatic improvement in my quality of life. I grieved for the time and opportunities I lost by not having been diagnosed and treated in childhood.
HMOs have a lot of upsides, but Kaiser's behavioral healthcare is awful (at least in the DC Metro area) and there's not much recourse unless you want to/can afford to pay out of pocket.
There's so much cynicism about ADHD even existing, even among healthcare professionals. On HN, any mention of ADHD seems to invite a lot of cynicism as well. That, compounded with the fact that one of its most effective treatments (stimulant medication) is something pretty much everyone can see a positive effect from, makes it really difficult to navigate.
I hope that you can find a better option because it seems like Kaiser is just very antagonistic towards ADHD.