That's the failure of the European Union.
As for the latter, Sweden doesn't currently have the capacity for it, and I don't think they have been very interested in increasing it. Currently, Finland often benefits from the fact that there isn't enough transmission capacity between the southern and northern Swedish electricity grids, so Finland gets some cheap electricity from there.
Here is the actual text: https://data.consilium.europa.eu/doc/document/ST-15318-2025-...
High risk classification is at the end of the text.
Some highlights of what is defined as high risk, and thus can be forced through mandatory scanning or be forbidden:
- Encrypted messaging follows closely due to privacy concerns and the potential for misuse. Posting and sharing of multimedia content are also high-risk activities, as they can easily disseminate harmful material.
- The platform lacks functionalities to prevent users from saving harmful content (by making recordings, screenshots etc.) for the purpose of the dissemination thereof (such as for example not allowing recording and screenshotting content shared by minors)
- Possibility to use peer-to-peer downloading (allows direct sharing of content without using centralised servers)
- The platforms’ storage functionalities and/or the legal framework of the country of storage do not allow sharing information with law enforcement authorities.
- The platform lacks functionalities to limit the number of downloads per user to reduce the dissemination of harmful content.
- Making design choices such as ensuring that E2EE is opt-in by default, rather than opt-out, would require people to choose E2EE should they wish to use it, therefore allowing certain detection technologies to work for communication between users that have not opted in to E2EE.
Also, a lot of these points do not sound like they are about the safety of children:
- Platforms lack a premoderation system, allowing potentially harmful content to be posted without oversight or moderation
- Frequent use of anonymous accounts
- Frequent pseudonymous behavior
- Frequent creation of temporary accounts
- Lack of identity verification tools
In light of the proposal, Hacker News is a very dangerous place and needs to have its identity verification and CSAM policies fixed, or face the upcoming fines in the EU.
So you make it so that when the user starts the application, you ask them: "Your current configuration allows the government, and probably some hackers as well, to see your messages. Do you want to enable encryption? Your government's suggestion is that you should say 'No' here. That's also what the foreign intelligence agencies suggest." [Yes, enable encryption] [No]. That's clearly opt-in, and you even provide the government's recommendation. And of course you then ask that again every time they open the application if they selected "No"; we have learned that it's completely fine to keep asking the user the same question.
Oh, and make sure the recipient is clearly aware that the other side has not enabled encryption.
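To make the sarcasm concrete, a minimal Python sketch of that dialog flow (every name here is made up for illustration) might look like:

```python
# Tongue-in-cheek sketch; all names are hypothetical.

def ask_for_encryption() -> bool:
    print("Your current configuration allows the government, and probably")
    print("some hackers as well, to see your messages.")
    print("Your government's suggestion is that you say 'No' here.")
    print("That's also what the foreign intelligence agencies suggest.")
    return input("Enable encryption? [Yes/No] ").strip().lower() == "yes"

def on_app_start(settings: dict) -> None:
    # Keep asking on every launch until the user says "Yes"; we have
    # learned that repeating the same question is completely fine.
    if not settings.get("e2ee_enabled", False):
        settings["e2ee_enabled"] = ask_for_encryption()
    # Recipients get a clear "this conversation is NOT encrypted"
    # banner whenever the other side has left E2EE off.
    settings["show_plaintext_banner"] = not settings["e2ee_enabled"]
```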
Some more details:
https://noyb.eu/en/eu-commission-about-wreck-core-principles... Textual analysis of the changes from the original leaked draft (especially "Overview Table of the Draft & Comments by noyb")
https://noyb.eu/en/digital-omnibus-first-legal-analysis Video about the proposed changes (there are some changes compared to the leaked draft)
The key issue is that anonymization under the GDPR requires that a link to a real person can never be re-established, even by the person doing the anonymization. Consider a clinical study on 100 patients and some diagnostic parameter of theirs, such as creatinine or HbA1c, legally collected with consent and everything. Let's assume we would like to share only the 100 diagnostic values without any personal data. That would seem quite anonymous, but the GDPR applies a simple test: could anybody, using reasonable efforts, re-establish an identity? And sure enough, the original researcher can, because they have a master file containing the mapping. So the data isn't anonymous and actually can never be anonymous.
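A toy sketch of that test in Python (hypothetical data; names like master_file are made up for illustration):

```python
import secrets

# A researcher pseudonymizes 100 patient records but keeps a master
# file mapping each pseudonym back to the patient.
patients = {f"patient_{i}": round(0.6 + secrets.randbelow(80) / 100, 2)
            for i in range(100)}  # creatinine-like values, mg/dL

master_file = {name: secrets.token_hex(8) for name in patients}  # name -> pseudonym
shared_data = {master_file[name]: value
               for name, value in patients.items()}  # what actually gets shared

# The recipient sees only pseudonym -> value and cannot re-identify
# anyone. But the researcher can always reverse the mapping, so under
# the GDPR's test the shared data is pseudonymized, not anonymous.
reverse = {pseudonym: name for name, pseudonym in master_file.items()}
some_pseudonym = next(iter(shared_data))
print(reverse[some_pseudonym], shared_data[some_pseudonym])
```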
What the GDPR requires is that the user is informed about the processing and the suppliers used, and in some cases, provides consent to the processing.
The new proposal, which suggests that pseudonymized data is not always PII, is a different thing. It actually opens the door to a lot of new problems, in my opinion. For example, with this new interpretation, big tech might question whether IP addresses are still personal data (something the EU's top court had previously established). And what about cryptographically hashed values of your social security number (easy to break)?
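As a sketch of why an unsalted hash of an identifier is easy to break (assuming, hypothetically, a plain 9-digit number and SHA-256):

```python
import hashlib

# The input space is only 10^9 values, so brute force is cheap.
target = hashlib.sha256(b"000123456").hexdigest()  # the "pseudonymized" value

def crack(target_hex: str) -> str | None:
    for n in range(1_000_000_000):          # walk the whole 9-digit space
        candidate = f"{n:09d}".encode()
        if hashlib.sha256(candidate).hexdigest() == target_hex:
            return candidate.decode()
    return None

print(crack(target))  # recovers "000123456"; optimized code covers
                      # the full space in minutes on commodity hardware
```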
This actually is already the case; see the recent CJEU ruling C‑413/23 P. Currently the main question is whether the recipient has a way to unmask the user. In the case of an IP address, the answer is almost always yes, since the recipient could ask a competent authority to unmask the IP address if a crime is involved. That was the exact reasoning in the Breyer case.
In C‑413/23 P, the recipient didn't have any reasonable way to map the opinion to a real person, so it was determined that the data wasn't PII from the recipient's point of view, though it was from the data controller's.
One of the issues with the new proposal is that it lowers the standard quite a bit compared to C‑413/23 P.
One hack would be to use recursion and let stack exhaustion stop you.
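A minimal Python illustration of that hack:

```python
def count_up(n: int = 0) -> int:
    # No explicit stopping condition: Python's recursion limit
    # eventually raises RecursionError and ends the "loop" for us.
    return count_up(n + 1)

try:
    count_up()
except RecursionError as exc:
    print("stopped by stack exhaustion:", exc)
```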
I mean, it absolutely worked for effectively sinking the GDPR: pretty much everyone now equates that law with obnoxious 'cookie banners', to the point that the regulations are being relaxed, despite the GDPR never requiring those banners in any way, shape, or form in the first place.
But, yeah, despite that, I'd say they'll get away with this as well...
I don't think the DMCA has anything to do with that, though I do wish everyone hated it. You probably meant the GDPR.