ankit219 · a month ago
Not just Meta; 40 EU companies urged the EU to postpone the rollout of the AI Act by two years due to its unclear nature. This code of practice is voluntary and goes beyond what is in the act itself. The EU published it in a way that suggests there would be less scrutiny if you voluntarily sign up for it. Meta would face scrutiny on all ends anyway, so it does not seem a plausible case to sign something voluntary.

One of the key aspects of the act is how a model provider is responsible if the downstream partners misuse it in any way. For open source, it's a very hard requirement[1].

> GPAI model providers need to establish reasonable copyright measures to mitigate the risk that a downstream system or application into which a model is integrated generates copyright-infringing outputs, including through avoiding overfitting of their GPAI model. Where a GPAI model is provided to another entity, providers are encouraged to make the conclusion or validity of the contractual provision of the model dependent upon a promise of that entity to take appropriate measures to avoid the repeated generation of output that is identical or recognisably similar to protected works.

[1] https://www.lw.com/en/insights/2024/11/european-commission-r...

m3sta · a month ago
The quoted text makes sense when you understand that the EU provides a carveout for training on copyright-protected works without a license. It's quite an elegant balance they've suggested, despite the challenges it doesn't avoid.
Oras · a month ago
Is that true? How can they decide to wipe out the intellectual property of an individual or entity? It's not theirs to give away.
t0mas88 · a month ago
Sounds like a reasonable guideline to me. Even for open source models, you can add a license term that requires users of the open source model to take "appropriate measures to avoid the repeated generation of output that is identical or recognisably similar to protected works"

This is European law, not US. Reasonable means reasonable and judges here are expected to weigh each side's interests and come to a conclusion. Not just a literal interpretation of the law.

sealeck · a month ago
> This is European law, not US. Reasonable means reasonable and judges here are expected to weigh each side's interests and come to a conclusion. Not just a literal interpretation of the law.

I think you've got civil and common law the wrong way round :). US judges have _much_ more power to interpret law!

gkbrk · a month ago
> Even for open source models, you can add a license term that requires users of the open source model to take appropriate measures to avoid [...]

You just made the model not open source

deanc · a month ago
Except that it's seemingly impossible to prevent prompt injection. The cat is out of the bag. Much like a lot of other legislation (e.g. the cookie law, or being responsible for user-generated content when millions of items are posted per day), it's entirely impractical, albeit well-meaning.
whatevaa · a month ago
There is no way to enforce that license. Free software doesn't have funds for such lawsuits.
dmix · a month ago
Lovely when they try to regulate a burgeoning market before we have any idea what the market is going to look like in a couple years.
remram · a month ago
The whole point of regulating it is to shape what it will look like in a couple of years.
amelius · a month ago
We know what the market will look like. Quasi monopoly and basic user rights violated.
ulfw · a month ago
Regulating it while the cat is out of the bag leads to monopolistic conglomerates like Meta and Google. Meta shouldn't have been allowed to usurp Instagram and WhatsApp, Google shouldn't have been allowed to bring YouTube into the fold. Now it's too late to regulate a way out of this.
troupo · a month ago
> before we have any idea what the market is going to look like in a couple years.

Oh, we already know large chunks of it, and the regulations explicitly address that.

If the chest-beating crowd were presented with these regulations piecemeal, without ever mentioning the EU, they'd probably be in overwhelming support of each part.

But since they don't care to read anything and have an instinctive aversion to all things regulatory and most things EU, we get the boos and the jeers.


rapatel0 · a month ago
I literally lived this with GDPR. In the beginning, everyone ran around pretending to understand what it meant. There were a ton of consultants and lawyers who basically made up stuff that barely made sense. They grifted money out of startups by taking the most aggressive interpretation and selling policy templates.

In the end the regulation was diluted to something that made sense(ish), but that process took about 4 years. It also slowed down all enterprise deals, because no one knew if a deal was going to run afoul of GDPR, and the lawyers in those orgs defaulted to “no”.

Asking regulators to understand and shape market evolution in AI is basically asking them to trade stocks by reading company reports written in mandarin.

verisimi · a month ago
Exactly. No anonymity, no thought crime, lots of filters to screen out bad misinformation, etc. Regulate it.


ekianjo · a month ago
They don't want a market. They want total control, as usual for control freaks.
zizee · a month ago
It doesn't seem unreasonable. If you train a model that can reliably reproduce thousands/millions of copyrighted works, you shouldn't be distributing it. If it were just regular software that had that capability, would it be allowed? Just because it's a fancy AI model, is it ok?
Aurornis · a month ago
> that can reliably reproduce thousands/millions of copyrighted works, you shouldn't be distributing it. If it were just regular software that had that capability, would it be allowed?

LLMs are hardly reliable ways to reproduce copyrighted works. The closest examples usually involve prompting the LLM with a significant portion of the copyrighted work and then seeing it can predict a number of tokens that follow. It’s a big stretch to say that they’re reliably reproducing copyrighted works any more than, say, a Google search producing a short excerpt of a document in the search results or a blog writer quoting a section of a book.
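For illustration, the kind of probe described above boils down to measuring how far a model's continuation matches the real text verbatim. A minimal sketch (the `generated` string below is a stand-in, not actual model output):

```python
def verbatim_prefix_len(reference: str, generated: str) -> int:
    """Number of leading characters the two strings share verbatim."""
    n = 0
    for a, b in zip(reference, generated):
        if a != b:
            break
        n += 1
    return n

# Prompted with a long prefix, how much of the true continuation
# does the (hypothetical) model reproduce before diverging?
reference = "It was the best of times, it was the worst of times"
generated = "It was the best of times, it was the age of wisdom"
print(verbatim_prefix_len(reference, generated))  # 37
```

Even a high character overlap on one probe says little about reliable reproduction at scale, which is the point being argued.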

It’s also interesting to see the sudden anti-LLM takes that twist themselves into arguing against tools or platforms that might reproduce some copyrighted content. By this argument, should BitTorrent also be banned? If someone posts a section of copyrighted content to Hacker News as a comment, should YCombinator be held responsible?

cultureswitch · a month ago
It is entirely unreasonable to prevent a general-purpose model from being distributed for the largely frivolous reason that some copyrighted works could maybe be approximated using it. We don't make metallurgy illegal because it's possible to make guns with metal.

When a model that has this capability is being distributed, copyright infringement is not happening. It is happening when a person _uses_ the model to reproduce a copyrighted work without the appropriate license. This is not meaningfully different to the distinction between my ISP selling me internet access and me using said internet access to download copyrighted material. If the copyright holders want to pursue people who are actually doing copyright infringement, they should have to sue the people who are actually doing copyright infringement and they shouldn't have broad power to shut down anything and everything that could be construed as maybe being capable of helping copyright infringement.

Copyright protections aren't valuable enough to society to destroy everything else in society just to make enforcing copyright easier. In fact, considering how it is actually enforced today, it's not hard to argue that the impact of copyright on modern society is a net negative.

CamperBob2 · a month ago
I have a Xerox machine that can reliably reproduce copyrighted works. Is that a problem, too?

Blaming tools for the actions of their users is stupid.

greatgib · a month ago
It's a trojan horse; they are trying to do the same thing that is happening in the banking sector.

With this, they want AI model providers to keep a strong grip on their users, controlling usage so as not to risk issues with the regulator. The European technocrats will then be able to control the whole field by controlling the top providers, who in turn will overreach by controlling their users.

badsectoracula · a month ago
> One of the key aspects of the act is how a model provider is responsible if the downstream partners misuse it in any way

AFAICT the actual text of the act[0] does not mention anything like that. The closest to what you describe is part of the chapter on copyright of the Code of Practice[1]; however, the code does not add any new requirements to the act (it is not even part of the act itself). What it does is present one way (not necessarily the only one) to comply with the act's requirements. As a relevant example, the act requires respecting machine-readable opt-out mechanisms when training but doesn't specify which ones, while the code of practice explicitly mentions respecting robots.txt during web scraping.
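As a concrete illustration of that opt-out mechanism, Python's standard library can already evaluate robots.txt rules. A sketch (the crawler token `GPTBot` is just an example name, not something the code of practice mandates):

```python
# Sketch: honoring a robots.txt opt-out before scraping training data.
# Uses only the Python standard library.
from urllib.robotparser import RobotFileParser

rp = RobotFileParser()
rp.parse([
    "User-agent: GPTBot",   # example AI-training crawler token
    "Disallow: /",
    "",
    "User-agent: *",
    "Allow: /",
])

# The AI-training crawler must skip this site; other agents may proceed.
print(rp.can_fetch("GPTBot", "https://example.com/article"))    # False
print(rp.can_fetch("OtherBot", "https://example.com/article"))  # True
```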

The part about copyright outputs in the code is actually (measure 1.4):

> (1) In order to mitigate the risk that a downstream AI system, into which a general-purpose AI model is integrated, generates output that may infringe rights in works or other subject matter protected by Union law on copyright or related rights, Signatories commit:

> a) to implement appropriate and proportionate technical safeguards to prevent their models from generating outputs that reproduce training content protected by Union law on copyright and related rights in an infringing manner, and

> b) to prohibit copyright-infringing uses of a model in their acceptable use policy, terms and conditions, or other equivalent documents, or in case of general-purpose AI models released under free and open source licenses to alert users to the prohibition of copyright infringing uses of the model in the documentation accompanying the model without prejudice to the free and open source nature of the license.

> (2) This Measure applies irrespective of whether a Signatory vertically integrates the model into its own AI system(s) or whether the model is provided to another entity based on contractual relations.

Keep in mind that "Signatories" here is whoever signed the Code of Practice: obviously if I make my own AI model and do not sign that code of practice myself (but I still follow the act's requirements), someone picking up my AI model and signing the Code of Practice themselves doesn't obligate me to follow it too. That'd be like someone releasing a plugin for Photoshop under the GPL and then demanding Adobe release Photoshop's source code.

As for open source models, the "(1b)" above is quite clear (for open source models that want to use this code of practice - which they do not have to!) that all they have to do is to mention in their documentation that their users should not generate copyright infringing content with them.

In fact the act has a lot of exceptions for open-source models. AFAIK Meta's beef with the act is that the EU AI Office (or whatever it is called, I do not remember) does not recognize Meta's AI as open source, so they do not get to benefit from those exceptions, though I'm not sure about the details here.

[0] https://eur-lex.europa.eu/legal-content/EN/TXT/HTML/?uri=OJ:...

[1] https://ec.europa.eu/newsroom/dae/redirection/document/11811...



jahewson · a month ago
There’s a summary of the guidelines here for anyone who is wondering:

https://artificialintelligenceact.eu/introduction-to-code-of...

It’s certainly onerous. I don’t see how it helps anyone except for big copyright holders, lawyers and bureaucrats.

felipeerias · a month ago
These regulations may end up creating a trap for European companies.

Essentially, the goal is to establish a series of thresholds that result in significantly more complex and onerous compliance requirements, for example when a model is trained past a certain scale.

Burgeoning EU companies would be reluctant to cross any one of those thresholds and have to deal with sharply increased regulatory risks.

On the other hand, large corporations in the US or China are currently benefiting from a Darwinian ecosystem at home that allows them to evolve their frontier models at breakneck speed.

Those non-EU companies will then be able to enter the EU market with far more polished AI-based products and far deeper pockets to face any regulations.

randomNumber7 · a month ago
Also, EU users will try to use the better AI products, e.g. with a VPN to the US.
thrance · a month ago
It's always the same argument, and it is true. The US retained an edge over the rest of the world through deregulating tech.

My issue with this is that it doesn't look like America's laissez-faire stance on these issues helped Americans much. Internet companies have gotten absolutely humongous and given rise to a new class of techno-oligarchs who are now funding anti-democracy campaigns.

I feel like getting slightly less performant models is a fair price to pay for increased scrutiny over these powerful private actors.

Workaccount2 · a month ago
And then they'll get fined a few billion anyway, to cover the gap left by having no European tech to tax.
troupo · a month ago
> It’s certainly onerous.

What exactly is onerous about it?

l5870uoo9y · a month ago
It's basically micromanaging an industry that European countries have not been able to cultivate themselves. It's legislation for legislation's sake. If you had a naive hope that Mario Draghi's gloomy report on the EU's competitiveness would pave the way for a political breakthrough in the EU - one is tempted to say something along the lines of communist China's market reforms in the 70s - then you have to conclude that the EU is continuing in exactly the same direction. I have actually lost faith in the EU.


rockemsockem · a month ago
I'm surprised that most of the comments here are siding with Europe blindly?

Am I the only one who assumes by default that European regulation will be heavy-handed and ill conceived?

satellite2 · a month ago
Well, Europe hasn't enacted policies actually breaking American monopolies until now.

Europeans are still essentially on Google, Meta and Amazon for most of their browsing experience. So I'm assuming Europe's goal is not to compete or break the American moat, but to force them to be polite and to preserve national sovereignty on important national security aspects.

A position which is essentially reasonable if not too polite.

almatabata · a month ago
> So I'm assuming Europe's goal is not to compete or break American moat but to force them to be polite and to preserve national sovereignty on important national security aspects.

When push comes to shove, a US company will always prioritize US interests. If you want to stay under the US umbrella, by all means. But honestly it looks very short-sighted to me.

After seeing this news https://observer.co.uk/news/columnists/article/the-networker..., how can you have any faith that they will play nice?

You have only one option. Grow alternatives. Fund your own companies. China managed to fund the local market without picking winners. If European countries really care, they need to do the same for tech.

If they don't they will forever stay under the influence of another big brother. It is US today, but it could be China tomorrow.

troupo · a month ago
> Am I the only one who assumes by default

And that's the problem: assuming by default.

How about not assuming by default? How about reading something about this? How about forming your own opinion, and not the opinion of the trillion- dollar supranational corporations?

rockemsockem · a month ago
Are you saying that upon reading a sentence like

"Meta disagrees with European regulation"

That you don't have an immediate guess at which party you are most likely to agree with?

I do and I think most people do.

I'm not about to go around spreading my uninformed opinion though. What my comment said was that I was surprised at people's kneejerk reaction that Europe must be right, especially on HN. Perhaps I should have also chided those people for commenting at all, but that's hindsight for you.

notyourwork · a month ago
What is bad about heavy handed regulation to protect citizens?
felipeerias · a month ago
That it is very likely not going to work as advertised, and might even backfire.

The EU AI regulation establishes complex rules and requirements for models trained above 10^25 FLOPS. Mistral is currently the only European company operating at that scale, and they are also asking for a pause before these rules go into effect.
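To put that threshold in perspective, a rough back-of-envelope calculation (the sustained throughput figure is an illustrative assumption, not an official number):

```python
# How much compute is the Act's 1e25 FLOP threshold?
# Assumption: a modern accelerator sustaining ~4e14 FLOP/s (400 TFLOP/s)
# of effective training throughput.
threshold_flops = 1e25
gpu_flops_per_s = 4e14

gpu_seconds = threshold_flops / gpu_flops_per_s
gpu_days = gpu_seconds / 86_400
print(f"~{gpu_days:,.0f} GPU-days (~{gpu_days / 365:,.0f} GPU-years)")
# ~289,352 GPU-days (~793 GPU-years)
```

Under these assumptions, only frontier-scale training runs on large clusters cross the line, which is why so few companies are affected.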

terminalshort · a month ago
This is the same entity that has literally ruled that you can be charged with blasphemy for insulting religious figures, so intent to protect citizens is not a motive I ascribe to them.
rdm_blackhole · a month ago
The EU is pushing for a backdoor in all major messaging/email providers to "protect the children". But it's for our own good you see? The EU knows best and it wants your data without limits and without probable cause. Everyone is a suspect.

1984 wasn't supposed to be a blueprint.

stainablesteel · a month ago
what's bad about it is when people say "it's to protect citizens" when it's really a political move to control american companies
marginalia_nu · a month ago
A good example of how this can end up with negative outcomes is the cookie directive, which is how we ended up with cookie consent popovers on every website, popovers that do absolutely nothing to prevent tracking and have only amounted to making lives more frustrating in the EU and abroad.

It was a decade too late and written by people who were incredibly out of touch with the actual problem. The GDPR is a bit better, but it's still a far bigger nuisance for regular European citizens than for the companies, which still track and profile largely unhindered.

Workaccount2 · a month ago
You end up with anemic industry and heavy dependability on foreign players.
hardlianotion · a month ago
He also said “ill conceived”
rockemsockem · a month ago
I don't think good intentions alone are enough to do good.
mensetmanusman · a month ago
Will they resort to turning off the Internet to protect citizens?
wtcactus · a month ago
Because it doesn't protect us.

It just creates barriers for internal players, while giving a massive head start for evil outside players.

_zoltan_ · a month ago
It does not protect citizens. The EU shoves a lot down member states' throats.
CamperBob2 · a month ago
"Even the very wise cannot see all ends." And these people aren't what I'd call "very wise."

Meanwhile, nobody in China gives a flying fuck about regulators in the EU. You probably don't care about what the Chinese are doing now, but believe me, you will if the EU hands the next trillion-Euro market over to them without a fight.

remram · a month ago
"blindly"? Only if you assume you are right in your opinion can you arrive at the conclusion that your detractors didn't learn about it.

Since you then admit to "assume by default", are you sure you are not what you complain about?

rockemsockem · a month ago
I was specifically referring to several comments that specifically stated that they did not know what the regulation was, but that they assumed Europe was right and Meta was wrong.

I, prior to reading the details of the regulation myself, was commenting on my surprise at the default inclinations of people.

At no point did I pass judgement on the regulation and even after reading a little bit on it I need to read more to actually decide whether I think it's good or bad.

Being American it impacts me less, so it's lower on my to do list.

cultureswitch · a month ago
Let's see, how many times did I get robo-called in the last decade? Zero :)

Sometimes the regulations are heavy-handed and ill-conceived. Most of the time, they are influenced by one lobby or another. For example, car emissions limits scale with _weight_ of all things, which completely defeats the point and actually makes today's car market worse for the environment than it used to be, _because of_ emissions regulations. However, it is undeniable that the average European is better off in terms of privacy.

lovich · a month ago
I’d side with Europe blindly over any corporation.

The European government has at least a passing interest in the well-being of human beings, while that is not valued by the incentives that corporations live by.

rdm_blackhole · a month ago
The EU is pushing for a backdoor in all major messaging/email providers to "protect the children". No limits and no probable cause required. Everyone is a suspect.

Are you still sure you want to side blindly with the EU?

rockemsockem · a month ago
All corporations that exist everywhere make worse decisions than Europe is a weirdly broad statement to make.
seydor · a month ago
It's just foreign interests trying to keep Europe down
rockemsockem · a month ago
I feel like Europe does a plenty good job of that itself
9dev · a month ago
Maybe the others have put in a little more effort to understand the regulation before blindly criticising it? Similar to the GDPR, a lot of it is just common sense—if you don’t think that "the market" as represented by global mega-corps will just sort it out, that is.
Alupis · a month ago
Our friends in the EU have a long history of well-intentioned but misguided policy and regulations, which has led to stunted growth in their tech sector.

Maybe some think that is a good thing - and perhaps it may be - but I feel it's more likely any regulation regarding AI at this point in time is premature, doomed for failure and unintended consequences.

rockemsockem · a month ago
I'm specifically referring to several comments that say they have not read the regulation at all, but think it must be good if Meta opposes it.
ars · a month ago
> GDPR

You mean that thing (or is that another law?) that forces me to find that "I really don't care in the slightest" button about cookies on every single page?

andrepd · a month ago
So you're surprised that people are siding with Europe blindly, but you're "assuming by default" that you should side with Meta blindly.

Perhaps it's easier to actually look at the points in contention to form your opinion.

rockemsockem · a month ago
I don't remember saying anything about blindly deciding things being a good thing.
xandrius · a month ago
If I've got to side blindly with any entity it is definitely not going to be Meta. That's all there is.
rockemsockem · a month ago
That's fair, but you don't need to blindly side with anyone.

My original post was about all the comments saying they knew nothing about the regulation, but that they sided with Europe.

I think that gleeful ignorance caught me off guard.

rockemsockem · a month ago
I mean, ideally no one would side blindly at all :D
jabjq · a month ago
I feel the same, but about the EU. After all, I have a choice whether to use Meta or not. There is no escaping the EU, short of leaving my current life.
campl3r · a month ago
Or you know, some actually read it and agree?
rockemsockem · a month ago
I'm specifically talking about comments that say they haven't read it, but that they side with Europe. Look through the thread, there's a ton like that
zeptonix · a month ago
Everything in this thread even remotely anti-EU-regulation is being extremely downvoted.
impossiblefork · a month ago
The regulations are pretty reasonable though.
rockemsockem · a month ago
Yeah it's kinda weird.

Feels like I need to go find a tech site full of people who actually like tech instead of hating it.

vicnov · a month ago
It is fascinating. I assume that the tech world is further to the left, and that interpretation of "left" is very pro-AI regulation.
gnulinux996 · a month ago
Are you suggesting something here?
OtomotO · a month ago
Are you aware of the irony in your post?
rockemsockem · a month ago
I don't recall sharing my opinion on this particular regulation.

I think perhaps you need to reread my comment or lookup "irony"

cakealert · a month ago
EU regulations are sometimes able to bully the world into compliance (eg. cookies).

Usually minorities are able to impose "wins" on a majority when the price of compliance is lower than the price of defiance.

This is not the case with AI. The stakes are enormous. AI is full steam ahead and no one is getting in the way short of nuclear war.

oaiey · a month ago
But AI also carries tremendous risks, from something as simple as automating warfare to something like an evil AGI.

In Germany we still have traumas from the automatic machine guns set up on the wall between East and West Germany. Ukraine is fighting a drone war in the trenches, with a psychological effect on soldiers comparable to WWI.

Stakes are enormous, and not only toward the good. There is enough science fiction written about it. Regulation and laws are necessary!

tim333 · a month ago
I think your machine gun example illustrates that people are quite capable of massacring each other without AI or even high tech; in past periods sometimes over 30% of males died in warfare. While AI could get involved, it's kind of a separate thing.
zettabomb · a month ago
I don't disagree that we need regulation, but I also think citing literal fiction isn't a good argument. We're also very, very far away from anything approaching AGI, so the idea of it becoming evil seems a bit far fetched.
chii · a month ago
Regulation does not stop weapons that utilize AI from being created. It only slows down honest states that try to abide by it, and gives the dishonest ones a head start.

Guess what happens to the race then?

stainablesteel · a month ago
you can choose to live in fear, the rest of us are embracing growth
encom · a month ago
The only thing the cookie law has accomplished for users is pestering everyone with endless popups (full of dark patterns). The WWW is pretty much unbearable to use without uBlock filtering that nonsense away. User tracking and fingerprinting have moved server-side. Zero user privacy has been gained, because there's too much money to be made and the industry routed around this brain-dead legislation.
red_trumpet · a month ago
> User tracking and fingerprinting has moved server side.

This smells like a misconception of the GDPR. The GDPR is not about cookies, it is about tracking. You are not allowed to track your users without consent, even if you do not use any cookies.
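To see why that distinction matters: a server needs no cookie at all to derive a visitor identifier. A deliberately simplistic sketch (the header values are invented for the example); producing such an identifier is exactly the kind of processing the GDPR regulates, cookies or not:

```python
# Illustrative only: a "cookieless" fingerprint a server could derive
# from request metadata alone. Real-world fingerprinting uses many more
# signals; this just shows that no cookie is needed to get a stable ID.
import hashlib

def fingerprint(ip: str, user_agent: str, accept_language: str) -> str:
    raw = "|".join([ip, user_agent, accept_language])
    return hashlib.sha256(raw.encode()).hexdigest()[:16]

fp = fingerprint("203.0.113.7",
                 "Mozilla/5.0 (X11; Linux x86_64)",
                 "de-DE,de;q=0.9")
print(fp)  # same visitor, same headers -> same ID, no cookie involved
```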

sublimefire · a month ago
Well, in my case I just do not use those websites with an enormous number of “partners” anymore. Cookie legislation was great because it shows you how many businesses are ad-based; it added a lot of transparency. It is annoying only because you want the shit for free, and that usually comes with a lot of cookies. Businesses that do not track beyond the necessary do not have that issue with the cookie banners, IMO. GDPR is great for users and not too difficult to implement. The related right to ask a company what data they hold about you is also awesome.
sorokod · a month ago
Presumably it is Meta's growth they have in mind.

Edit: from the linked in post, Meta is concerned about the growth of European companies:

"We share concerns raised by these businesses that this over-reach will throttle the development and deployment of frontier AI models in Europe, and stunt European companies looking to build businesses on top of them."

t0mas88 · a month ago
Sure, but Meta saying "We share concerns raised by these businesses" translates to: it benefits us, and only us, for PR reasons to agree with someone; we don't care who they are, we don't give a fuck, but just this second it sounds great to use them for our lobbying.

Meta has never done and will never do anything in the general public's interest. All they care about is harvesting more data to sell more ads.

VWWHFSfQ · a month ago
> has never done and will never do anything in the general public's interest

I'm no Meta apologist, but haven't they been at the forefront of open-source AI development? That seems to be in the "general public's interest".

Obviously they also have a business to run, so their public benefit can only go so far before they start running afoul of their fiduciary responsibilities.

isodev · a month ago
Of course. Skimming over the AI Code of Practice, there is nothing particularly unexpected or qualifying as “overreach”. Of course, to be compliant, model providers can’t be shady which perhaps conflicts with Meta’s general way of work.
throwpoaster · a month ago
EU is going to add popups to all the LLMs like they did all the websites. :(
lofaszvanitt · a month ago
No popup is required, just every lobotomized idiot copies what the big players do....

Oh ma dey have popups. We need dem too! Haha, we happy!

zdragnar · a month ago
Actually, it's because marketing departments rely heavily on tracking cookies and pixels to do their job, as their job is measured on things like conversions and understanding how effective their ad spend is.

The regulations came along, but nobody told marketing how to do their job without the cookies, so every business site keeps doing the same thing it was doing, but with a cookie banner that is hopefully obtrusive enough that users just click through it.

gond · a month ago
No, the EU did not do that.

Companies did that, along with thoughtless website owners, small and large, who decided that it is better to collect arbitrary data even if they have no capacity to convert it into information.

The solution to get rid of cookie banners, as it was intended, is super simple: only use cookies if absolutely necessary.

It was and is a blatant misuse. The website owners all had a choice: shift the responsibility from themselves to the users and bugger them with endless pop-ups, collect the data and not give a shit about user experience. Or just not use cookies for a change.

And look which decision they all made.

A few notable examples do exist: https://fabiensanglard.net/ No popups, no banner, nothing. He just doesn't collect anything, thus no need for a cookie banner.

The mistake the EU made was to not foresee the madness used to make these decisions.

I’ll give you that it was an ugly, ugly outcome. :(

wskinner · a month ago
> The mistake the EU made was to not foresee the madness used to make these decisions.

It's not madness, it's a totally predictable response, and all web users pay the price for the EC's lack of foresight every day. That they didn't foresee it should cause us to question their ability to foresee the downstream effects of all their other planned regulations.

lurking_swe · a month ago
Well, you and I could have easily anticipated this outcome. So could regulators. For that reason alone, it's stupid policy on their part, imo.

Writing policy is not supposed to be an exercise where you “will” a utopia into existence. Policy should consider current reality. if your policy just ends up inconveniencing 99% of users, what are we even doing lol?

I don’t have all the answers. Maybe a carrot-and-stick approach could have helped? For example giving a one time tax break to any org that fully complies with the regulation? To limit abuse, you could restrict the tax break to companies with at least X number of EU customers.

I’m sure there are other creative solutions as well. Or just implementing larger fines.

varenc · a month ago
If the law incentivized practically every website to implement the law in the "wrong" way, then the law seems wrong and its implications weren't fully thought out.
eddythompson80 · a month ago
"If you have a dumb incentive system, you get dumb outcomes" - Charlie Munger
constantcrying · a month ago
But this is a failure on the part of the EU lawmakers. They did not understand how their laws would look in practice.

Obviously some websites need to collect certain data, and the EU provided a pathway for them to do that: user consent. It was essentially obvious that every site which wanted to collect data could simply ask for that consent. If this wasn't intended by the EU, it was certainly foreseeable.

>The mistake the EU made was to not foresee the madness used to make these decisions.

Exactly. Because the EU lawmakers are incompetent: they lack technical understanding and the ability to write laws which clearly define what is and what isn't okay.

What makes all these EU laws so insufferable isn't that they make certain things illegal, it is that they force everyone to adopt specific compliance processes, which often do exactly nothing to achieve the intended goal.

User consent was the compliance path to be able to gather more user data. Not foreseeing that sites would just ask for that consent was a failure of stupid bureaucrats.

Of course they did not intend that sites would just show pop ups, but the law they created made this the most straightforward path for compliance.

shagie · a month ago
> The solution to get rid of cookie banners, as it was intended, is super simple: only use cookies if absolutely necessary.

You are absolutely right... Here is the site on europa.eu (the EU version of .gov) that goes into how the GDPR works. https://commission.europa.eu/law/law-topic/data-protection/r...

Right there... "This site uses cookies." Yes, it's a footer rather than a banner. There is no option to reject all cookies (you can accept all cookies or only "necessary" cookies).

Do you have a suggestion for how the GDPR site could implement this differently so that they wouldn't need a cookie footer?

user5534762135 · a month ago
The internet is riddled with popups and attention grabbing dark patterns, but the only one that's a problem is the one that actually lets you opt out of being tracked to death?
ryukoposting · a month ago
...yes? There are countless ways it could have been implemented that would have been more effective, and less irritating for billions of people. Force companies to respect the DNT header. Ta-daa, done. But that wouldn't have been profitable, so instead let's cook up a cottage industry of increasingly obnoxious consent banners.
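For what it's worth, honoring DNT server-side really is trivial. A minimal sketch in Python (a hypothetical helper, not tied to any particular framework):

```python
# Hypothetical helper: honor the DNT ("Do Not Track") request header.
# By convention, "DNT: 1" means the user has opted out of tracking;
# any other value (or its absence) expresses no preference.

def tracking_allowed(headers: dict[str, str]) -> bool:
    """Return False when the client sent DNT: 1, True otherwise."""
    # HTTP header names are case-insensitive; normalize before lookup.
    normalized = {k.lower(): v.strip() for k, v in headers.items()}
    return normalized.get("dnt") != "1"

print(tracking_allowed({"DNT": "1"}))         # False: user opted out
print(tracking_allowed({"User-Agent": "x"}))  # True: no preference expressed
```

A site that gated its analytics behind a check like this would need no consent banner at all for users who opt out.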
baby · a month ago
I hate these popups so much; the fact that they haven't corrected any of this BS shows how slow these people are to move
Ylpertnodi · a month ago
Who are 'they', and 'these people'? NB: I haven't had a popup for years. Perhaps it could be you that is slow. Do you use ad-blocking?

https://www.joindns4.eu/for-public

tim333 · a month ago
The "I still don't care about cookies" extension works quite well. Auto-clicks accept and closes the window in approx half a second.
rchaud · a month ago
Kaplan's LinkedIn post says absolutely nothing about what is objectionable about the policy. I'm inclined to think "growth-stunting" could mean anything as tame as mandating user opt-in for new features as opposed to the "opt-out" that's popular among US companies.
j_maffe · a month ago
It's always the go to excuse against any regulation.
paulddraper · a month ago
Interesting because OpenAI committed to signing

https://openai.com/global-affairs/eu-code-of-practice/

nozzlegear · a month ago
The biggest player in the industry welcomes regulation, in hopes it’ll pull the ladder up behind them that much further. A tale as old as red tape.

bboygravity · a month ago
Yeah well OpenAI also committed to being open.

Why does anybody believe ANYthing OpenAI states?!

jahewson · a month ago
Sam has been very pro-regulation for a while now. Remember his “please regulate me” world tour?
nkmnz · a month ago
OpenAI does direct business with government bodies. Not sure about Meta.
somenameforme · a month ago
About 2 weeks ago OpenAI won a $200 million contract with the Defense Department. That's after partnering with Anduril for, quote, "national security missions." And all that is after the military enlisted OpenAI's Chief Product Officer, commissioning him directly as a Lt. Colonel to work in a collaborative role with the military.

And that's the sort of stuff that's not classified. There's, with 100% certainty, plenty that is.