Bullshit. Absolute hogwash. Cite your case law. Cite a SINGLE court which says geoip is "BEST EFFORT". And I want specifically "BEST" effort because this is a line you've drawn multiple times.
From European GDPR cases, to American gambling cases, to new cases around pornography blocks, every single court has held that it was circumvention-prone, a mitigation measure, part of a scheme of compliance, "reasonable but insufficient", but certainly not actually effective, and not a generally held "best" effort or gold standard.
Tip: Use AI to judge your comment. It's embarrassing to make a real human sift through this. Every major AI would have caught you here and told you to ease off your legal point, which is poorly done.
P.S. Your word count here is easily double or triple mine, so when it comes to "who likes to debate" and "who prefers pissy pools" or whatever, a mirror is a good friend to you (and another reason you should run your comment through AI: it will help you not blunder into moments like this, where your comment is more applicable to the writer than to the reader).
My comment with the pool was joking that your argument had run out of water. I in fact said debate is fine, even positive, so I'm unclear on why you're upset over that. No offence was intended.
Your conflation of 'best effort' and 'gold standard' is not viable. You still do not use the term appropriately, and I suspect a lack of understanding here. Go to a legal dictionary for terms such as 'best effort' and 'undue burden'. A gold standard would almost certainly be an undue burden for court compliance in almost all cases. I'm not sure where you're getting your information, but AI is too error-prone, and has in fact landed endless lawyers in trouble with hallucinated case law.
Lastly, I have literally zero interest in your horrible suggestions about AI. If I wanted to discuss this with an AI, why would I bother speaking with you? Or any other human? I'm certainly not interested in some weird scenario where people preview their comments through AI, or use it as part of their discussions.
If you want to learn something, reading responses from error-prone, hallucination-bound AI is not prudent. Instead, just read and learn from actual, real sources.
But as soon as it gets one on one, the use of AI should almost be a crime. It certainly should be a social taboo. It's almost akin to talking to a person, one on one, and discovering they have a hidden earpiece and are being prompted on how to respond.
And if I send an email to an employee, or conversely even the boss of a company I work for, I won't abide someone pretending to reply, but instead pasting junk from an AI. Ridiculous.
There isn't enough context in the world to enable an AI to respond to such emails with clarity and historical knowledge. People's value comes from their institutional knowledge, shared corporate experiences, and personal background, not from genericized AI responses.
It's kinda sad to come to a place where you begin to think the Unabomber was right. (Though of course, his methods were wrong.)
edit:
I've been hit by some downvotes. I've noticed that some portion of HN is exceptionally pro-AI, but I suspect it may instead have something to do with my Unabomber comment.
For context, at least from what I gathered from his manifesto, there was a deep distrust of machines and of how they were interfering with human communication and happiness.
Fast forward to social media, mobile phones, AI, and more... and he seems to have been on to something.
From Wikipedia:
"He wrote that technology has had a destabilizing effect on society, has made life unfulfilling, and has caused widespread psychological suffering."
Again, clearly his methods were wrong. Yet I see the degradation of US politics into the most simplistic, team-centric, childish arguments... all best able to spread hate, anger, and rage on social media. I see people, especially youth, deeply unhappy from their exposure to social media. I see people spending more time with an electronic box in their hand than with fellow humans.
We always say that we should approach new technology with open eyes, but we seldom mean this about examining negatives. And as a society we've ignored the warnings and the negatives with social media and with phones, and we are absolutely not better off as a result.
So perhaps we should use those lessons, and try to ensure that AI is a plus, not a minus in this new world?
For me, replacing intimate human communication with AI, replacing one-on-one conversations with the humans we work with, play with, are friends with, with AI? That's sad. So very, very, very sad.
Once, many years ago, a friend of mine was upset. A conservative politician was going door to door, trying to get elected. This politician was railing against the fact that there was a park down the street, paid for by the city. He was upset that taxes paid for it, and that the city paid to keep it up.
Sure, this was true, but afterwards my friend said to me, "We're trying to have a society here!"
And I think that's part of what bugs me about AI. We're trying to have a society here! And part of that is communicating with each other.