What amazes me is that this article fails to mention that the slippery slope is already underway. Multiple states have some variation on the "App Store Accountability Act," which requires you to present ID just to download apps, including Texas (SB 2420) and Louisiana (HB 570), with several more in the pipeline. Then there are the various acts that try to regulate social media by demanding you present ID to be able to post (or else gimp your site to fit one of their carve-outs, which conveniently ensure that users cannot engage in any kind of public posting towards one another), such as Texas's HB 186 (from 2024).
Put simply: You've all been asleep at the switch while the US-side Internet has been systematically under attack by pornscolds trying to implement Chinese-style censorship, this article's author included.
We haven’t been asleep. We’ve been saying no at every turn. But they’re using propaganda, and they will continue until something sticks. It’s an endless fight and we are losing, despite our efforts.
What gets me is that people keep voting in favor of this stuff?
It's clear that the HN crowd is a bit of an echo chamber. Somehow, these messages of warning are not getting to people who need to hear them in order to stop voting against their own interests.
Well, now that I think about it, people vote against their own interests on all kinds of issues. So I suppose this one doesn't have to be any different?
I'll be honest. Many of us in the US are tired of fighting with people that vote against their own interests time and again. It's like having a family member that keeps letting a burglar in the back door, over and over again. At some point you start thinking it might be easier to just find somewhere else to live...
The way people feel and then vote is a result of the information they are given, which is selected in order to produce the intended result, or "the engineering of consent," as Bernays put it.
People don't vote for this. Politicians do. People need to be made aware of what their politicians are up to in their name and encouraged to punish politicians for their acts of treachery at the polls, including the credible threat of a recall election. That, however, would require organizations that aren't a clownshow. I've been singularly unimpressed by the actions of NetChoice (who recently got slammed for handing in "expert testimony" that was clearly written by ChatGPT) and the Free Speech Coalition (who clearly are the porn lobby and invariably approach every problem in a way guaranteed to lose in court). The EFF seems content to wag their finger while doing nothing substantial. The FSF is utterly silent in the face of app store regulations that, if you read them carefully, would ensnare them and anyone else distributing software online, to say nothing of making it impossible to manufacture a PC or operating system that doesn't implement these child-detection controls.
TL;DR We appear to be seriously lacking in leadership and organization.
If we're honest, this is exactly what it will take for the bots to evaporate into the void. I have always been against 'having a license to internet', but I am very interested in seeing what will happen to all the bots if it does (temporarily) succeed. No bot should be able to pass an ID check, and if one does, it's pure legal fuel to sue the system.
That's beyond naive. Nothing will happen to the bots, because they're not human individuals from first-world nations. Bot farm operators can print IDs, post from unrestricted locations, or go through bulk-posting APIs offered behind closed doors. Social media operators have less trouble with cooperative spam than with organically trending posts, because the content is less original.
This advocates a:
( ) technical
(*) legislative
( ) market-based
( ) vigilante
...solution to control explicit or controversial content online. It won’t work.
Why it fails:
(*) Can be bypassed with basic tools (VPNs, mirrors, alt accounts)
(*) Users and creators won’t tolerate the restrictions
(*) Requires unrealistic global cooperation
(*) Censors legitimate content (art, education, etc.)
(*) Lawmakers don’t understand the tech they’re regulating
(*) Platforms may quietly ignore or undermine it
(*) Trolls and bots will weaponize it
What you didn’t consider:
(*) Jurisdiction conflicts across countries
(*) Encrypted and decentralized content sharing
(*) Abuse of takedown/reporting systems
(*) Privacy and free expression concerns
(*) Content filters are always one step behind
And finally:
(*) Sorry, it just doesn’t work.
( ) This idea causes more harm than good.
( ) You're solving a symptom, not the problem.
I’d say all three in the “and finally” category are relevant. This does cause more harm than good, because it is more likely to be weaponized by the government when they start to carve out more exceptions to free speech. It is also solving a symptom (where kids go when they are curious about adult topics) rather than the problem (parenting… not providing a safe space for your kids to ask those questions).
I agree; there are already enough leaked IDs out there to feed into an AI and generate any ID you wish with any name on it. ID verification via a picture is dead on arrival.
I feel like this narrative is counterproductive. Sure, it is true that some people advocating for this have ulterior motives, but it certainly isn't true of all of them. Telling the people with legitimate concerns that they don't actually care about children is going to push them into the camp of the people who want to take advantage of their concern. In order to actually prevent the kind of damage that these censorship systems can inflict, there probably needs to be an actual discussion about the problem these systems are ostensibly designed to address.
People have to remember this is a political issue and politics is about coalition building. Insulting large swaths of the general population as being nefarious liars isn't a great way to build coalitions.
The narrative is necessary because governments advocating for the safety of children are almost always doing so with an ulterior motive, and because people with legitimate concerns are often useful idiots for what turns out to be just another way to ratchet up surveillance and censorship and harass undesirables riding another fever wave of social panic and Christian moralizing.
And large swaths of the general population are nefarious liars who don't actually care about children. If building coalitions requires ignoring that fact, then we're not going to build coalitions. The real world isn't HN, where you're expected to assume good faith at all times, regardless of evidence to the contrary.
> Insulting large swaths of the general population as being nefarious liars isn't a great way to build coalitions.
This seems to be working okay for the current administration? Among the issues Trump ran on was demonizing a large swath of the population and vowing some nebulous form of revenge.
> Insulting large swaths of the general population as being nefarious liars isn't a great way to build coalitions.
On the contrary! Look at QAnon. They've essentially taken over the Republican party. They not only insulted the bulk of the population; they want them dead. It worked fine.
It is more than some. When Project 2025 talks about these laws, it leads with keeping LGBT content away from children. It barely talks about actual porn.
We have had age restrictions on physical pornography (magazines, DVD/VHS) and XXX movie theaters for a century, and they didn't threaten the book publishing industry or Hollywood.
As an adult I can’t remember ever having to put my face into a permanent database and be tracked every time I browsed in a bookstore. So this is not a helpful analogy.
The internet includes porn, but is not limited to porn. Likewise, the internet allows the consumption of content, but also the production of content. This is where your analogy breaks down: the end user is both consumer and producer. Take this HN comment, for example.
Internet censorship in Russia started around 10 years ago under the pretense of "protecting children". The initial law was kinda funny and relatively innocent: it banned information about drugs and suicide. Because if this information remains freely available, you know, children would get high and kill themselves.
Today the internet in Russia is utterly broken. A VPN or a DPI bypass tool isn't something nice to have — it's an absolute necessity, especially if you communicate with people in other countries.
There are things that are already illegal on the internet. Pirated media is generally illegal, which is meant to protect corporate profits. Most people are okay with such restrictions. But when it's actually about protecting children and forcing these shady companies to enforce their terms of service, it's censorship and control?
The ironic thing is many people who decry forcing these companies to verify age, would be fine with such age verification restrictions on Insta or TikTok.
They are going to start restricting VPN usage as well [1], and I can't even click on a Reddit profile without getting an age verification pop-up because the user once commented on a dating advice subreddit. At the same time, I can go on Google Images, type "porn", and turn the filter off without any problems.
The smart children will figure it out, if they haven't already, and then go and tell the less savvy children how to do it properly, without an app or an account. Then we're back to the basics of domains and ports instead of apps and accounts.
Stupid question: is there a reason they did not mandate every ISP in the UK to allow the blocking of porn as an opt-in feature? (and make it the default for mobile subscribers under 18yo)
It only hurts real users.
The only time politicians ever see children is when they can use them as a soapbox to push an agenda.
We will never get our privacy back once this is widespread. Laws like this are too easy to pass.
That’s not true. Sometimes they see kids for sex. I mean, isn’t this what Epstein is all about?
[1] https://www.bbc.co.uk/news/articles/cn438z3ejxyo
Take a look at the Australian age verification law. Mainstream websites aren't even collateral damage, they are explicitly the target.