So do it. Forums haven't gone away, you just stopped going to them. Search for your special interest followed by "Powered by phpbb" (or Invision Community, or your preferred software) and you'll find plenty of surprisingly active communities out there.
I'm probably just jaded, as most of the forums I visited back in the day became ghost towns during the 2010s. I should make more of an effort here.
Any platform that wants to resist bots needs to:

- tie personas to real or expensive identities
- force people to flag AI content as AI
- let readers filter for content not marked as AI
- be absolutely ruthless in permabanning anyone who posts unmarked AI content: one strike and you are dead forever
The issue then becomes that marking someone as “posts unmarked AI content” becomes a weapon. I have no idea how to handle that.
Group sizes were smaller and as such easier to moderate. There were plenty of similar-interest forums, which meant that even if you pissed off some mods, there were always other forums. Invite-only groups that recruited from larger forums (or even trusted-members-only sections on the same forum) were good at filtering out low-value posters.
There were bots, but they were not as big of a problem. Message amplification was smaller, and ban evasion was probably harder.
There are quite a few newspapers that are political and receive subsidies, but overall I think our system works quite well at providing high-quality local reporting at affordable prices.
That being said, if I had a screen that could reasonably pass as a framed image on the wall, I would love a version of this where I could have a well-known picture on it that would be primarily static but sometimes have subtle movements or shift about a bit, as a fun novelty to trip up guests. The typical blinking and repositioning. Like Hopper's Nighthawks, but with the clerk serving a drink or two, the couple lighting a cigarette, or someone walking past the diner.
I think there would be a lot less pushback against such policing efforts if governments had done a better job of reining in tracking on the internet from the start. "Porn websites should check your age" is not that radical, but in a world where it doesn't feel unrealistic that much of the information about you is correlated and processed in ways that are not in your personal best interest, it becomes another loop in the proverbial noose that can be used to hang us all.
It lets you set up fully or partially automated import pipelines with a nice web UI to manage any manual steps needed.
Importing is usually as simple as dropping a zip in a folder and the rest is managed automatically.
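As a rough sketch of what that kind of drop-folder automation looks like (the paths and extraction behavior here are hypothetical, not the tool's actual implementation):

```python
import time
import zipfile
from pathlib import Path

# Hypothetical locations; a real pipeline would make these configurable.
INBOX = Path("inbox")
LIBRARY = Path("library")

def import_zip(archive: Path) -> Path:
    """Extract one dropped archive into its own folder in the library."""
    dest = LIBRARY / archive.stem
    dest.mkdir(parents=True, exist_ok=True)
    with zipfile.ZipFile(archive) as zf:
        zf.extractall(dest)
    archive.unlink()  # consume the archive once it has been imported
    return dest

def watch(poll_seconds: float = 5.0) -> None:
    """Poll the inbox forever and import any new zips automatically."""
    INBOX.mkdir(exist_ok=True)
    LIBRARY.mkdir(exist_ok=True)
    while True:
        for archive in sorted(INBOX.glob("*.zip")):
            import_zip(archive)
        time.sleep(poll_seconds)
```

The manual steps the web UI manages would slot in between extraction and final placement; this only covers the fully automatic happy path.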
Chat control does not allow the government to read anyone's messages for any reason, so no, that is not true.
> Wiretapping was not retroactive. This system will create records that can be stored for a long time for very cheap.
But storing these messages is illegal.
I wasn't very clear: my original post always included an assumption that false positives were involved and that the messages being stored were a result of that, not that all messages are stored at all times.
The images and links that are scanned and deemed potentially problematic will be stored for up to 6 months, or until they are deemed unproblematic. There is still a potential 6-month paper trail here, and in politically turbulent times that paper trail could still be damaging retroactively, even if the report contains non-CSAM.
But there are a lot of people who are not experts in the matter (even among the politicians deciding it), and they will discard reasoning that starts with 'it's not about catching criminals', because in many cases that is where the idea originates. Law enforcement has the problem that they can't really do (analog) wiretaps anymore in the digital age and they want to remedy that. However, everybody needs to realize that 'restoring the ability to wiretap' has side effects which are far more dangerous than the loss of the wiretap ability.
Wiretapping requires probable cause and a court order; chat control does not. It will report thousands of people daily, and no one will be blamed or punished for false reports that turn out to lack probable cause. Wiretapping was a reactive tool in the police's arsenal; it was not proactive like this is supposed to be.
Wiretapping requires/required a significant manpower investment to surveil a single potential criminal, which rightfully forced the police to prioritize their resources. Chat control is automated and will enable the same number of police to surveil far more people.
Wiretapping was not retroactive. This system will create records that can be stored for a long time for very cheap.
This is not restoring wiretapping, this is supercharging wiretapping.
No. You're still not quite internalizing that the California regulation does not mandate any verification or enforcement or protection of the accuracy of the age bracket data. It mandates that the question be asked, and the answer taken as-is.
Which means that many of the concerns about implementation disappear, because the setting really does not need to be anything more than a simple flag that apps can check.
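To illustrate how lightweight that is, here's a minimal sketch of an app consuming such a flag. The bracket names and function signatures are entirely hypothetical, not the regulation's actual categories or any OS vendor's real API:

```python
from enum import Enum

class AgeBracket(Enum):
    # Hypothetical brackets; the real regulation defines its own categories.
    UNDER_13 = "under_13"
    TEEN_13_15 = "13_15"
    TEEN_16_17 = "16_17"
    ADULT = "18_plus"

def os_age_signal(user_settings: dict) -> AgeBracket:
    """Return the user-reported bracket as-is: no verification, no second-guessing."""
    return AgeBracket(user_settings["age_bracket"])

def should_enable_adult_features(user_settings: dict) -> bool:
    """The app just checks the flag; the value is only the user's own statement."""
    return os_age_signal(user_settings) is AgeBracket.ADULT
```

The key point the sketch makes is that nothing in the flow requires ID checks, face scans, or server-side verification; the signal is a self-reported value passed through unchanged.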
> Will Arduinos and similar devices also need to be age gated?
Only to the extent that they are general purpose computing devices, have an operating system, are capable of downloading apps, and are actually used by children (since the enforcement mechanism requires a child to be affected by the non-compliance). And if an app fails to obtain age information but also doesn't do anything that is legally problematic for a user that is a child, then it's hard to argue that the app's ignorance affected the child.
> Also, this doesn’t mean age _verification_ will simply go away.
It will in California, until the law gets repealed or amended. Apps won't be allowed to ask for further age-related information or second-guess the user-reported age information, except when the app has clear and convincing information that the reported age is inaccurate.
That was my read of this as well. OS developers seem to not necessarily need to make any effort here: ask for an age as a number at account creation and let the user change it as they please at any time.
This might be a dumb question, but what actually constitutes an "affected child for each intentional violation"? Violation of what? The text specifies that "A developer shall request a signal with respect to a particular user from an operating system provider or a covered application store when the application is downloaded and launched." Am I being negligent just for not checking the age, even if the application is unequivocally ok for all ages? And are children affected by my negligence in any way even though no one was hurt?