I might be the only one here in favor of this, and wanting to see a federal rollout.
It is not reasonable to expect parents to spontaneously agree on a strategy for keeping kids off social media, and that kind of coordination is what it would take, because the kids and the social media companies have more than enough time to coordinate workarounds. Have the law put the social media companies on the parents' side, or these kids may never get the chance to develop into healthy adults themselves.
But the only way to do this is to require ID checks, effectively regulating and destroying the anonymous nature of the internet (and probably unconstitutional under the First Amendment, to boot.)
It's the same problem with requiring age verification for porn. It's not that anyone wants kids to have easy access to this stuff, but that any of these laws will either be (a) unenforceable and useless, or (b) draconian and privacy-destroying.
The government doesn't get to know or regulate the websites I'm visiting, nor should it. And "protecting the children" isn't a valid reason to remove constitutional rights from adults.
(And if it is, let's start talking about gun ownership first...)
> But the only way to do this is to require ID checks, effectively regulating and destroying the anonymous nature of the internet
That seems intuitive, but it's not actually true. I suggest looking up zero-knowledge proofs.
Using modern cryptography, it is easy to send a machine-generated proof to your social media provider that your government-provided ID says your age is ≥ 16, without revealing anything else about you to the service provider (not even your age), and without having to communicate with the government either.
The government doesn't learn which web sites you visit, and the web sites don't learn anything about you other than you are certified to be age ≥ 16. The proofs are unique to each site, so web sites can't use them to collude with each other.
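To make the unlinkability claim concrete, here is a toy Python sketch. This is NOT real zero-knowledge cryptography (a real deployment would use an actual ZK credential scheme so sites can verify the proof without ever contacting the issuer); it only illustrates the two properties described above: the issuer attests a single bit ("age ≥ 16"), and tokens derived for different sites cannot be compared to track a user across sites. All names here are invented for illustration.

```python
import hashlib
import hmac
import secrets

def issue_credential(issuer_key: bytes, over_16: bool) -> bytes:
    # The issuer attests only the single bit the sites need to know.
    claim = b"age>=16" if over_16 else b"age<16"
    return hmac.new(issuer_key, claim, hashlib.sha256).digest()

def derive_site_token(credential: bytes, site: str) -> str:
    # Binding the token to the site name makes tokens unlinkable across sites.
    return hmac.new(credential, site.encode(), hashlib.sha256).hexdigest()

issuer_key = secrets.token_bytes(32)
cred = issue_credential(issuer_key, over_16=True)

token_a = derive_site_token(cred, "site-a.example")
token_b = derive_site_token(cred, "site-b.example")
assert token_a != token_b  # sites can't link users by comparing tokens
assert token_a == derive_site_token(cred, "site-a.example")  # stable per site
```

In this toy version the site would still need the issuer's help to check a token; the point of real ZK proofs is that verification needs neither the issuer nor any identifying attribute.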
That kind of "smart ID" doesn't have to be with the government, although that's often a natural starting point for ID information. There are methods which do the same based on a consensus of people and entities that know you, for example. That might be better from a human rights perspective, given how many people do not have citizenship rights.
> (and probably unconstitutional under the First Amendment, to boot.)
If it would be unconstitutional to require identity-revealing or age-revealing ID checks for social media, that's all the more reason to investigate modern technical solutions we have to those problems.
Yes, this isn't the right solution. The power needs to be given to the users.
A better solution is more robust device management, with control given to the device owner (read: the parent). The missing legislative piece is mandating that social media companies need to respond differently when the user agent tells them what to send.
I should be able to take my daughter's phone (which I own), set an option somewhere that indicates "this user is a minor," and with every HTTP request it makes it sets e.g. an OMIT_ADULT_CONTENT header. Site owners simply respond differently when they see this.
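A minimal server-side sketch of that idea. The header name and the content flags are my invention, not any real standard:

```python
# Hypothetical: a site filters its response when a parent-managed device
# sends an (invented) OMIT_ADULT_CONTENT request header.

def select_content(headers: dict, items: list) -> list:
    minor = headers.get("OMIT_ADULT_CONTENT", "0") == "1"
    # Drop adult-flagged items only when the device declared a minor.
    return [it for it in items if not (minor and it.get("adult"))]

catalog = [
    {"title": "Homework help", "adult": False},
    {"title": "Casino ads", "adult": True},
]
assert select_content({"OMIT_ADULT_CONTENT": "1"}, catalog) == [catalog[0]]
assert select_content({}, catalog) == catalog
```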
> But the only way to do this is to require ID checks
Not necessarily, consider the counterexample of devices with parental-controls which--when locked--will always send a "this person is a minor" header. (Or "this person hits the following jurisdictional age-categories", or some blend of enough detail to be internationally useful and little-enough to be reasonably private and not-insane to obey.)
That would mostly put control into the hands of parents, at the expense of sites needing some kind of code-library that can spit out a "block or not" result.
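Such a "block or not" library could be as small as a lookup of the device-sent age category against per-jurisdiction minimums. All header values and numbers below are invented for illustration:

```python
# Hypothetical per-jurisdiction minimum ages; a real table would come from law.
MIN_AGE = {"US-FL": 16, "DE": 13}

# Lower bound of each coarse age-category header a locked device might send.
CATEGORY_FLOOR = {"under-13": 0, "13-15": 13, "16-plus": 16}

def should_block(age_category: str, jurisdiction: str) -> bool:
    # Block if the user could be younger than the jurisdiction's minimum.
    return CATEGORY_FLOOR[age_category] < MIN_AGE.get(jurisdiction, 0)

assert should_block("13-15", "US-FL") is True    # could be under 16
assert should_block("16-plus", "US-FL") is False
assert should_block("13-15", "DE") is False      # assumed DE minimum is 13
```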
> It's the same problem with requiring age verification for porn. It's not that anyone wants kids to have easy access to this stuff,
Depends on the argument being made, on the ideology of the audience, on the current norms, etc.
I had an exchange here on HN some time back (the topic was schools removing certain books from their libraries). Very many people in support of those books, which dealt with gender identity and sexual orientation, also supported outright porn for kids of all ages (the example I used was pornhub), as long as those books with pictures (not photos) of male-male sexual intercourse could stay in the library.
Right now, if you make the argument "There are some things kids below $AGE shouldn't be exposed to", you'll still get some (vocal) minority disagreeing because:
1. They feel that what $AGE kids get exposed to should be out of the parent's hands ("Should we allow parents to hide evolution from their children?", "Should we allow parents to hide transgenderism from their children?")
2. They know that, especially with young children, they will lose their chance to imprint a norm on the child if they are prevented from distributing specific material to young children.
In the case of sex and sexual education, there is currently a huge push for specific thoughts to be normalised, and unfortunately if that means graphic sexual depictions are shown to children, so be it.
The majority is rarely so vocal about things they consider "common sense", like no access to pornhub for 10 year olds.
> effectively regulating and destroying the anonymous nature of the internet
Technically you can make that work without issues (you only need to prove your age, not your identity, something which can reasonably be achieved without leaking your identity).
There are just two practical issues:
- companies, the government, and state agencies (at least US police & spy agencies) will try to undermine any effort to create a reasonably anonymous system
- it only works technically if a "reasonable degree" of proof is "good enough", i.e. it must be fine that a hacker can create an (illegal?) tool with which a child could pretend to be 16+, e.g. by proxying the age check to a hacked device of an adult. Heck, it should be fine if a check can be tricked by using a parent's passport or phone. I mean, it's a 16+ check; there really isn't much of a reason why it isn't okay to have a system which is only "good enough". But lawmakers will try nonsense.
Interestingly, this is more of a problem for the US than for some other states AFAIK, because 1) you can't expect everyone 18+ to have an ID, or everyone 16+ to be able to easily get one (a bunch of countries have ID-ownership (not carrying) requirements without it being a privacy issue), and 2) terrible consumer protection makes it practically nearly impossible to create a privacy-preserving system even if government and state agencies do not meddle.
Similarly, if it weren't for the ID issue in the US, this probably wouldn't touch the First Amendment, as in the end it protects less than a lot of people believe it does.
Lately I'm repeatedly reminded of how in Ecuador citizens, when interviewed during a protest, see it as a normal thing to state their name as well as their personal ID number into the camera while speaking about their position on the protest. They stand behind what they are saying without hiding.
For about half a year now I've noticed the German Twitter section getting sunk in hate posts: people disrespecting each other, ranting about politicians or ways of thinking, but being really hateful. It's horrible. I've adblocked the "Trending" section away, because it's the door to this horrible place where people don't have anything good to share anymore, only disrespect and hate.
This made me think that what we really need, at least here in Germany, is a Twitter alternative where people register using their eID and can only post under their real name. Have something mean to say? Say it, but attach your name to it.
This anonymity in social media is really harming German society, at least as soon as politics are involved.
I don't know exactly how it is in the US but apparently it isn't as bad as here, at least judging from the trending topics in the US and skimming through the posts.
How many social media users who create accounts and "sign in" are "anonymous"? How would targeted advertising work if the website did not "know" their ages and other demographic information about them? Are the social media companies lying to advertisers by telling them they can target persons in a certain age bracket?
> But the only way to do this is to require ID checks
COPPA has entered the building. If you're under 13 and a platform finds out, they'll usually ban you until you prove that you're not under 13 (via ID) or can provide signed forms from your parent / legal guardian.
I've seen dozens of people if not more over the years banned from various platforms over this. We're talking Reddit, Facebook, Discord and so on.
I get what you're saying, but it kind of is a thing already, all one has to do is raise the age limit from 13 to say... 16 and voila.
Aren't the really problematic social networks the ones where you've lost your privacy and anonymity long ago and are being tracked and mined like crazy?
> But the only way to do this is to require ID checks,
No, it isn't. Check out Yivi [1]. Its fundamental premise is to not reveal your attributes. It's based on academic work into (among other things) attribute-based encryption. The professor then took this a step further and spun off a non-profit foundation to expand and govern this idea.
>It's not that anyone wants kids to have easy access to this stuff, but that any of these laws will either be (a) unenforceable and useless, or (b) draconian and privacy-destroying.
Surely not.
Imagine: government sells proof of age cards. They contain your name, and a unique identifier.
Each time you sign up to a service, you enter your name and ID. The service can verify with the government that your age is what it needs to be for the service. There are laws that state that you can't store that ID or use it for any other purpose.
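A toy sketch of that flow. The registry contents and lookup function are made up; note that in this design the government still learns which service asked about you, which is exactly the privacy trade-off the zero-knowledge approaches elsewhere in this thread try to avoid:

```python
# Stand-in for the government's lookup service: it answers only a yes/no
# question ("is this cardholder at least N years old?"), never the actual age.
AGE_REGISTRY = {("Alice Example", "A123"): 17, ("Bob Example", "B456"): 14}

def gov_verify(name: str, card_id: str, min_age: int) -> bool:
    age = AGE_REGISTRY.get((name, card_id))
    return age is not None and age >= min_age

assert gov_verify("Alice Example", "A123", 16) is True
assert gov_verify("Bob Example", "B456", 16) is False
assert gov_verify("Nobody", "X000", 16) is False  # unknown card fails closed
```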
> effectively regulating and destroying the anonymous nature of the internet
The bulk of the internet has not been anonymous for a while. Facebook already requires an ID, Google tracks you across its services and the OS, reddit is tightening controls on bots, and amazon requires a phone number.
Think about it. What portion of your activities day to day on the Internet are anonymous? Now try to do them anonymously. It isn't practical/possible anymore and the internet of yesteryear is gone.
I propose the Leisure Suit Larry method. Just make users answer some outdated trivia questions that only olds will know when they sign up for an account.
It is pretty hard for a kid to get access to restricted content on YouTube with an account where their age is stated. Yes, kids can open the browser in private mode… but they rarely do, because it is a friction. If all social media were moved to the adult category, the current rules in operating systems would do a good job.
I am not sure about 16 years (I would support it as a father)… but up to 13-14 feels appropriate; there is PG-13, after all.
Ah so be it. I don’t care much for the things that come from anonymous culture. I want gatekeepers. This tyranny of the stupid online is pretty tiresome.
"Unconstitutional" arguments only go so far. I am not American (I'm a proud Australian), so I can easily see the incredibly obvious and ridiculous destruction that "freedom" in your country entails.
Anonymity services can still exist without fostering an environment to addict children and young adults to social media or a device... and without your precious "rights" being taken away.
People can post all kinds of illegal things online, and no one is suggesting that content should be approved before it can be visible on the Internet. It doesn't have to be strictly enforced to act as a deterrent. How effective a deterrent it would be remains to be seen.
The definition of "social media" in this bill actually seems to exempt anonymous social networks since it requires the site "Allows an account holder to interact with or track other account holders".
The internet has not been anonymous in fact or theory for decades now, and if you think the government can't get your complete browsing history on a whim I'm guessing you haven't paid any attention to the news about NSA buying user data bundles from online brokers. That said, "muh freedoms" is hardly a quality argument in the face of the widely documented pervasive harms caused to children by exposure to social media. The logical extreme of your position would be to declare smoking in public a form of self-expression and then demand age limits be removed for the sale of tobacco products because First Amendment. :P
>But the only way to do this is to require ID checks, effectively regulating and destroying the anonymous nature of the internet
Ban portable electronics for children. Demand that law enforcement intervene any time it's spotted in the wild. If you still insist that children be allowed phones, dumb flip phones for them.
It could be done if there was the will to do it, it just won't be done.
You know, I'm not really sure that requiring IDs for access to porn / social media is a terrible idea. Sure it's been anonymous and free since the advent of the internet, but perhaps it's time to change that. After all, we don't allow a kid into a brothel or allow them to engage in prostitution (for good reasons), and porn is equally destructive.
But with the topic at hand being social media, I think a lot of the same issues and solutions apply. It's harmful to allow kids to interact with anyone and everyone at any given time. Boundaries are healthy.
Aaaaand, finally, there's much less destruction of human livelihood by guns than by both of the aforementioned topics, if we measure "destruction" by "living a life significantly impoverished in emotional and mental wellbeing". I doubt we could even get hard numbers on the marriages destroyed by pornography, which yield broken households, which yield countless emotional and mental problems.
So, no, guns aren't something we should discuss first. Also, guns have utility including but not limited to defending yourself and your family. Porn has absolutely zero utility, and social media is pretty damn close, but not zero utility.
I'm in favor of kids not using social media, but not of the government forcing this on people nor spinning up whatever absurd regulatory regime is required. And the chance of actually enforcing it is zero anyway. It's no more realistic to expect this to work than to expect all parents to do it as you say. It's just wasted money plus personal intrusion that won't achieve anything.
There is a societal problem here that goes beyond parenting. The peer pressure kids feel from being left out and ostracized as the only ones not on the socials is something a teen is definitely going to rebel against their parents over. It's part of being a teen. I'm guessing the other parents would even put pressure on the parents denying the social access.
To me, the only way out of this is trading one nightmare for another by giving the gov't the decision of allowing/denying access. Human nature is not a simple thing to regulate, since the desire for that regulation is part of human nature.
It seems like you are in favor of something that requires coordination, but don't believe in coordination. Is there a different way you think this could be achieved?
Is there an alternative? Self-control - as we have now - brought us here. If the government shouldn't step in then the only other option left (only I can see) is magic. And we have a bad record with magic.
Several governments have already effectively banned sites like Pornhub by creating regimes where people have to mail their ID to a central clearinghouse (which creates a huge chilling effect.) The article talks about “reasonable age verification measures” and so saying it’s unenforceable seems a little bit premature. Also, you can bet those measures won’t be in any way reasonable once the Florida legislature gets through with them.
I used to want no gov't intrusion on this. Then I understood how there are teams of PhDs tweaking feeds to maximize addiction at each social network. I think there could even be limits, or some sort of tax on gen pop.
I'm in favor of kids not using porn, but not of the government forcing this on people nor spinning up whatever absurd regulatory regime is required. And the chance of actually enforcing it is zero anyway. It's no more realistic to expect this to work than to expect all parents to do it as you say. It's just wasted money plus personal intrusion that won't achieve anything.
We don't even have to speculate; isn't it already the case for <13-year-olds? Or is that just Europe? Anyway, yeah, of course they're still on it. Expect less compliance and harder enforcement the older they are, not more/easier.
When I was first getting online, the expectation was that you at least had to be bright enough to lie about your age. Now I have to occasionally prune my timeline after it fills up with "literally a minor." Even an annoyance tax might have some positive effect. Scare the pastel death-threats back into their hole...
Social media's existence is predicated on their algorithms being good at profiling you. Facebook's already got some level of ID verification for names, where they'll occasionally require people to submit IDs. No reason that similar couldn't be applied to age if society agreed it was worthwhile.
I'm in favor of kids not using social media, but not of the government forcing this on people nor spinning up whatever absurd regulatory regime is required.
People said the same thing about age restrictions for smoking, alcohol, movies, and on and on and on.
It's not some unsolvable new problem just because it's the ad-tech industry.
I can't agree. This teaches kids that the government is the answer to everything. this should 100% be the responsibility and decision of parents. Kids are different, and these one-size-fits-all authoritarian tactics that have become a signature of the current GOP Floridian government are just the beginning of the totalitarian christofascist laws that they want to implement. Before you ask, I am a parent, and my kid's devices all have this crap blocked and likely will remain that way until he's at least 15, give or take a year depending on what I determine when he gets to that age. He knows that there are severe ramifications if he tries to work around my decision, and will lose many, many privileges if such a thing happens.
The libertarian equivalent is the model we are currently running, and it hasn't worked. It's psychologically addictive and harmful, and has parallels with smoking in a way, even if the evidence for its harm is less robust.
Do you think simply labelling it as bad is sufficient? Parents have no idea.
I also believe that this is a Big Deal™ that we need to take seriously as a nation. I have yet to see any HN commentator offer a robust pro-social media argument that carries any weight in my opinion. The most common "they'll be isolated from their peers" argument seems pretty superficial and can easily be worked around with even a tiny amount of effort on the parents' part.
As an added bonus, this latest legislation removes the issue of "everyone is doing it". I mean, sure, a lot still will be—but then it's illegal and you get to have an entirely separate conversation with your kid. :)
> The most common "they'll be isolated from their peers" argument seems pretty superficial and can easily be worked around with even a tiny amount of efforts on the parents' part.
This is so incorrect it makes the flat earth theory look good.
I have two teens and have yet to see the negative effects of social media for them or any of their peers. Not to say it doesn't exist, but I sincerely doubt it's as awful as the doomsayers think. My personal observation from being raised in the 80s is that kids were far more awful to each other then than now.
Every generation has this freakout about something or another. I expect modern kids will end up better at handling social media than their parents are.
I am an adult and I wish someone would take social media away from me. Honestly, I think social media has done more harm than good and I wish it would just cease to exist.
However, especially in Florida, social media may be the only way for some teens to escape political and religious lunacy and I fear for them. I think it's not wise to applaud them taking away means of communication to the "outside", in the context of legislation trends and events there.
IMHO the general idea isn't terrible; the implementation is subpar. But hey, that's why it's good that it's not yet US-wide, as it means there is time to make improvements.
- I'm not the biggest fan of hard cutoff
- addictive dark patterns which cause compulsive use should be generally banned or age-restricted no matter where they are used. Honestly, just ban most dark patterns; they are intentional, always-malicious consumer deception, not that far away from outright committing fraud. (And age-restrict some less dark but still problematic patterns.)
- I think this will likely make all MMORPGs (and Roblox, lol) and similar 16+, and I'm quite split about that. I have seen people between 14-18 get addicted to them and mess up their education path. But I have also seen cases of people who might not be alive today if they hadn't found a refuge and companions in some MMORPG.
- I guess if it can make platforms like YT, Facebook, Instagram, Snapchat etc. implement a "teen" mode with fewer dark patterns and less tracking, it would be good.
- The balance between proving your age, making things available, and keeping privacy is VERY tricky (especially in the US), and companies, the government, and spy agencies will try to abuse the new requirements for age verification to spy more reliably on everyone 16+.
- It's interesting how this affects messengers. Many have fewer dark patterns, and some do not track users, or can easily decide not to track children. They aren't social networks per se, but most have some social-network-like features. Even those which do not try to create compulsive use might still end up with it, as long as there is "live" chatting.
>addictive dark patterns which cause compulsive use should be generally banned or age-restricted no matter where they are used, honestly just ban most dark patterns, they are intentional, always-malicious consumer deception not that far away from outright committing fraud.
How would you write a law that accomplishes your goal?
I am against government regulation of websites and classification of websites. If we allow government to do this, it will be politicized at some point.
We need to find a better way. For example (just a quick idea): social websites run and protected on a school-by-school basis. This way, they can be regulated and controlled.
In other words, government should regulate what they already have control over, not impose new control measures over things they don't.
> I might be the only one here in favor of this, and wanting to see a federal rollout.
I'm not American, and I think it's perfectly reasonable to ban kids from the internet just by applying the logic used for film classification. Even the thumbnails for (some) YouTube content can go beyond what I'd expect for a "suitable for all audiences" advert.
This isn't an agreement with the specific logic used for film classification: I also find it really weird how 20th-century media classification treated murder as a perfectly acceptable subject in kids' shows, while the mere existence of functioning nipples was treated as a sign of the end times (non-functional nipples, i.e. those on men, are apparently fine).
Also, I find it hilariously ironic that Florida also passed the "Stop Social Media Censorship Act". No self-awareness at all.
Film classification is also dumb. In Australia Margaret Pomeranz used to run banned film viewing sessions that ended up in cops wrestling them for dvds.
You and I both know they're doing this for two reasons. To put a heavy burden that will be almost impossible to enforce on tech companies so they can easily punish them for political reasons and to stop teens from learning early how shitty the GOP is.
- Make this an ISP level thing? Somehow? They already know the makeup of a household. If they know a house has kids, something something ToS "You as the parents are liable..." Then maybe repeat those scary RIAA letters but "for good" when someone in that household hits a known adult IP?
- Maybe browsers send an "I'm an adult" flag similar to "Do Not Track," and to turn it on, the user has to enter a not-to-be-shared-with-kids PIN? If the browser and OS can coordinate, OSes would be able to tell the browser if the user is an adult, and skip the PIN entering.
- Force kids to use a list of Congress-approved devices that gate access to the wider Internet? YouTube Kids but for everything. Yes, hacker kids will be able to get by, but this being Hacker News, they'd deserve the fruits of that particular labor.
Just spitballing. Anything obvious I'm missing?
PS- I am neither for nor against the Florida-type legislation as of this comment.
I find myself agreeing with this.
Me 15 years ago would have raged at this. I have kids now and they are pressured to join all sorts of social media platforms. I still don't allow them to have it but I know they take a slight social hit for it.
There is zero positive in giving kids access to social media sites designed to be addictive when they don't have the mental faculties to tell real from not real. Many adults seem to suffer from this as well. Plus, kids don't understand that the internet is forever; there's really no need for an adult looking for a job or running for office to be crippled by a questionable post they made as an edgy teen.
I'm against a lot of government regulation but in this case I am even more against feeding developing kids to an algorithm
Just remove the temptation and pressure all together.
Same here. With the way the internet is nowadays, it's probably best to keep kids off the internet until they are older. One just has to look at what's on places like YouTube 'Kids' to see all the stuff that is not kid-friendly and probably detrimental to their mental health.
One thing I've found interesting with social media and children is that almost every parent I know recognizes the impact social media has on their children, but they willfully ignore it or feel powerless to avoid it. I hear stuff like, "It's impossible for kids to not be on social media. It's required these days.", and "The social consequences will be worse for them if they're not on social media."
The answer to our problems is not less freedom. It should never be less freedom. I'm opposed to any law that restricts freedom of information, no matter the age. I think we need to do better at educating kids about online behavior, and we should hold social media companies accountable for addictive features, but what we absolutely shouldn't do is blame little Jenny and take away her access to groups and social interaction online.
Yes. Little Jenny should also be free to purchase and drink a bottle of whiskey, but only if her parents are sufficiently negligent.
I would've backed your argument up until a few years ago, but the science is coming down pretty hard now showing that social media use is absolutely detrimental to still-developing minds.
I'm in favor of this if the only enforcement actions are against social media companies for being predatory, and not against families for breaking the law and allowing their kids on social media. And it's useful for indicating that social media on the balance is not good for kids.
Imperfect enforcement is a foregone conclusion, and with things like this it's somewhat expected. The same can be said for pornography, drugs, alcohol, and tobacco (remember Joe Camel?), and anything else that would fall under blue laws.
The goal of this is to bring attention to the fact that it's a problem and should be seen as undesirable, like pornography or Joe Camel. The cancellation of Joe didn't prevent kids from getting cigarettes, but it did draw attention to the situation, and there has been a marked decline in youth smoking since the late 90s when the mascot was removed. It's correlative, for sure, but the outcomes are undeniable. The same happened with the DARE program and Schedule I drugs (except for marijuana, iirc).
This discussion can be seen every time the EU decides on some regulation of the tech industry. A lot of people will insist it won't be enforced; then, when we see the first fines, that it won't move the needle; then, when the tech giants do change course a bit... well, the tech bros will always find a reason to argue against doing anything to curb tech.
It’s dumb policy, because Florida GOP. The smart move is to target advertising for kids. If you attack the ability to advertise to the underage audience using mom’s iPad, social media will self police.
Yes. Rather than mandating verification, can we just mandate that there a registry or that websites are legally required to include a particular HTTP header, combined with opt-in infrastructure in place for parents to use?
e.g. You could set up a restricted account on a device with a user birthdate specified. Any requests for websites that return an AGE_REQUIRED header that don’t validate would be rejected.
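A sketch of that device-side check. AGE_REQUIRED is a hypothetical response header, not an existing standard; the assumption is that the OS knows the restricted account's birthdate and drops any response that doesn't validate:

```python
from datetime import date

def request_allowed(response_headers: dict, birthdate: date, today: date) -> bool:
    required = response_headers.get("AGE_REQUIRED")
    if required is None:
        return True  # site advertises no restriction
    # Full years elapsed since the birthdate.
    age = today.year - birthdate.year - (
        (today.month, today.day) < (birthdate.month, birthdate.day)
    )
    return age >= int(required)

assert request_allowed({"AGE_REQUIRED": "16"}, date(2012, 5, 1), date(2024, 3, 1)) is False
assert request_allowed({"AGE_REQUIRED": "16"}, date(2000, 1, 1), date(2024, 3, 1)) is True
assert request_allowed({}, date(2012, 5, 1), date(2024, 3, 1)) is True
```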
we have laws against drinking and smoking / vaping for kids under the age of 16, and those have been working so GREAT that we should get more of these laws in place (federal, preferably) so that our kids can be even healthier adults. moar laws please - we need as much of the federal government in our families and parenting as we can get; I'd suggest 1 federal agent be assigned to each child born, to ensure healthy adulthood
You can be in favor, but in the US it is unconstitutional for a government to broadly restrict speech. It's why each of these age verification + social media laws eventually gets tossed. Legislators know this (or are too dumb to), but it's not their own dollars getting burned in this vote-baiting performance.
No, I definitely agree. I'm a little skeptical of how they'll enforce this, but ultimately I think fewer kids on the internet and social media will be a positive, and I agree that it doesn't seem like parents have managed to figure out how to address this.
I don’t think so. There’s nothing about social media that makes me feel like kids need it. Hell, if we could ban it for adults that would be an unmitigated good.
16 seems too young. Why not tie it to the drinking age? There are way too many people who have gotten better at online manipulation in the past few decades.
I guess for me it depends on what the law considers "social media".
Is something like the bulletin boards we used to have around the late 90s/early 2000s social media? What about chat rooms? Local social web sites for the school or your city? I think a lot of these things can even be beneficial, if I think about my own experiences as a somewhat introverted teenager.
And what about things like Netflix, YouTube, podcasts? They can be just as harmful as TikTok and Instagram. Especially on YouTube you have a lot of similar content.
I've found accounts that claim to be official accounts of children's shows - maybe they even are - and which are full of nonsensical videos, just randomly cut-together action scenes from multiple episodes. It's like crack for children. Of course YouTube doesn't do anything; they want you to pay for YouTube Kids. And the rights holders want you to buy the content, so they leave the poor-quality stuff up.
The thing is, exploitative content is always going to be created as long as there are incentives to do so. You can ban stuff, but it's whack-a-mole, and you are going to kill a lot of interesting stuff as collateral damage. The alternative is much harder: change the incentives so we can keep our cool technology and people are not rewarded for making harmful stuff with it. But that would require economic and political changes, and people don't like to think about those.
> I guess for me it depends on what the law considers "social media".
It's a bill written by the Florida House of Representatives, so there's a definition there. Mind you, it's the Florida House, which has put out some extremely bad laws in its current session -- from "Parental Rights in Education" to the Disney speech retaliation. But given that this is a less ostensibly partisan issue, there are reasons for hope.
The definition seems narrowly tailored. I think that part (d)1d is a questionable choice, since most social media platforms will probably argue that they are not really "designed" to be addictive (for various definitions of "designed" and "addictive"). It appears that specific exemptions were made for YouTube, Craigslist and LinkedIn (without mentioning those companies by name), and algorithmic content selection is part of the definition. This is one of the better versions of this law I could imagine being written by a state legislature, though it isn't without its faults. It's nice to see my home state in the news for something good for once.
I agree that YouTube is a particularly difficult case. But part of the problem comes from using it as a digital pacifier, rather than peer pressure. There's no particular reason why the technology market should produce a free stream of child-appropriate videos. Ad-supported media has its ups and downs, but when the targets of those ads are young children, it's much harder to defend. And parents have more control over the behavior of their 4-year-olds than their 14-year-olds.
Here's the definition:
>(d) "Social media platform:"
>1. Means an online forum, website, or application offered by an entity that does all of the following:
>a. Allows the social media platform to track the activity
of the account holder.
>b. Allows an account holder to upload content or view the
content or activity of other account holders.
>c. Allows an account holder to interact with or track
other account holders.
>d. Utilizes addictive, harmful, or deceptive design
features, or any other feature that is designed to cause an
account holder to have an excessive or compulsive need to use or
engage with the social media platform.
>e. Allows the utilization of information derived from the
social media platform's tracking of the activity of an account
holder to control or target at least part of the content offered
to the account holder.
>2. Does not include an online service, website, or
application where the predominant or exclusive function is:
>a. Electronic mail.
>b. Direct messaging consisting of text, photos, or videos
that are sent between devices by electronic means where messages
are shared between the sender and the recipient only, visible to
the sender and the recipient, and are not posted publicly.
>c. A streaming service that provides only licensed media
in a continuous flow from the service, website, or application
to the end user and does not obtain a license to the media from
a user or account holder by agreement to its terms of service.
>d. News, sports, entertainment, or other content that is
preselected by the provider and not user generated, and any
chat, comment, or interactive functionality that is provided
incidental to, directly related to, or dependent upon provision
of the content.
>e. Online shopping or e-commerce, if the interaction with
other users or account holders is generally limited to the
ability to upload a post and comment on reviews or display lists
or collections of goods for sale or wish lists, or other
functions that are focused on online shopping or e-commerce rather than interaction between users or account holders.
> f. Interactive gaming, virtual gaming, or an online
service, that allows the creation and uploading of content for
the purpose of interactive gaming, edutainment, or associated
entertainment, and the communication related to that content.
> g. Photo editing that has an associated photo hosting
service, if the interaction with other users or account holders
is generally limited to liking or commenting.
> h. A professional creative network for showcasing and
discovering artistic content, if the content is required to be
non-pornographic.
> i. Single-purpose community groups for public safety if
the interaction with other users or account holders is generally
limited to that single purpose and the community group has
guidelines or policies against illegal content.
> j. To provide career development opportunities, including
professional networking, job skills, learning certifications,
and job posting and application services.
> k. Business to business software.
> l. A teleconferencing or videoconferencing service that
allows reception and transmission of audio and video signals for
real time communication.
> m. Shared document collaboration.
> n. Cloud computing services.
> o. To provide access to or interacting with data visualization platforms, libraries, or hubs.
> p. To permit comments on a digital news website, if the
news content is posted only by the provider of the digital news
website.
> q. To provide or obtain technical support for a platform,
product, or service.
> r. Academic, scholarly, or genealogical research where the
majority of the content that is posted or created is posted or
created by the provider of the online service, website, or
application and the ability to chat, comment, or interact with
other users is directly related to the provider's content.
> s. A classified ad service that only permits the sale of
goods and prohibits the solicitation of personal services or
that is used by and under the direction of an educational
entity, including:
The fact that there are well over a dozen exceptions carved out strongly suggests that the definition is anything but narrowly tailored, and that the authors of the bill preferred to add exceptions for everyone who objected rather than rethink their broad definitions.
1a-c will be trivially satisfied by anything that "has user accounts" and "allows users to comment". 1e is clearly meant to cover "algorithmic" recommendations, but it's worded so broadly that a "threads you've commented on" feature would satisfy this prong. 1d is problematic; it can be interpreted so narrowly that nothing applies, or so broadly that everything applies. IANAL, but I think you'd have a decent shot at challenging this prong as unconstitutionally vague.
Discounting 1d, this means that virtually every website in existence qualifies as a social media platform, at least before you start applying the exceptions. Not just Facebook or Twitter, but things like Twitch, Discord, Paradox web forums, Usenet, MMO games, even news sites and Wikipedia are going to qualify as social media platforms.
Actually, given that it's not covered by any of the exceptions, Wikipedia is a social media platform according to Florida, and I guess would therefore be illegal for kids to use. Even more hilariously, Blackboard (the software I had to use in school for all the online stuff at school) qualifies as a social media platform that would be illegal for kids to use.
> f. Interactive gaming, virtual gaming, or an online service
This bill is already out of date. The new generation's social media are games like Roblox. And these are as addictive as the old social media.
Good luck with this whack-a-mole. A comprehensive bill would stop this at the source: kids owning smartphones. But addressing smartphones would upset too many parents and too much business, so it won't get done.
> Utilizes addictive, harmful, or deceptive design features, or any other feature that is designed to cause an account holder to have an excessive or compulsive need to use or engage with the social media platform.
Many platforms can argue that they're not engaging in this behavior. Do Mastodon and Lemmy count as addicting? They look like Twitter and Reddit on the surface, but they don't have a sorting algorithm that maximizes for engagement. So would they be included in the definition or not?
And if they don't, what's stopping big companies from claiming the same, since you can't actually see their source code for news feed sorting?
Thank you for sharing all the context; very interesting.
1d does stand out. I can guess what they were going for. I wonder if it could somehow be scoped to gamified or feed-based algorithmic sites. As a random example, Reddit definitely underwent such feed-based boosting in the last few years. I'm constantly getting suggested content that is some form of region-based outrage event, person X doing horrible thing to person Y, etc., and it's nauseating. You click one such thing and it knows, and it just hits you again and again, until eventually you just have to get out. Which sucks, because every time I pick up a new interest, it's an easy place to find more people who are into it; but you can't get that without the BS.
This makes me think of how, as a child, every site asked “are you over 13”, and I diligently clicked “yes”. Some more clever sites asked for my birth year… forcing me to do the arduous work of taking the current year and subtracting 14.
Though I suppose the real plan here is to pass the law and then have the government selectively prosecute social media companies for having users under 16.
I remember my daughter at an astonishingly young age encountering an age-login screen, turning to me and asking "How would they be able to tell?", then merrily telling the system she was 18.
A small transaction would cover 99% of cases (e.g., pay a dollar that's immediately refunded). It would stop kids from casually creating accounts. The kids who can do this are already precocious enough to bypass any other verification steps you could come up with.
Maybe if they use a profile pic that you algorithmically determine is someone underage, you could do some additional checks. The smart ones would learn not to utilize a profile pic of themselves, which would ultimately be better.
I wonder if it'd really cover anything remotely close to 99% of cases. Even if 100% of parents knew about it and watched their credit cards closely enough to notice a refunded $1 transaction, it just takes something like one friend in high school with a credit card to sign up all their little brother's friends. It might even result in more credit cards being shared around than kids being stopped from reaching the site they want.
Then there'd be even more unintended consequences. Instead of sites you don't want kids creating accounts on, you'd have sites selling five minutes of ads to create an account for them, or increasingly shady stuff. Preventing that kind of site is the same problem as the original one.
> have the government selectively prosecute social media companies for having users under 16.
The US government is already legally mandated to prosecute companies known to harbor information, collected online, concerning minors less than 13 years old without consent from their parents or legal guardians.[1]
It's why Youtube blocks comments and doesn't personalize ads on videos published for kids, to pick out a prominent example.
Laws are getting stricter. Around the world, there is increasing regulatory requirement for businesses to actively investigate user behavior (tracking!) to identify and exclude underage users who are concealing their age.
Yeah, similarly I had just gotten used to entering an elder sibling's birthday whenever asked. Adding these arbitrary age restrictions does nothing but make it increasingly obvious to kids how little our leaders and the other supporters of these restrictions actually care.
I don't think enforcement actually needs to be very tightly controlled. The barriers that are put in place, like the one you describe, are already enough to create a social milieu where parents and kids will think twice about these things and understand that there is a recognised harm potential.
There's nothing stopping you from pouring your youngster a glass of wine with dinner, but as a society we've made the dangers of alcohol and similar things so well understood that no parent wants to.
> as a society we've made the dangers of alcohol and similar things so well understood that no parent wants to.
Unfortunately, as a society, we have a much harder time grasping social media threat data. I suppose some of that is due to news orgs consistently+bizarrely+hugely overstating the actual harms in the data.
> Some more clever sites asked for my birth year… forcing me to do the arduous work of taking the current year and subtracting 14.
But why? You could have just picked a year that worked and stuck with it. Obviously, there's no way of telling which year works, but you could have brute-forced that just once.
I remember when I was 10 years old at a computer camp during the summer at a local college. They had me set up my first email account with hotmail. They all asked us to lie about our age. I think even then they had restrictions that you had to be 13 years old.
But - that was over 25 years ago. The internet was a much different place.
Fast forward to today. Ours came home with a google account in the 5th grade I think. Something I explicitly did not want. They didn't send a permission slip home like they do for everything else either.
Another teacher around the time had the kids set up on GoodReads. They were under 13 and there was a TOS at the time restricted to 13+. Mostly adults on that site.
> Fast forward to today. Ours came home with a google account in the 5th grade I think. Something I explicitly did not want. They didn't send a permission slip home like they do for everything else either.
Google Workspace accounts, especially those for education[0], have Web & App Activity, as well as Location History, automatically turned off. It's just a tool for schools to get free/cheap email, storage, and classroom tools. For your child under 13 to be able to use it compliant with COPPA[1], your school must have either used some level of blanket consent, or the school didn't bother to actually get the parental consent Google requires.
I remember joining ebay (well, auctionweb - aw.com/ebay, IIRC) and it not even being an issue that I was around 14, we mostly trusted each other, and just mailed money orders around. A different time.
I feel like I can almost guarantee that this bill has nothing to do with protecting children and has more to do with brainwashing children and restricting their access to opposing viewpoints, especially given this is Florida.
That being said, I am not strictly opposed to a bill like this. But 16 is way too old. I feel like somewhere in the 10 to 13 range would be fine, since most platforms don't allow under-13s anyway. But then, if they all block under-13s, what is the point of the bill?
Restricting social media use is tantamount to brainwashing? I don't see the connection.
As for restricting access to "opposing" (opposition to what?) viewpoints, what children can be exposed to has long been restricted.
But since there isn't a syllabus for what children will be presented on social media, I don't see the viewpoint restriction angle either.
In fact, that position is illogical to the point that it raises the question of whether or not people concerned with it have an agenda to expose kids to "viewpoints" that their parents would disapprove of. Under the radar of supervision.
Going only on my experience with social media, a valid and more plausible reason for this restriction would be that social media seeks to optimize the feed of users for engagement. In a manner that "hacks" psychology in a way that makes it difficult for even adults to disengage. Given that minors do not have fully developed brains, the ability to disengage may be even more hindered.
Television programming has long sought this goal as well, and with some success. While that use isn't restricted, there is theoretically a red line. Florida may see it in social media use.
> Restricting social media use is tantamount to brainwashing? I don't see the connection.
The idea is that social media exposes kids to viewpoints that they wouldn't otherwise be exposed to, so parents who want their kids to be a certain way would not want this, as they cannot easily control what viewpoints their kids are exposed to online.
Of course, every parent wants their kid to be a certain way, whether or not this is negative is dependent on how narrow that certain way is. The same applies to restricting what kids are exposed to: it is good to restrict exposure to some things, but too much restriction becomes bad.
The Florida legislature has recently been restricting the education system's ability to talk about gender and race, and pushing for more Christianity in schools. This makes some people feel there is an implied extension to the apparent "This is to protect kids" message: This is to protect kids (by making them conservative and Christian)
Well, true, TikTok probably has more negatives than positives, but I have a feeling the American Talibans[1] in power don't like teens organizing, and where do they organize? On social media...
[1] Yes this is an apt comparison. Suppression of opposing viewpoints, growing voter suppression and not even accepting results of democratic elections, and then the whole anti-Abortion movement.
This is coming from the state that is trying to ban books based on some backwards concern of a white kid feeling bad. (for the record I am white).
I don't care what side of politics you are on... you know what opposition I am talking about. Whether you are for or against it.
> not people concerned with it have an agenda to expose kids to "viewpoints" that their parents would disapprove of.
Yes! Because otherwise the parents are brainwashing their kids into their viewpoint not allowing them to see the real world.
This isn't a hard concept to understand here.
I mean, I am liberal and atheist. But even I have wondered, if I ever have kids, whether not exposing them to the choice of religion is brainwashing of its own. I may try to justify it with the harm that religion has caused, but I am still denying my kid another perspective that is different from my own.
Edit:
To clarify here: if this was a state that was not actively removing opposing viewpoints from their libraries and teachings, then I might buy that they actually care about their kids. But it's not, it's Florida. The beacon of being scared of their kids knowing anything about the real world and daring to have compassion for someone different than them.
This is a valid question for pretty much all legislation. It serves to let the congress critters toot their horns about doing something, for those who only pay attention to news bites, while doing no harm by doing nothing.
I feel like the Australian TV show Utopia should be mandatory viewing for anyone who wants to understand government, even though it is ostensibly a comedy.
I really hate how right you are. And that meaningless thing will be all over ads, and/or "your opponent voted against it" (since it was meaningless and shouldn't be on the books) turns into an attack on them.
It should be categorized the same way as gambling. It's addictive and useless in any form. The whole world would be better off without any social media. Including the most antisocial people.
I disagree, what social media has turned into thanks to algorithms and engagement is a problem.
But in its purest form, social media isn't a bad thing and is a good way to actually keep in contact with friends. It's also a good way to keep up on events happening around the world without relying on the news for everything.
I'm in agreement with you. I'm amazed at how many people who do what I do can look at this myopic, sanctimonious rage bait and immediately start figuring out how to implement it with an encrypted token exchange.
This is still awful no matter how much crypto you throw at it. The end result of solving this little puzzle of a problem is that everything is worse shortly afterwards. Congratulations to them, I guess.
The reddit /r/politics trolls are entering HN. If you want to criticize something, better to understand it first.
Small Federal Government, more local government control. Let local communities decide for themselves what is important. Now I know your first thought is to ignore what I'm saying to find examples that prove this wrong. But this is the conservative approach in general.
NY and CA are never going to pass a bill like this, if having your kids on TikTok is important, move somewhere where you are with like minded people.
As a Floridian and someone in IT, I'm curious how this will be implemented.
I can't remember the last time I signed up for a new social network; do they ask age? Is it an ask to Apple / Google to add stronger parental approval? Verify drivers license #?
We heard about this days ago on local news, and I've been struggling to figure out how this is gonna get done, short of an "are you 16 years or older" prompt, and how you fine someone if it's breached.
If I remember correctly, at one time Google even tried to enforce it, and there were usability problems with typos and wrong dates: there was no verification and no easy way to fix an error. I.e., if a mid-40s adult accidentally entered 1/1/2024, they'd be locked out. And if a kid entered 1/1/1977, they'd have an account (but no way to correct that date when they eventually turned 18).
(Putting aside if the law is good or bad and the constitutionality of it.)
Put criminal penalties on the directors if no reasonable attempt is made to keep kids out.
Plus a corporate death penalty if they purposely target kids.
Then how they enforce it doesn't really matter as long as there are periodic investigations. The personal risks are too great and the companies will figure it out.
The FTC already implements a "corporate death penalty" in the form of massive fines if an organization collects data on kids and uses it to target advertising (see COPPA)
The only way to determine age is to compile a database of gov-issued IDs and related data. Which is an unconstitutional barrier to speech. Which is why this will get struck down like each similar law.
The part about ID data eventually being shared with 3rd parties, agencies - and/or leaked - is a bonus.
It sounds like you are envisioning age verification that involves just two parties: the user and the site that they need to prove their age to. The user shows the site their government issued ID and the site uses the information on the ID to verify the age.
That would indeed allow the site to compile a database of government-issued IDs and give that information (willfully or via leaks) to third parties.
Those issues can be fixed by using a three party system. The parties are the user, the site that they need to prove their age to, and a site that already has the information from the user's government ID.
Briefly, the user gets a token from the social media site, presents that token and their government ID to the site that already has their ID information, and that site signs the token if the user meets the age requirement. The user then presents the signed token back to the social network, which sees that it was signed by the ID-checking site and therefore that the user meets the age requirement.
By using modern cryptographic techniques (blind signatures or zero knowledge proofs) the communication between the user and the third site can be done in a way that keeps the third site from getting any information about which site they are doing the age check for.
With some additional safeguards in the protocol and in what sites are allowed to be the ID checking sites it can even be made so that someone who gets records of both the social media site and the third site can't use timing information to match up social media accounts with verifications and so could work with sites that allow anonymous accounts.
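The blind-signature variant of the flow above can be sketched in a few lines. This is a toy with textbook RSA parameters and no padding, and all the function names are illustrative, not any real protocol: the ID site signs without seeing the token, so it can't later link the signature to the social media account.

```python
# Toy RSA blind signature: a third-party "ID site" certifies an age
# token without learning which social network it is for, and the
# social network never sees the user's ID. Toy-sized numbers and no
# padding -- a sketch of the idea, not a real deployment.

import secrets
from math import gcd

# ID site's RSA key (classic toy parameters)
p, q, e = 61, 53, 17
n = p * q                          # 3233
d = pow(e, -1, (p - 1) * (q - 1))  # private exponent, 2753

def blind(token: int) -> tuple[int, int]:
    """User blinds the token before sending it to the ID site."""
    while True:
        r = secrets.randbelow(n - 2) + 2
        if gcd(r, n) == 1:
            return (token * pow(r, e, n)) % n, r

def sign_if_old_enough(blinded: int, age: int) -> int:
    """ID site signs the blinded token only if the holder is 16+.
    It never sees the unblinded token, so it can't link it later."""
    if age < 16:
        raise PermissionError("age requirement not met")
    return pow(blinded, d, n)

def unblind(blind_sig: int, r: int) -> int:
    """User strips the blinding factor, leaving a plain signature."""
    return (blind_sig * pow(r, -1, n)) % n

def verify(token: int, sig: int) -> bool:
    """Social network checks the ID site's signature on its token."""
    return pow(sig, e, n) == token

# Flow: social network issues a token, user gets it blind-signed.
token = secrets.randbelow(n)
blinded, r = blind(token)
sig = unblind(sign_if_old_enough(blinded, age=17), r)
assert verify(token, sig)
```

The timing safeguards mentioned above (batching verifications so records can't be correlated) would sit on top of this and aren't shown.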
I'm still on the fence about government doing a parent's job here, especially for kids under 13, but I can't stand that no one pushing these bills has come up with an actually reasonable age verification method.
The problem here is that it's pretty much out of the hands of the parents. If your kids' friends have social media, your kids will absolutely need it too in order to not be left out. I've witnessed the pressure, and it's not pretty. Add to that the expectation from society that children shall have access to social media.
Regulation is pretty much the only way to send the right signals to parents, schools, media companies (e.g. Swedish public service TV has a kids app that until recently was called "Bolibompa Baby", but it's now renamed to "Bolibompa Mini"), app designers, and so on.
We're barreling towards an internet that requires an id before you can use it.
It's a bit upsetting but I don't harbor the early 2000s naiveté about the free internet where regulation doesn't exist, the data exchange happens over open formats and connecting people from across the world is viewed as an absolute positive.
Govt meddling on social media platforms, the filter bubble, platforms locking data in, teenage depression stats post Instagram, doom scrolling on tiktok have flipped me the other way.
Internet Anonymity is going to die - let's see if that makes this place any better.
And the government having unfettered knowledge of every site you visit - in particular the more salacious ones - is how we solve that? Surely that won't be used as a political cudgel to secure power at any point, nor will it ever be used to target specific demographics or accidentally get leaked.
> I'm still on the fence about government doing a parent's job here, especially for kids under 13, but I can't stand that no one pushing these bills has come up with an actually reasonable age verification method.
Anonymous credentials. A central authority with verified age information of each person grants credentials that verify the age to third parties, but the authentication tokens used with the third party can't be used by the third party nor the central authority to identify anything else about the credential holder.
Private State Tokens enable trust in a user's authenticity to be conveyed from one context to another, to help sites combat fraud and distinguish bots from real humans—without passive tracking.
An issuer website can issue tokens to the web browser of a user who shows that they're trustworthy, for example through continued account usage, by completing a transaction, or by getting an acceptable reCAPTCHA score.
A redeemer website can confirm that a user is not fake by checking if they have tokens from an issuer the redeemer trusts, and then redeeming tokens as necessary.
Private State Tokens are encrypted, so it isn't possible to identify an individual or connect trusted and untrusted instances to discover user identity.
This system clearly and trivially deanonymizes the internet. Even worse than a centralized system, it uses a simple "just trust me bro" mentality that issuers would never injure users for personal gain and would never keep logs or have data leaks, which would expose the Internet traffic of a real person.
> I'm still on the fence about government doing a parent's job here
The issue is, as a parent who is not very technical, how do they _safely_ audit their child's social media use?
I am reasonably confident that I could control my kid's social media habit, but only up to a point. There isn't anything really stopping them from getting their own cheap phone or signing in on another person's machine.
The problem is, safely stopping kids from getting access requires strong authentication with the ISP, i.e., to get an IP you need 2FA to sign in. But that's also how censorship/de-anonymisation happens.
While people are on the fence about it, our children are having their youth, innocence and brains destroyed by tiktok et al. Those platforms are cancer to adults even, let alone impressionable kids... yet here we are still debating it and faffing around about "1st amendment yaddi yadda".
>children are having their youth, innocence and brains destroyed by tiktok
For one, ease up on the hyperbole if you want to be taken seriously. I'll give you the benefit of the doubt, because the news is nothing but hyperbole these days, so it's easy to pick up the habit. Second, most kids aren't having "their youth, innocence and brains destroyed." The news takes the edge cases, amplifies them, and presents them as the norm to peddle fear, because fear sells. Nothing is ever as bad as the news makes it out to be, but they gotta make a dollar; you see how bad the news business is since the internet?
FWIW, my kid uses social media and just connects with her friends. Nothing overly malicious goes on, they just goof off. I've checked.
You really wanna protect the kids from anxiety and whatnot, block the news and all the talking heads trying to manipulate the next generation to their political opinions.
> (and probably unconstitutional under the First Amendment, to boot.)
If it would be unconstitutional to require identity-revealing or age-revealing ID checks for social media, that's all the more reason to investigate modern technical solutions we have to those problems.
A better solution is more robust device management, with control given to the device owner (read: the parent). The missing legislative piece is mandating that social media companies need to respond differently when the user agent tells them what to send.
I should be able to take my daughter's phone (which I own) and set an option somewhere that indicates "this user is a minor"; with every HTTP request it makes, it sets e.g. an OMIT_ADULT_CONTENT header. Site owners simply respond differently when they see it.
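A minimal sketch of what the site-owner side of that could look like. Both the header name and the tag taxonomy here are made up for illustration; no such header is standardized:

```python
# Sketch of server-side filtering keyed off a hypothetical
# parent-set "minor" request header. Header name and the tag
# taxonomy are assumptions, not any real standard.

ADULT_ONLY_TAGS = {"adult", "gambling"}  # hypothetical site labels

def select_content(posts: list[dict], headers: dict[str, str]) -> list[dict]:
    """Drop age-restricted posts when the user agent flags a minor."""
    if headers.get("Omit-Adult-Content", "").lower() in ("1", "true"):
        return [p for p in posts if not (set(p["tags"]) & ADULT_ONLY_TAGS)]
    return posts

feed = [
    {"id": 1, "tags": ["news"]},
    {"id": 2, "tags": ["gambling"]},
]
minor_view = select_content(feed, {"Omit-Adult-Content": "1"})
assert [p["id"] for p in minor_view] == [1]
```

The legislative piece would be mandating that sites honor the flag; the technical piece is just a conditional on a request header.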
The First Amendment guarantees free expression, not anonymous expression.
For example, there are federal requirements for identification for political messages. [1] These requirements do not violate the First Amendment.
[1] https://www.fec.gov/help-candidates-and-committees/advertisi...
Not necessarily; consider the counterexample of devices with parental controls which, when locked, always send a "this person is a minor" header. (Or "this person hits the following jurisdictional age categories", or some blend of enough detail to be internationally useful and little enough to be reasonably private and not insane to obey.)
That would mostly put control into the hands of parents, at the expense of sites needing some kind of code library that can spit out a "block or not" result.
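A sketch of the "jurisdictional age-categories" variant: the locked device sends a coarse bucket rather than a birthdate, so sites learn the minimum needed. The bucket names, thresholds, and header name are invented for illustration.

```python
# Coarse age buckets, checked from oldest to youngest.
AGE_BUCKETS = [(18, "adult"), (16, "16-17"), (13, "13-15"), (0, "under-13")]

def age_category_header(age: int) -> dict:
    """Map an exact age to the coarse bucket the device would send."""
    for threshold, label in AGE_BUCKETS:
        if age >= threshold:
            return {"Age-Category": label}
    return {}

print(age_category_header(14))  # -> {'Age-Category': '13-15'}
```

The site's "block or not" library then only has to compare a bucket label against its own jurisdiction-specific rules.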
Depends on the argument being made, on the ideology of the audience, on the current norms, etc.
I had an exchange here on HN some time back (the topic was schools removing certain books from their libraries), and very many people in support of those books, which dealt with gender identity and sexual orientation, also supported outright porn (the example I used was Pornhub) for kids of all ages, as long as those books with pictures (not photos) of male-male sexual intercourse could stay in the library.
Right now, if you make the argument "There are some things kids below $AGE shouldn't be exposed to", you'll still get a (vocal) minority disagreeing because:
1. They feel that what $AGE kids get exposed to should be out of the parent's hands ("Should we allow parents to hide evolution from their children?", "Should we allow parents to hide transgenderism from their children?")
2. They know that, especially with young children, they will lose their chance to imprint a norm on the child if they are prevented from distributing specific material to young children.
In the case of sex and sexual education, there is currently a huge push for specific views to be normalised, and unfortunately, if that means graphic sexual depictions are shown to children, so be it.
The majority is rarely so vocal about things they consider "common sense", like no access to pornhub for 10 year olds.
Technically you can make that work without issues (you only need to prove your age, not your identity, something which can reasonably be achieved without leaking your identity).
There are just two practical issues:
- companies, government, and state agencies (at least US police & spy agencies) will try to undermine any effort to create a reasonably anonymous system
- it only works in practice if a "reasonable degree" of proof is "good enough", i.e. it must be fine that a hacker can create an (illegal?) tool with which a child could pretend to be 16+, e.g. by proxying the age check to a hacked device of an adult. Heck, it should be fine if a check can be tricked by using a parent's passport or phone. It's a 16+ check; there really isn't much of a reason why a system that is only "good enough" isn't okay. But lawmakers will try nonsense.
Interestingly, this is more of a problem for the US than for some other countries, AFAIK, because 1) you can't expect everyone 18+ to have an ID, or everyone 16+ to be able to easily get one (a number of countries require owning (not carrying) an ID without it being a privacy issue), and 2) terrible consumer protection makes it practically nearly impossible to create a privacy-preserving system even if government and state agencies don't meddle.
Similarly, if it weren't for the ID problem in the US, this probably wouldn't touch the First Amendment, which in the end protects less than a lot of people believe it does.
For about half a year now I've noticed the German Twitter scene getting sunk in hate posts: people disrespecting each other, ranting about politicians or ways of thinking, and being really hateful. It's horrible. I've adblocked the "Trending" section away, because it's the door to this horrible place where people don't have anything good to share anymore, only disrespect and hate.
This made me think about what we're really in need for, at least here in Germany, is a Twitter alternative, where people register by using their eID and can only post by using their real name. Have something mean to say? Say it, but attach your name to it.
This anonymity in social media is really harming German society, at least as soon as politics are involved.
I don't know exactly how it is in the US but apparently it isn't as bad as here, at least judging from the trending topics in the US and skimming through the posts.
Probably not. Minors have all sorts of restrictions on rights, including first amendment restrictions such as in schools.
"(And if it is, let's start talking about gun ownership first...)"
Are you advocating for removing ID checks for this? If not, it seems that this point actually works against your argument.
Not saying that I agree with a ban, but your arguments against it don't really stand.
COPPA has entered the building. If you're under 13 and a platform finds out, they'll usually ban you until you prove that you're not under 13 (via ID) or can provide signed forms from your parent / legal guardian.
I've seen dozens of people if not more over the years banned from various platforms over this. We're talking Reddit, Facebook, Discord and so on.
I get what you're saying, but it kind of is a thing already, all one has to do is raise the age limit from 13 to say... 16 and voila.
Aren't the really problematic social networks the ones where you've lost your privacy and anonymity long ago and are being tracked and mined like crazy?
No, it isn't. Check out Yivi [1]. Its fundamental premise is to not reveal your attributes. It's based on academic work into (a.o.) attribute-based encryption. The professor then took this a step further and spun off a (no profit) foundation to expand and govern this idea.
[1] https://privacybydesign.foundation/irma-explanation/
Surely not.
Imagine: government sells proof of age cards. They contain your name, and a unique identifier.
Each time you sign up to a service, you enter your name and ID. The service can verify with the government that your age is what it needs to be for the service. There are laws that state that you can't store that ID or use it for any other purpose.
Doesn't seem impossible.
The bulk of the internet has not been anonymous for a while. Facebook already requires an ID, Google tracks you through its services and the OS, Reddit is tightening up to control bots, Amazon requires a phone number.
Think about it. What portion of your activities day to day on the Internet are anonymous? Now try to do them anonymously. It isn't practical/possible anymore and the internet of yesteryear is gone.
Anonymity services can still exist without fostering an environment to addict children and young adults to social media or a device... and without your precious "rights" being taken away.
Social media is on the internet. It is not the internet.
We ARE talking about social media.
Like, the least private software on the planet.
Ban portable electronics for children. Demand that law enforcement intervene any time one is spotted in the wild. If you still insist that children be allowed phones, dumb flip phones for them.
It could be done if there was the will to do it, it just won't be done.
Hopefully this will reduce the number of people using social media.
I do. Anything to avoid "the talk", tbh. I grew up Catholic. I never had "the talk". I don't even know where to start. Blow jobs?
Not at all. Just the social media sites, which are objectively bad for kids. As an adult, you do what you want on the internet.
But with the topic at hand being social media, I think a lot of the same issues and solutions apply. It's harmful to allow kids to interact with anyone and everyone at any given time. Boundaries are healthy.
Aaaaand, finally there's much less destruction of human livelihood by guns than both of the aforementioned topics if we measure "destruction" by "living a significantly impoverished life from the standard of emotional and mental wellbeing". I doubt we could even get hard numbers on the number of marriages destroyed by pornography, which yield broken households, which yield countless emotional and mental problems.
So, no, guns aren't something we should discuss first. Also, guns have utility including but not limited to defending yourself and your family. Porn has absolutely zero utility, and social media is pretty damn close, but not zero utility.
To me, the only way out of this is by swapping one nightmare for another: giving the gov't the decision of allowing/denying access. Human nature is not a simple thing to regulate, since the desire for that regulation is part of human nature.
I only changed 2 words for 1.
When I was first getting online, the expectation was that you at least had to be bright enough to lie about your age. Now I have to occasionally prune my timeline after it fills up with "literally a minor." Even an annoyance tax might have some positive effect. Scare the pastel death-threats back into their hole...
People said the same thing about age restrictions for smoking, alcohol, movies, and on and on and on.
It's not some unsolvable new problem just because it's the ad-tech industry.
My kid, and I have told my wife this, is ready to view whatever he wants on the internet when he can circumvent me.
Do you think simply labelling it as bad is sufficient? Parents have no idea.
I also believe that this is a Big Deal™ that we need to take seriously as a nation. I have yet to see any HN commentator offer a robust pro-social-media argument that carries any weight in my opinion. The most common "they'll be isolated from their peers" argument seems pretty superficial and can easily be worked around with even a tiny amount of effort on the parents' part.
As an added bonus, this latest legislation removes the issue of "everyone is doing it". I mean, sure, a lot still will be—but then it's illegal and you get to have an entirely separate conversation with your kid. :)
This is so incorrect it makes the flat earth theory look good.
Every generation has this freakout about something or another. I expect modern kids will end up better at handling social media than their parents are.
However, especially in Florida, social media may be the only way for some teens to escape political and religious lunacy and I fear for them. I think it's not wise to applaud them taking away means of communication to the "outside", in the context of legislation trends and events there.
Why don’t you take it away from yourself? Just delete your account.
- I'm not the biggest fan of a hard cutoff
- addictive dark patterns which cause compulsive use should be generally banned or age-restricted no matter where they are used; honestly, just ban most dark patterns. They are intentional, always-malicious consumer deception, not that far away from outright committing fraud. (And age-restrict some less dark but still problematic patterns.)
- I think this will likely make all MMORPGs (and Roblox, lol) and similar games 16+; I'm quite split about that. I have seen people between 14 and 18 get addicted to them and mess up their education path. But I have also seen cases of people who might not be alive today if they hadn't found a refuge and companions in some MMORPG.
- I guess if it can make platforms like YT, Facebook, Instagram, Snapchat etc. implement a "teen" mode with fewer dark patterns and less tracking, it would be good.
- The balance between proving your age to make things available and keeping privacy is VERY tricky (especially in the US), and companies, the government, and spy agencies will try to abuse the new requirements for age verification to spy more reliably on everyone 16+.
- It's interesting how this affects messengers. Many have fewer dark patterns; some do not track users, or can easily decide not to track children. They aren't social networks per se, but most have some social-network-like features. Even ones which do not try to create compulsive use might still end up with it as long as there is "live" chatting.
How would you write a law that accomplishes your goal?
We need to find a better way; for example (just a quick idea), social websites run and protected on a school-by-school basis. This way, it can be regulated and controlled.
In other words, government should regulate what they already have control over, not impose new control measures over things they don't.
I'm not American, I think it's perfectly reasonable to ban kids from the internet just by applying the logic used for film classification. Even just the thumbnails for YouTube (some) content can go beyond what I'd expect for a "suitable for all audiences" advert.
This isn't an agreement with the specific logic used for film classification: I also find it really weird how 20th-century media classification treated murder as a perfectly acceptable subject in kids' shows, while the mere existence of functioning nipples was treated as a sign of the end times (non-functional nipples, i.e. those on men, are apparently fine).
Also, I find it hilariously ironic that Florida also passed the "Stop Social Media Censorship Act". No self-awareness at all.
https://www.theguardian.com/film/2023/jul/31/margaret-pomera...
- Make this an ISP level thing? Somehow? They already know the makeup of a household. If they know a house has kids, something something ToS "You as the parents are liable..." Then maybe repeat those scary RIAA letters but "for good" when someone in that household hits a known adult IP?
- Maybe browsers send an "I'm an adult" flag similar to "Do Not Track," and to turn it on, the user has to enter a not-to-be-shared-with-kids PIN? If the browser and OS can coordinate, OSes would be able to tell the browser if the user is an adult, and skip the PIN entering.
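The PIN-gated flag idea can be sketched concretely. A minimal sketch only; the class name, header name, and PIN-storage scheme are all hypothetical (a real browser/OS would keep this in its keychain and coordinate with the OS user account):

```python
import hashlib, hmac, os

class BrowserProfile:
    """Browser profile with an 'adult' flag gated behind a parent's PIN,
    analogous to the Do Not Track toggle."""

    def __init__(self, parent_pin: str):
        self.salt = os.urandom(16)
        self.pin_hash = hashlib.pbkdf2_hmac(
            "sha256", parent_pin.encode(), self.salt, 100_000)
        self.adult = False

    def unlock_adult_flag(self, pin_attempt: str) -> bool:
        """Enable the flag only if the PIN matches (constant-time compare)."""
        attempt = hashlib.pbkdf2_hmac(
            "sha256", pin_attempt.encode(), self.salt, 100_000)
        if hmac.compare_digest(attempt, self.pin_hash):
            self.adult = True
        return self.adult

    def request_headers(self) -> dict:
        # The flag carries a single bit: no age, no identity.
        return {"Adult-Content-OK": "1" if self.adult else "0"}
```

Like Do Not Track, this only works if sites are required to honor the header; unlike Do Not Track, the incentive here runs against the kid, not the site.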
- Force kids to use a list of Congress-approved devices that gate access to the wider Internet? YouTube Kids but for everything. Yes, hacker kids will be able to get by, but this being Hacker News, they'd deserve the fruits of that particular labor.
Just spitballing. Anything obvious I'm missing?
PS- I am neither for nor against the Florida-type legislation as of this comment.
There is zero positive to giving kids access to social media sites designed to be addictive when they don't have the mental faculties to distinguish real from not real. Many adults seem to suffer from this as well. Plus, kids don't understand that the internet is forever; there's really no need for an adult looking for a job or running for office to be crippled by a questionable post they made as an edgy teen.
I'm against a lot of government regulation but in this case I am even more against feeding developing kids to an algorithm
Just remove the temptation and pressure all together.
I would've backed your argument up until a few years ago, but the science is coming down pretty hard now showing that social media use is absolutely detrimental to still-developing minds.
The goal of this is to bring attention to the fact that it's a problem and should be seen as undesirable, like pornography or Joe Camel. The cancellation of Joe didn't prevent kids from getting cigarettes but it did draw attention to the situation and there has been a marked decline in youth smoking since the late 90s when the mascot was removed. It's correlative, for sure, but the outcomes are undeniable. The same happened with the DARE program and class 1 drugs (except for marijuana iirc).
Even just making illegal the promotion of social media toward children would have a huge effect.
It’s dumb policy, because Florida GOP. The smart move is to target advertising for kids. If you attack the ability to advertise to the underage audience using mom’s iPad, social media will self police.
e.g. You could set up a restricted account on a device with a user birthdate specified. Any requests for websites that return an AGE_REQUIRED header that don’t validate would be rejected.
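The device-side check described there is a few lines. A sketch under the commenter's assumptions: AGE_REQUIRED is their hypothetical response header, and the function name is invented.

```python
from datetime import date

def request_allowed(birthdate: date, response_headers: dict,
                    today: date) -> bool:
    """Reject responses whose AGE_REQUIRED exceeds the restricted
    account's age, computed from the stored birthdate."""
    required = int(response_headers.get("AGE_REQUIRED", "0"))
    # Standard age calculation: subtract a year if the birthday
    # hasn't occurred yet this year.
    age = today.year - birthdate.year - (
        (today.month, today.day) < (birthdate.month, birthdate.day))
    return age >= required
```

The birthdate never leaves the device; the site just declares a threshold and the OS enforces it locally.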
It happened to me and millions of others.
Not even close. I am with you
Is something like the bulletin boards we used to have around the late 90s/early 2000s social media? What about chat rooms? Local social web sites for the school or your city? I think a lot of these things can even be beneficial, if I think about my own experiences as a somewhat introverted teenager.
And what about things like Netflix, YouTube, podcasts? They can be just as harmful as TikTok and Instagram. Especially on YouTube you have a lot of similar content.
I've found accounts that claim to be official accounts of children's shows - maybe they even are - and which are full of nonsensical videos, just randomly cut together action scenes of multiple episodes. It's like crack for children. Of course YouTube doesn't do anything, they want you to pay for YouTube kids. And the rights holders want you to buy the content, so they leave the poor quality stuff up.
The thing is, exploitative content is always going to be created as long as there are incentives to do so. You can ban stuff, but it's whack-a-mole, and you are going to kill a lot of interesting stuff as collateral damage. The alternative is much harder, change the incentives so we can keep our cool technology and people are not awarded for making harmful stuff with it. But that would require economic and political changes, and people don't like to think about it.
It's a bill written by the Florida House of Representatives, so there's a definition there. Mind you, it's the Florida House, which has put out some extremely bad laws in its current session -- from "Parental Rights in Education" to the Disney speech retaliation. But given that this is a less ostensibly partisan issue, there are reasons for hope.
The definition seems narrowly tailored. I think that part (d)1d is a questionable choice, since most social media platforms will probably argue that they are not really "designed" to be addictive (for various definitions of "designed" and "addictive"). It appears that specific exemptions were made for YouTube, Craigslist and LinkedIn (without mentioning those companies by name), and algorithmic content selection is part of the definition. This is one of the better versions of this law I could imagine being written by a state legislature, though it isn't without its faults. It's nice to see my home state in the news for something good for once.
I agree that YouTube is a particularly difficult case. But part of the problem comes from using it as a digital pacifier, rather than peer pressure. There's no particular reason why the technology market should produce a free stream of child-appropriate videos. Ad-supported media has its ups and downs, but when the targets of those ads are young children, it's much harder to defend. And parents have more control over the behavior of their 4-year-olds than their 14-year-olds.
Here's the definition:
>(d) "Social media platform:"
>1. Means an online forum, website, or application offered by an entity that does all of the following:
>a. Allows the social media platform to track the activity of the account holder.
>b. Allows an account holder to upload content or view the content or activity of other account holders.
>c. Allows an account holder to interact with or track other account holders.
>d. Utilizes addictive, harmful, or deceptive design features, or any other feature that is designed to cause an account holder to have an excessive or compulsive need to use or engage with the social media platform.
>e. Allows the utilization of information derived from the social media platform's tracking of the activity of an account holder to control or target at least part of the content offered to the account holder.
>2. Does not include an online service, website, or application where the predominant or exclusive function is:
>a. Electronic mail.
>b. Direct messaging consisting of text, photos, or videos that are sent between devices by electronic means where messages are shared between the sender and the recipient only, visible to the sender and the recipient, and are not posted publicly.
>c. A streaming service that provides only licensed media in a continuous flow from the service, website, or application to the end user and does not obtain a license to the media from a user or account holder by agreement to its terms of service.
>d. News, sports, entertainment, or other content that is preselected by the provider and not user generated, and any chat, comment, or interactive functionality that is provided incidental to, directly related to, or dependent upon provision of the content.
>e. Online shopping or e-commerce, if the interaction with other users or account holders is generally limited to the ability to upload a post and comment on reviews or display lists or collections of goods for sale or wish lists, or other functions that are focused on online shopping or e-commerce rather than interaction between users or account holders.
> f. Interactive gaming, virtual gaming, or an online service, that allows the creation and uploading of content for the purpose of interactive gaming, edutainment, or associated entertainment, and the communication related to that content.
> g. Photo editing that has an associated photo hosting service, if the interaction with other users or account holders is generally limited to liking or commenting.
> h. A professional creative network for showcasing and discovering artistic content, if the content is required to be non-pornographic.
> i. Single-purpose community groups for public safety if the interaction with other users or account holders is generally limited to that single purpose and the community group has guidelines or policies against illegal content.
> j. To provide career development opportunities, including professional networking, job skills, learning certifications, and job posting and application services.
> k. Business to business software.
> l. A teleconferencing or videoconferencing service that allows reception and transmission of audio and video signals for real time communication.
> m. Shared document collaboration.
> n. Cloud computing services, which may include cloud storage.
> o. To provide access to or interacting with data visualization platforms, libraries, or hubs.
> p. To permit comments on a digital news website, if the news content is posted only by the provider of the digital news website.
> q. To provide or obtain technical support for a platform, product, or service.
> r. Academic, scholarly, or genealogical research where the majority of the content that is posted or created is posted or created by the provider of the online service, website, or application and the ability to chat, comment, or interact with other users is directly related to the provider's content.
> s. A classified ad service that only permits the sale of goods and prohibits the solicitation of personal services or that is used by and under the direction of an educational entity, including:
> (I) A learning management system;
> (II) A student engagement program; and
> (III) A subject or skill-specific program.
The fact that there are well over a dozen exceptions carved out strongly suggests that the definition is anything but narrowly tailored, and that the authors of the bill preferred to add exceptions for everyone who objected rather than rethink their broad definitions.
1a-c will be trivially satisfied by anything that "has user accounts" and "allows users to comment". 1e is clearly meant to cover "algorithmic" recommendations, but it's worded so broadly that a feature like "threads you've commented on" would satisfy this prong. 1d is problematic; it can be interpreted so narrowly that nothing applies, or so broadly that everything applies. IANAL, but I think you'd have a decent shot at attacking it as unconstitutionally vague on this prong alone.
Discounting 1d, this means that virtually every website in existence qualifies as a social media site, at least before you start applying exceptions. Not just Facebook or Twitter, but things like Twitch, Discord, Paradox web forums, Usenet, an MMO game, even news sites and Wikipedia are going to qualify as social media platforms.
Actually, given that it's not covered by any of the exceptions, Wikipedia is a social media platform according to Florida, and I guess would therefore be illegal for kids to use. Even more hilariously, Blackboard (the software I had to use in school for all the online stuff at school) qualifies as a social media platform that would be illegal for kids to use.
> f. Interactive gaming, virtual gaming, or an online service
This bill is already out of date. The new generation's social media are games like Roblox. And these are as addictive as the old social media.
Good luck with this whack-a-mole. A comprehensive bill would stop this at the source: kids owning smartphones. But addressing smartphones would upset too many parents and too much business, so it won't get done.
> Utilizes addictive, harmful, or deceptive design features, or any other feature that is designed to cause an account holder to have an excessive or compulsive need to use or engage with the social media platform.
Many platforms can argue that they're not engaging in this behavior. Do Mastodon and Lemmy count as addictive? They look like Twitter and Reddit on the surface, but they don't have a sorting algorithm that maximizes engagement. So would they be included in the definition or not?
And if they don't, what's stopping big companies from claiming the same, since you can't actually see their source code for news feed sorting?
1d does stand out. I can guess what they were going for. I wonder if it could be scoped to gamified or feed-based algorithmic sites. As a random example, Reddit definitely underwent such feed-based boosting in the last few years. I'm constantly getting suggested content that is some form of region-based outrage event, person X doing horrible thing to person Y, etc., and it's nauseating. You click one such thing and it knows, and it just hits you again and again, until eventually you just have to get out. Which sucks, because every time I pick up a new interest, it's an easy place to find more people who are into it; but you can't get that without the BS.
Though I suppose the real plan here is to pass the law and then have the government selectively prosecute social media companies for having users under 16.
https://fingswotidun.com/images/MelissaQuake3.jpg
Maybe if they use a profile pic that you algorithmically determine is someone underage, you could do some additional checks. The smart ones would learn not to utilize a profile pic of themselves, which would ultimately be better.
Then there'd be even more unintended consequences. Instead of sites you don't want kids creating accounts on, you'd have sites selling 5 minutes of ads to create an account for them, or increasingly shady stuff. Preventing this kind of site is the same problem as the original issue.
It gives parents tools and guidelines that can help them direct their children.
Whether this is a good approach or not, is a whole other argument.
"I'm not being mean, it's the law."
The US government is already legally mandated to prosecute companies known to harbor information, collected online, concerning minors less than 13 years old without consent from their parents or legal guardians.[1]
It's why Youtube blocks comments and doesn't personalize ads on videos published for kids, to pick out a prominent example.
[1]: https://en.wikipedia.org/wiki/Children's_Online_Privacy_Prot...
Obligatory IANAL.
There's nothing stopping you pouring your youngsters a glass of wine with dinner, but as a society we've made the dangers of alcohol and similar things so well understood that no parent wants to.
Unfortunately, as a society, we have a much harder time grasping social media threat data. I suppose some of that is due to news orgs consistently, bizarrely, and hugely overstating the actual harms in the data.
https://www.techdirt.com/2024/01/08/leading-save-the-kids-ad...
Um, it's simple maths. Guessing you're meaning something else though?
But - that was over 25 years ago. The internet was a much different place.
Another teacher around that time had the kids set up on GoodReads. They were under 13, and the TOS at the time restricted the site to 13+. Mostly adults on that site.
Not happy with the school to say the least.
Google Workspace accounts, especially those for education[0], have Web & App Activity, as well as Location History, automatically turned off. It's just a tool for schools to get free/cheap email, storage, and classroom tools. For your child under 13 to be able to use it compliant with COPPA[1], your school must have either used some level of blanket consent, or the school didn't bother to actually get the parental consent Google requires.
0: https://edu.google.com/intl/ALL_us/workspace-for-education/e...
1: https://cloud.google.com/security/compliance/coppa
I remember joining ebay (well, auctionweb - aw.com/ebay, IIRC) and it not even being an issue that I was around 14, we mostly trusted each other, and just mailed money orders around. A different time.
That being said, I am not strictly opposed to a bill like this. But 16 is way too old; somewhere in the 10-13 range would likely be fine. Then again, since most platforms don't allow under-13s anyway, if they all block under 13, what is the point of the bill?
As for restricting access to "opposing" (opposition to what?) viewpoints, what children can be exposed to has long been restricted.
But since there isn't a syllabus for what children will be presented on social media, I don't see the viewpoint restriction angle either.
In fact, that position is illogical to the point that it raises the question of whether or not people concerned with it have an agenda to expose kids to "viewpoints" that their parents would disapprove of. Under the radar of supervision.
Going only on my experience with social media, a valid and more plausible reason for this restriction would be that social media seeks to optimize the feed of users for engagement. In a manner that "hacks" psychology in a way that makes it difficult for even adults to disengage. Given that minors do not have fully developed brains, the ability to disengage may be even more hindered.
Television programming has long sought this goal as well, and with some success. While that use isn't restricted, there is theoretically a red line. Florida may see it in social media use.
The idea is that social media exposes kids to viewpoints that they wouldn't otherwise be exposed to, so parents who want their kids to be a certain way would not want this, as they cannot easily control what viewpoints their kids are exposed to online.
Of course, every parent wants their kid to be a certain way, whether or not this is negative is dependent on how narrow that certain way is. The same applies to restricting what kids are exposed to: it is good to restrict exposure to some things, but too much restriction becomes bad.
The Florida legislature has recently been restricting the education system's ability to talk about gender and race, and pushing for more Christianity in schools. This makes some people feel there is an implied extension to the apparent "This is to protect kids" message: This is to protect kids (by making them conservative and Christian)
Well, true, TikTok probably has more negatives than positives, but I have a feeling the American Talibans[1] in power don't like teens organizing, and where do they organize? On social media...
[1] Yes this is an apt comparison. Suppression of opposing viewpoints, growing voter suppression and not even accepting results of democratic elections, and then the whole anti-Abortion movement.
This is coming from the state that is trying to ban books based on some backwards concern of a white kid feeling bad. (for the record I am white).
I don't care what side of politics you are on... you know what opposition I am talking about. Whether you are for or against it.
> not people concerned with it have an agenda to expose kids to "viewpoints" that their parents would disapprove of.
Yes! Because otherwise the parents are brainwashing their kids into their own viewpoint, not allowing them to see the real world.
This isn't a hard concept to understand here.
I mean, I am liberal and atheist. But even I have wondered, if I ever have kids, whether not exposing them to the choice of religion is brainwashing of its own. I may try to justify it with the harm that religion has caused, but I am still denying my kid a perspective that is different from my own.
Edit:
To clarify here: if this was a state that was not actively removing opposing viewpoints from their libraries and teachings, then I might buy that they actually care about their kids. But it's not, it's Florida: the beacon of being scared of their kids knowing anything about the real world and daring to have compassion for someone different than them.
This is a valid question for pretty much all legislation. It serves by allowing the congress critters to toot their horns as doing something for those that only pay attention to news bites, while doing no harm by doing nothing.
But in its purest form, social media isn't a bad thing and is a good way to actually keep in contact with friends. It's also a good way to keep up on events happening around the world without relying on the news for everything.
Perhaps more relevant to HN, it seems to me that any solution here would be a dramatic loss of agency, privacy, and anonymity.
This is still awful no matter how much crypto you throw at it. The end result of solving this little puzzle of a problem is that everything is worse shortly afterwards. Congratulations to them, I guess.
Small Federal Government, more local government control. Let local communities decide for themselves what is important. Now I know your first thought is to ignore what I'm saying to find examples that prove this wrong. But this is the conservative approach in general.
NY and CA are never going to pass a bill like this. If having your kids on TikTok is important to you, move somewhere where you are with like-minded people.
I can't remember the last time I signed up for a new social network; do they ask age? Is it an ask to Apple / Google to add stronger parental approval? Verify drivers license #?
We heard about this days ago on local news, and I've been struggling to figure out how, short of asking "are you 16 years or older?", this is gonna get done, and how you fine someone if it's breached.
If I remember correctly, at one time Google even tried to enforce it, and there were usability problems with typos and wrong dates and things - there was no verification and no easy way to fix an error. I.e., if a mid-40s adult accidentally entered 1/1/2024, they'd be locked out. And if a kid entered 1/1/1977, they'd have an account (but no way to correct that date when they eventually turned 18).
Impose criminal penalties on the directors if no reasonable attempt is made to keep kids out.
Plus corporate death penalty if they purposely target kids.
Then how they enforce it doesn't really matter as long as there are periodic investigations. The personal risks are too great and the companies will figure it out.
The only way to determine age is to compile a database of gov-issued IDs and related data. Which is an unconstitutional barrier to speech. Which is why this will get struck-down like each similar law.
The part about ID data eventually being shared with 3rd parties, agencies - and/or leaked - is a bonus.
That would indeed allow the site to compile a database of government-issued IDs and give that information (willfully or via leaks) to third parties.
Those issues can be fixed by using a three party system. The parties are the user, the site that they need to prove their age to, and a site that already has the information from the user's government ID.
Briefly: the user gets a token from the social media site, presents that token and their government ID to the site that already has their ID information, and that site signs the token if the user meets the age requirement. The user presents the signed token back to the social network, which sees that it was signed by the third site, and therefore that the third site vouches the user meets the age requirement.
By using modern cryptographic techniques (blind signatures or zero knowledge proofs) the communication between the user and the third site can be done in a way that keeps the third site from getting any information about which site they are doing the age check for.
With some additional safeguards in the protocol, and restrictions on which sites are allowed to act as ID checkers, it can even be made so that someone who obtains records from both the social media site and the third site can't use timing information to match social media accounts with verifications, so it could work with sites that allow anonymous accounts.
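As a sketch of what that blind-signature step can look like, here's a toy round trip in Python. The key sizes, variable names, and lack of padding are purely illustrative (a real deployment would use something like RFC 9474 blinded RSA or a zero-knowledge proof, with 2048-bit+ keys), but the key property is visible: the ID site signs a blinded value and never sees the token itself.

```python
import hashlib
import math
import secrets

# Toy RSA key pair for the ID-checking site. Illustrative only: real
# systems use >= 2048-bit keys with proper padding, not raw textbook RSA.
p, q = 10007, 10009
n, e = p * q, 65537
d = pow(e, -1, (p - 1) * (q - 1))  # ID site's private signing exponent

def h(msg: bytes) -> int:
    """Hash a token into the RSA group."""
    return int.from_bytes(hashlib.sha256(msg).digest(), "big") % n

# 1. The social media site hands the user a fresh random token.
token = secrets.token_bytes(16)

# 2. The user blinds the token's hash with a random factor r, so the
#    ID site can't recognize the token if it later sees it redeemed.
m = h(token)
r = secrets.randbelow(n - 2) + 2
while math.gcd(r, n) != 1:
    r = secrets.randbelow(n - 2) + 2
blinded = (m * pow(r, e, n)) % n

# 3. The ID site checks the user's government ID says age >= 16,
#    then signs the blinded value without ever learning m.
blind_sig = pow(blinded, d, n)

# 4. The user strips the blinding factor off the signature...
sig = (blind_sig * pow(r, -1, n)) % n

# 5. ...and the social media site verifies it against the ID site's
#    public key, learning only "an approved checker vouched for this token".
assert pow(sig, e, n) == m
```

Because the ID site only ever saw `blinded`, it cannot link the verification back to any particular social media account, even if the two sites later compare records.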
The problem here is that it's pretty much out of the hands of the parents. If your kids' friends have social media, your kids will absolutely need it too in order to not be left out. I've witnessed the pressure, and it's not pretty. Add to that the expectation from society that children shall have access to social media.
Regulation is pretty much the only way to send the right signals to parents, schools, media companies (e.g. Swedish public service TV has a kids app that until recently was called "Bolibompa Baby", but it's now renamed to "Bolibompa Mini"), app designers, and so on.
It's a bit upsetting, but I don't harbor the early-2000s naiveté about the free internet, where regulation doesn't exist, data exchange happens over open formats, and connecting people from across the world is viewed as an absolute positive.
Govt meddling on social media platforms, the filter bubble, platforms locking data in, teenage depression stats post-Instagram, and doom scrolling on TikTok have flipped me the other way.
Internet Anonymity is going to die - let's see if that makes this place any better.
And the government having unfettered knowledge of every site you visit - in particular the more salacious ones - is how we solve that? Surely that won't be used as a political cudgel to secure power at any point, nor will it ever be used to target specific demographics or accidentally get leaked.
How do you anonymously verify someone's age?
> Private State Tokens enable trust in a user's authenticity to be conveyed from one context to another, to help sites combat fraud and distinguish bots from real humans—without passive tracking.
>
> An issuer website can issue tokens to the web browser of a user who shows that they're trustworthy, for example through continued account usage, by completing a transaction, or by getting an acceptable reCAPTCHA score. A redeemer website can confirm that a user is not fake by checking if they have tokens from an issuer the redeemer trusts, and then redeeming tokens as necessary. Private State Tokens are encrypted, so it isn't possible to identify an individual or connect trusted and untrusted instances to discover user identity.
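To make the issue/redeem shape concrete, here's a toy Python simulation. This is not the browser API (Private State Tokens are a Chrome platform feature), and a real issuer uses blind signatures so that even it can't link issuance to redemption; this HMAC sketch only shows the basic shape: the token vouches for trustworthiness while carrying no user identity.

```python
import hashlib
import hmac
import secrets

ISSUER_KEY = secrets.token_bytes(32)  # held only by the issuer site

def issue_token() -> tuple[bytes, bytes]:
    """Issuer: give a user who passed a trust check (CAPTCHA, account
    history, a completed transaction) a random token plus a MAC over it."""
    token = secrets.token_bytes(16)  # random bytes; no user identity inside
    tag = hmac.new(ISSUER_KEY, token, hashlib.sha256).digest()
    return token, tag

def redeem_token(token: bytes, tag: bytes) -> bool:
    """Redeemer: ask the issuer to confirm the token is genuine."""
    expected = hmac.new(ISSUER_KEY, token, hashlib.sha256).digest()
    return hmac.compare_digest(tag, expected)

token, tag = issue_token()
assert redeem_token(token, tag)                        # genuine token accepted
assert not redeem_token(secrets.token_bytes(16), tag)  # forged token rejected
```

The redeemer learns only "this user holds a token from an issuer I trust", not who the user is or where the token was issued.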
The issue is, as a parent who is not very technical, how do they _safely_ audit their child's social media use?
I am reasonably confident that I could control my kid's social media habit, but only up to a point. There isn't anything really stopping them from getting their own cheap phone or signing in on another person's machine.
The problem is, safely stopping kids from getting access requires strong authentication to the ISP, i.e., to get an IP address you need 2FA to sign in. But that's also how censorship/de-anonymisation happens.
the govt already set the bar at 13, so what's different about setting it at 16?
For kids under 13 to see any of the content, ask them to enter a credit card?
For one, ease up on the hyperbole if you want to be taken seriously. I'll give you the benefit of the doubt because the news is nothing but hyperbole these days, so it's easy to pick up the habit. Second, most kids aren't having "their youth, innocence and brains destroyed." The news takes the edge cases, amplifies them, and presents them as the norm to peddle fear, because fear sells. Nothing is ever as bad as the news makes it out to be, but they gotta make a dollar; you see how bad the news business is since the internet?
FWIW, my kid uses social media and just connects with her friends. Nothing overly malicious goes on, they just goof off. I've checked.
If you really wanna protect the kids from anxiety and whatnot, block the news and all the talking heads trying to manipulate the next generation toward their political opinions.