I don't really understand why YouTube won't let me create a profile on my paid family account (which I'm paying $29 NZD a month for) that lets me whitelist channels.
I'm happy for my kids to have free access to certain channels on YouTube, but the mind-numbing shorts and shit they find on random channels just does my head in. And it seems to be getting worse. I'm not sure if it's that they are getting older and able to search for more content or if the content is just getting worse, maybe both, but I'm probably just going to cancel the sub so they at least have to put up with terrible ads if they try to access it.
> I don't really understand why YouTube won't let me create a profile on my paid family account (which I'm paying $29 NZD a month for) that lets me whitelist channels
The answer to this question is always: it is too niche a product feature for a giant corporation to prioritize. This product would require constant work to keep in sync as UIs and features change. It would be one more feature to regression test against an ever-growing list of changes, and an ever-growing list of client apps that need to work across an endless list of phones, computers, TVs, etc.
This is why it is important that society normalize third party clients to public web services. We should be allowed to create and use whatever UI we want for the public endpoints that are exposed.
> We should be allowed to create and use whatever UI we want for the public endpoints that are exposed
Having been at a company that tried this: the number of poorly behaved or outright abusive clients is a huge problem. Having a client become popular with a small group of people and then receive some update that turned it into a DDoS machine, because someone made a mistake in a loop or forgot to sleep after an error, was a frequent occurrence.
The secondary problem is that when it breaks, the customers blame the company providing the service, not the team providing the client. The volume of support requests due to third party clients became unbearable.
These days there's also the problem of scraping and botting. The more open the API, the more abuse you get. You can't have security through obscurity be your only protection, but having a closed API makes a huge difference, even though bad actors can technically keep reverse engineering it if they really want to. In practice, they get tired and can't keep up.
I doubt this will be a popular anecdote on HN, but after walking the walk I understand why this idealistic concept is much harder in reality.
That feature isn't what I think the parent comment is asking for. What you've linked to is specifically YouTube Kids, and it's groups of channels whitelisted by the YouTube team. What I think the parent comment is asking for, and what I want too, is full availability of all YouTube channels, but the ability to block everything except whitelisted channels. I agree it's too niche a product. But to the people whose response to complaints about kids' access to inappropriate content is "you need to parent your kids": fine, but I need the tools to do that! A tool like this would be a godsend.
Your second paragraph is kind of funny as a solution to your first, but it was nonetheless what I was going to suggest: since it would require too much work for a multi-trillion-dollar company to be capable of building, you can instead rely on hobbyists and use yt-dlp and Jellyfin to make your own whitelisted YouTube.
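If anyone wants to go down that road, here's roughly what it can look like with yt-dlp's Python API. This is only a sketch; the channel handles, paths, and the "newest 5 uploads" limit are placeholders for your own whitelist, not a finished tool:

import yt_dlp

# Whitelisted channels -- replace with whatever your kids are allowed to watch.
WHITELIST = [
    "https://www.youtube.com/@numberphile/videos",
    "https://www.youtube.com/@veritasium/videos",
]

ydl_opts = {
    # One folder per channel, which Jellyfin can index as a library.
    "outtmpl": "/media/youtube/%(channel)s/%(title)s [%(id)s].%(ext)s",
    # Only look at the newest few uploads on each run.
    "playlistend": 5,
    # Remember what has already been downloaded so re-runs skip it.
    "download_archive": "/media/youtube/archive.txt",
    "ignoreerrors": True,
}

with yt_dlp.YoutubeDL(ydl_opts) as ydl:
    ydl.download(WHITELIST)

Run that on a schedule, point Jellyfin at /media/youtube, and you effectively have a whitelisted, ad-free, Shorts-free YouTube.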
The option (or at least documentation) does not seem to be there for computers. Is it only on mobile devices?
I don’t think this is too niche of a feature. Instead, the issue is that this would decrease the engagement (and profitability) for any customer using it, so they have a disincentive to building it. Same reason that Facebook removed features that helped customers narrow their feeds down to just favorite friends and family.
Electron apps solve the sync problem by redirecting to the main site for the full UI. Also, there's not much need for a UI in this case, because the user is not supposed to change or see the whitelist; filtering can be implemented on the server side.
No, that feature doesn't exist. My son is 4 and I looked. You can approve individual videos, but to my knowledge you can't whitelist channels for your kid. You can subscribe to a bunch of channels, and that will tend to make your kids' feed get steered in that direction, but random outliers are always a possibility.
Which as a parent of a toddler is absolutely mind-numbing.
Which means that every so often he will end up encountering either some foreign-language content (borderline appropriate if I want him to learn his native tongue first), or something with violence, etc. that is not at all appropriate, or something like some kid playing some dumb but colorful game (non-mind-enriching, pure dopamine garbage) from some rando channel.
PLUS, they seem to be abandoning YouTube Kids in order to merge its functionality into the main YouTube app, and yet...
Like, does ANYONE at Google/Alphabet/whatever actually have fucking kids?!?!?!
I paid for family YouTube just so my kid (and myself) wouldn't be forced to watch ads.
All they'd have to do here is let me ban FUCKING shorts (don't get me started... note that they intentionally made this extremely difficult to impossible, good luck blocking it at the router level), and whitelist some channels, and they'd instantly make every parent 100% happier with YouTube!
And no, I'm sorry but this would NOT be hard to build/maintain. Hell, they can hire me to build it out, I'm LFW!
I haven't tried it myself yet, but I self-host my own Jellyfin(1) instance, and I've had it recommended to combine it with Pinchflat(2), which will auto-download and label entire YouTube channels as they publish new videos. So you could use it to archive and provide access to the channels you want, without worrying about the recommendations and other channels.
I have this workflow with the ytdl-sub Docker image on my k8s cluster. It's pretty powerful at filtering down to specific videos and includes SponsorBlock; everything is configuration driven, no UI, so it can just be dropped into a YAML ConfigMap.
I rarely have to touch it unless I'm adding a new playlist or channel
Presumably for the same reason Google doesn't let you block or filter shit sites.
If you genuinely let users' preferences be taken into account, it's incredibly hard to make money from ads, because users' true preference is not to be shown any.
The entire point of ads is to manipulate and change user preferences and behaviours.
So any preferences or customisation have to be limited enough that they only partially implement user preferences. Whitelisting is a step too far against the purpose of YouTube.
Thus Google will always be biased to not letting you implement full customisability and user control.
Agreed but ElCapitanMarkla is paying for an ad free service so at that point (as far as I can see) there shouldn't be any reason they can't have what they suggest.
Whitelisting—and more user control in general—seems like such a valuable feature that they could probably charge for it. Heck, I'd pay $10 a year if I could just customise certain aspects of YouTube and remove all the ads and suggested content.
Whether this is viable or not, I don't know. I'm not sure what the average take per person is from the current model.
For Windows/Linux I've found the FreeTube app to provide a lot of sane controls. I can block channels as needed, block Shorts, hide profile pictures of commenters, and a lot of other quality-of-life things. You can even set a password for the settings if needed.
Otherwise in the browser (Firefox) I've been somewhat successful in blocking YouTube Shorts with uBlock Origin filter rules:
! Hide the shelf sections (Shorts, etc.) injected into the home grid
www.youtube.com##ytd-rich-section-renderer.ytd-rich-grid-renderer.style-scope:nth-of-type(1)
www.youtube.com##ytd-rich-section-renderer.ytd-rich-grid-renderer.style-scope:nth-of-type(2)
www.youtube.com##ytd-rich-section-renderer.ytd-rich-grid-renderer.style-scope:nth-of-type(4)
! Hide the Shorts entry (second item) in the left sidebar
www.youtube.com##ytd-guide-entry-renderer.ytd-guide-section-renderer.style-scope:nth-of-type(2)
It's pretty clear to me that Youtube shoving endless low quality content towards kids is their intended business model. It's what drives the most engagement. It's why they don't let you permanently disable YouTube Shorts. It's why they don't let you block channels easily any more. Or dislike videos. They're AB testing themselves into a low quality slop firehose.
There's some truly great content on the platform, some of it even for kids. But it gets drowned out by mountains of algorithmic slop.
I have stopped giving my kid access to YouTube. Instead I set up my own media server, filled it with pirated TV shows and movies I can curate, and give them access to that on the TV and iPad during their allowed screen time.
If you disable YouTube history, it completely removes shorts. It also breaks functionality in surprising ways (breaks back button behavior - the petty bastards)
Why? Back when I was a kid and TV/radio were the only options, it was the ads that got me to shut it off and do something else as often as not having anything to watch did. I would wager advertiser data reflects this. Conversely, I noticed a trend sometime in the 2010s: my grandkids would watch shows that didn't break to commercial after rolling the end credits but instead segued to a new episode in a mini-view, and they would never leave.
NewPipe blocks ads, and optionally blocks Shorts. NewPipe does also happen to break YouTube's terms of service.
My opinion is that YouTube should be forced to permit third-party clients (interoperate). NewPipe and the various other clients are proof that there is a desire for alternative experiences and more toggles and options. Forcing users to identify themselves online to watch videos (or certain classes of videos) is a privacy nightmare, dystopic even.
Another commenter has just pointed out that this is actually possible in the YT Kids app now. You can select approved channels and Bluey Live is one of them. I still need to see if I can approve other channels though.
Is there perhaps a way to do some kind of person-in-the-middle attack to intercept youtube packets and drop channels you don't whitelist, so that the UI only ever shows the whitelisted channels?
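For the web client you could in principle try something like a mitmproxy addon. This is very much a sketch, heavy on assumptions: the real innertube JSON is deeply nested and changes often, the channel-ID heuristic below is a guess, and the native apps pin certificates, so this only has a chance in a browser you control.

# Hypothetical mitmproxy addon: drop anything referencing a non-whitelisted
# channel ID from YouTube's /youtubei/ JSON responses.
import json
from mitmproxy import http

WHITELIST = {"UCYO_jab_esuFRV4b17AJtAw"}  # channel IDs to allow, e.g. 3Blue1Brown's

def _allowed(node) -> bool:
    # Reject any nested object that mentions a non-whitelisted "UC..." channel ID.
    if isinstance(node, dict):
        cid = node.get("browseId") or node.get("channelId")
        if isinstance(cid, str) and cid.startswith("UC") and cid not in WHITELIST:
            return False
        return all(_allowed(v) for v in node.values())
    if isinstance(node, list):
        return all(_allowed(v) for v in node)
    return True

def _filter(node):
    # Keep the structure, but drop list entries that fail the whitelist check.
    if isinstance(node, dict):
        return {k: _filter(v) for k, v in node.items()}
    if isinstance(node, list):
        return [_filter(v) for v in node if _allowed(v)]
    return node

def response(flow: http.HTTPFlow) -> None:
    if "youtube.com" in flow.request.pretty_host and "/youtubei/v1/" in flow.request.path:
        try:
            data = json.loads(flow.response.get_text())
        except (TypeError, ValueError):
            return
        flow.response.set_text(json.dumps(_filter(data)))

Whether that survives contact with YouTube's actual payloads is another question, but it shows the shape of a proxy-side whitelist.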
Have you tried creating a YouTube Kids profile? What you’re describing sounds like what they already have. It is not the default but there is a setting that allows you to create a list of allowed channels. The setting is called “Approved Content Only”.
YouTube Kids is a wasteland for non-English content. And also, there is a whole world of content I would be more than happy to encourage my kids to watch that is unavailable there.
While it also contains a huge amount of toy-unboxing crap I would not put in my own kids' watchlists.
YouTube kids has a feature to only show whitelisted channels and videos. It's been there a few years now. You can share videos to your kids directly from the YouTube app.
Whitelisting and YouTube Kids are not viable solutions for the 12-16 age group, which is the group this legislation is targeting.
Whitelisting: There is way too much appropriate content out there to whitelist it all. It's totally infeasible for a parent, unless you're planning to only approve a handful of channels, which makes YouTube pointless.
YouTube Kids: Teenagers are not "kids" and are not going to go onto YouTube Kids to watch Baby Shark and Mickey Mouse Clubhouse or whatever other kiddie stuff they have there.
Hmmm I'll check it out, I last looked into this about a year ago. I'm pretty sure it still allowed a bunch of crap through that I didn't want them to have access to.
edit: Oh neat they do have a parental approval mode in there now. Last time I was in here they only let you set an age range for the content that you wanted. It still seems a bit weird though, I can select a channel from the list they are presenting me but I can't search for some arbitrary channel to unlock. I'll have another look tonight though
But that also opens all the yt kids content, doesn't it? At least I couldn't find any way to whitelist within the kids app too. And there's just WAY too much brainrot crap in it to allow open access for my kid.
A couple months ago, I saw people everywhere online (including HN) saying they love the idea of social media bans for kids. They love the idea of keeping people under 18 safe from the dangers of porn and mature games and other unclean things as well.
Now governments around the world are acting in unison to happily give those people what they want, and people are suddenly confused and pissed that these laws mean you need to submit proof that you're over 18. And instead of being an annoying checkbox that says "I'm 18. Leave me alone", it's needing to submit a selfie and ID photo to be verified, saved, and permanently bound to your every single action online.
People who asked for social media bans for kids got what they wanted. They'll have to live with the consequences for the rest of their lives. We all will.
The simple answer to these situations is usually that it's not the same people complaining in both instances. I see similar things in places with anonymous posting where people assume everyone was in agreement on x, then later they hear something different and try to frame it like a flip-flop or a gotcha. People are never all in agreement.
To add to that, often no news is good news, or rather people won't bother posting about how they're glad minors can use social media freely, but once restrictions are in place they will quickly complain (because they prefer the old way).
> A couple months ago, I saw people everywhere online (including HN) saying they love the idea of social media bans for kids.
The common theme in these statements is that people see “social media” as something that other people consume.
All of these calls for extreme regulations share the same theme: The people calling for them assume they won’t be impacted. They think only other people consuming other content on other sites will be restricted or inconvenienced, so they don’t care about the details.
Consider how often people on Hacker News object when you explain that Hacker News is a social media site. Many people come up with their own definition of social media that excludes their preferred social sites and only includes sites they don’t use.
I think the concern about how this will be implemented (e.g. selfie and ID submission) is well-founded. I also think that letting tech companies make billions by feeding our youth mental junk food is a problem. I'm not sure where the middle path is, but I think it'll need some real thought to figure out.
If you didn’t realize that making teens verify their age online meant that everyone had to verify their age and identity online, that’s just a dangerous level of stupidity.
The issue is everyone wants some quick and easy solution when the truth is we’re going to need to get much more intentional as a society about this. Take phone bans. Everyone wants to ban phones from schools/classrooms, but the truth is in a lot of places phones are already banned from school. But we’ve spent the last 3 decades taking away any power from teachers to enforce their rules so kids just do it anyway.
> it's needing to submit a selfie and ID photo to be verified, saved, and permanently bound to your every single action online.
And leaked every 6 months, now including your ID photos and real name instead of an internet pseudonym, and lots of other sweet details that make extortion schemes child's play.
It would be cool if the post office could issue you an ID card, but for a pseudonym of your own choosing, so that when the data leaks, you can just trash it and get a new one. You could just show the dude at the post office your real id and he can check the age, but not actually write it down or link the two identities digitally.
Even cooler would be if you create a different identity for each service so when they do leak, you know who leaked it. My first id would be for John Facebook Doe.
From here in Australia, nobody was really asking for this here.
Best I can tell it came from a single but sustained pressure campaign by one of the Murdoch newspapers.
Then the Government gamed some survey polling to make it look like there was support for it (asking questions that assumed an impossible perfect system that could magically block under-16s with no age verification for adults). Still, over 40% of parents said that 15s and under should be able to access Facebook and Instagram, and over 75% of parents said they should be able to access YouTube, but the Government was acting like 95% of people were for blocking them, when it was closer to 50% of parents.
> From here in Australia, nobody was really asking for this here.
> Still, over 40% of parents said that 15s and under should be able to access Facebook and Instagram
>From here in Australia, nobody was really asking for this here.
Government in Australia is about being seen to be busy. Give them an idea that can't be morally contested, that the media won't contest, and they go about it.
Much like how we got our eSafety commissioner and internet bans. We protested them for years, but then sneaky ScoMo used Christchurch as a wedge and got it through without protest.
And as ever, our minor parties, especially the liberty-minded ones, are more concerned with what's in kids' pants than with actual liberty.
As an Australian living overseas, I heard about this on social media from friends / celebrities pushing for this to become a law so I disagree that no-one was asking for it.
Is it this one? How did the government game this poll?
> According to the YouGov poll, seen by the dpa news agency, some 77% of respondents said they would either "fully" or "somewhat" support similar legislation in Germany.
That opinion still stands. But I believe that we should regulate children's access to the internet, and not the internet's access to children, as the former does not affect adults and their free, open, and private internet, while the latter absolutely does.
I believe that there should be a standard, open framework for parental control at the OS level, where parents can see a timeline of actions, and need to whitelist every new action (any new content or contact within any app). The regulation should be that children are only allowed to use such devices. Social media would then be limited to the parent-approved circles only. A minor's TikTok homepage would likely be limited to IRL friends plus some parent-approved creators, and that's exactly how it should be.
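To make the shape of that concrete, the core of such a framework is little more than an approval queue; a toy sketch (all the names here are made up for illustration, not any real OS API):

from dataclasses import dataclass, field
from datetime import datetime
from enum import Enum

class Decision(Enum):
    PENDING = "pending"
    APPROVED = "approved"
    DENIED = "denied"

@dataclass
class ActionRequest:
    # A new contact, channel, app, etc. that the child tried to reach.
    child: str
    app: str
    action: str  # e.g. "subscribe to channel X", "open chat with new contact"
    requested_at: datetime = field(default_factory=datetime.now)
    decision: Decision = Decision.PENDING

@dataclass
class ParentalLedger:
    timeline: list[ActionRequest] = field(default_factory=list)

    def request(self, child: str, app: str, action: str) -> ActionRequest:
        # The child's device blocks the action until a parent decides.
        req = ActionRequest(child, app, action)
        self.timeline.append(req)
        return req

    def approve(self, req: ActionRequest) -> None:
        req.decision = Decision.APPROVED

The hard part isn't the data model, it's getting every app on the device to route new contacts and content sources through it, which is exactly why it would need to be an OS-level standard rather than a per-app feature.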
Why do you need regulation for any of that? Devices with parental controls exist already. Special browsers with parental controls exist, just for kids. Do you think Jane Smith, L3 civil servant, will do a great job of taking over product management for the entire software industry despite having a BA in English Lit and having never heard of JIRA?
There's no need for any regulations here and never was. It was always a power grab by governments and now the people who trusted the state are making surprised pikachu faces. "We didn't mean like this", they cry, whilst studiously ignoring all the people who predicted exactly this outcome.
An easy solution is to limit their access to the device. If they can only use the devices in your living room when you are sitting next to them you keep full control.
Admittedly, at some point they reach their teenage years and should have a right to privacy, so even having access to a timeline of actions seems like a no-go to me. The same way they can wander off in the street on their own, write private letters to people, or have private calls with friends.
> A couple months ago, I saw people everywhere online (including HN) saying they love the idea of social media bans for kids.
The funny thing is hearing adult people shouting aloud that kids suffer from social media use and blah blah blah, when the same people have been ruining their own relationships with their life partners, their families, and even their whole lives for years by spending way too much time in front of TVs and computers and by doomscrolling all day on Instagram and TikTok.
I don't understand how these people are all acting as if only children need to be saved. Banning stuff to children won't even work if the only example they have of adulthood are people with a hunchback staring lifelessly at a small screen on the palm of their hand all day.
How is this funny? You make it sound like it's hypocritical or self-unaware. I'm finding the opposite: it's exactly because these people are aware of what social media does to them (and/or close friends and relationships) that they want it to be out of kids' hands, who'd be even more impacted by the negative aspects of it.
In other words, they're not saying "it's okay when I do it but not kids", they're saying "even as an adult it's impacting me, let's not poison kids"
Until about the age of 12, banning inappropriate media and the people who carry it is the sole responsibility of the parents. Between 12 and 16 there is an interaction with the child, and afterwards the teen decides for themselves. The same goes for social relations, education, every life choice.
No silly age IDs and selfies, no unstable and unsafe procedures, no permanent damage.
When I mentioned that any attempt at identifying users in order to access or post content is a trojan horse for wide surveillance, HN users downvoted and flagged such comments and were zealously supportive of "prottecct kidz".
In the late 90s and early 2000s we as teenagers had access to an unfiltered and unregulated internet. The harm to us was largely moral fanaticism; this was when they also tried to ban video games because of violent content, and now we have complete censorship and control over which games can sell on Steam or not.
Much of the panic about social media, amplified by Protestants and religious people, is greatly exaggerated. Porn isn't the danger; it's the addictive tendencies of the individual that must be educated upon.
Yep. This feels a lot like a repeat of the moral panic from the early eras, only this time the policies are unfortunately within the overton window instead of outside, and have shown to be popular outside of tech circles.
We beat the moral panic last time and kept our freedoms. This time I'm not so certain that we will prevail, there seems to be a coordinated/unified effort on this wide spread surveillance and my hunch tells me the rise of authoritarianism around the world is the drive - much easier to oppress a population in a surveillance state. The "for the children" argument is as old as time.
> In the late 90s and early 2000s we as teenagers had access to an unfiltered and unregulated internet. The harm to us was largely moral fanaticism; this was when they also tried to ban video games because of violent content, and now we have complete censorship and control over which games can sell on Steam or not.
I get your point but I don't agree.
I mean, politicians back then were actually right in assuming that danger looms on the Internet. They just were completely wrong about what was the danger. Everyone and their dog thought that the danger was porn, violent video games (Columbine and Erfurt certainly didn't help there), gore videos (anyone 'member RottenCom), shocker sites (RIP Goatse), more porn, oh and did I say they were afraid of boobs? Or even of cars "shaking" when you picked up a sex worker in GTA and parked in a bush?
What they all missed though was the propaganda, the nutjobs, the ability of all the village idiots of the entire world that were left to solitude by society to now organize, the drive of monetization. That's how we got 4chan which began decent (Project Chanology!) but eventually led to GamerGate, 8chan and a bunch of far-right terrorists; social media itself fueled lynch mobs, enabled enemy states to distribute propaganda at a scale never before seen in the history of humanity and may or may not have played a pivotal role in many a regime change (early Twitter, that was a time...); and now we got EA and a whole bunch of free to play mobile games shoving microtransactions down our children's throats. Tetris of all things just keeps shoving gambling ads in your face after each level. The kids we're not gonna lose to far-right propaganda, we're gonna lose to fucking casinos.
We should have brought down the hammer hard on all of that crap instead of wasting our energy on trying to prevent teenagers from having a good old fashioned wank.
The case study of Bukele in El Salvador shows why this is naive. Low safety directly caused low liberty, because voters care about safety more than liberty, due to Maslow's Hierarchy of Needs.
Delivering safety is a necessary condition for preserving liberty. It is not a nuisance or a side quest.
> People who asked for social media bans for kids got what they wanted. They'll have to live with the consequences for the rest of their lives. We all will.
This isn't the right way to characterise what happened. Governments are doing this in unison; it is a coordinated campaign that has obviously been coming for a couple of years. Remember that governments wanted to act against misinformation? Well, this is it. The deanonymised internet. Aus, UK, US, etc - it's on the way.
What you are seeing with certain comments etc is probably a lot of genuine comments primed by stories of cases where id would have apparently prevented something-or-other, along with comments from agents and bots. This is how modern governance actually works.
There is a goal (here, it's the deanonymised internet), then the excuse (children, porn, terrorists), then the apparent groundswell of support (supportive comments on HN, etc), then actual comments that validly complain this is dystopian but go nowhere (auto-downvoted or memory-holed by mods), which gives the appearance to most that no one really cares and this should simply be accepted. So, a difficult idea, managed correctly, can get past everyone with the minimum of fuss.
Of course they are angry. People only love govt intervention when they think it won't affect them. Us vs them mentality is everywhere these days. Even when them is our own children.
The issue I have is this is all for naught. All it does is make things more complicated.
Some with kids will praise and use it as intended. Many with kids won't. Those without kids won't. All in return for the ultimate in monitoring.
And then people will work around it in various ways. Use forums or chat-group apps that don't comply with the law as intended. Share videos in other ways.
This whole shebang is pointless for enforcement and scary for authoritarianism - worst of both worlds.
>People who asked for social media bans for kids got what they wanted.
These are also the people who have essentially outsourced a lot of upbringing of their kids to the govt. They couldn't be bothered with the nuances of the lives their kids lead.
I would like to note that I am probably grossly unqualified to talk about such topics, but one idea that I've had rolling around is that if you ask people "should kids be able to watch/read/be exposed to [insert adult thing here]", they will inevitably say no, of course not. I feel like this is pretty reasonable. For advocates of privacy to succeed, I believe they will need to not just oppose censorship on a global scale, but provide solutions.
One thing technology has not changed is the unit of human relations. From foster care to single or two parents, the idea of a family is still there in society. In my opinion, this group is greatly underserved, and I do not believe it is enough to say "it's the parents' responsibility" to curate content. That is a full-time job. Now, I will be the first to say that children do not at all need to have a smartphone/iPad/etc. until they are in their teens, but restricting all technology use can be hard. There need to be tools that allow parents to choose what their children are allowed to be exposed to. Some parents will choose complete freedom, some will choose some "censorship." But I believe the power should rest in the hands of the parents, and I am strongly opposed to the government dictating this choice.
I believe one thing the government can be good at is enforcing standards and providing reference implementations that would allow such curation to be possible. Imagine if you walk into an Apple store and say you are buying a phone for your child, and they ask you: would you like a side of censorship with that? Or if companies like YouTube, being platforms with children on them, needed to provide a means for content to be curated, for channels and features to be blocked, etc. I am not sure if what I am proposing is the right way forward, but I would love to see governments tackle this problem of giving power back to the parents, instead of seeing governments attempt to enforce their worldviews onto others. I am also interested in how there would be a handoff from a "child-friendly" internet to a fully uncensored one within families. I believe that outright rejecting censorship of what children can access will do nothing to assuage the fears of people who do not want their children accessing random websites, and that a solution that keeps the power in the hands of the people, and not the government, is needed.
I think a practical problem with this is that even if you offer this tech, you will inevitably get groups of parents insisting the government use this power to enforce their values on other parents as a matter of course. We see this already with the erosion of cultural norms for free speech.
Apple already provides heaps of tools to help moderate what children can do on their phones; you can limit apps' screen time and disable some apps altogether.
People want these laws simply because it's hard to say no to your kids, and it's a lot easier to tell your kids it's the government's fault they can't use social media any more.
> People who asked for social media bans for kids got what they wanted. They'll have to live with the consequences for the rest of their lives. We all will.
I guess I'm fine with not visiting any of these age-restricted sites. They're not the thing I would miss if the whole internet shut down. (In fact, there's precious little I would miss — maybe just archive.org?)
It's going to be every website. There will be no place they will stop. You think a forum like this one where it's conceivably possible someone in a bad category could interact with someone under the age of 16, however unlikely, won't be regulated?
"But sir! The largest websites on the internet implement Government ID Age Check. Just federate with one of those, why are you complaining so much? Don't you want to protect the children or stop anti-Semitism or something?"
If "save the children" creates enough friction to bring the demise of social media then I'll go lay a flower on Anita Bryant's grave and tell her I'm sorry.
Requiring photo IDs is not the only solution. Things don't have to be implemented that way. I can be both for privacy in this case and limits on social media. Australia already requires you to register for voting and other things, so the trivial solution here is: give out anonymous time-limited tokens from the gov site, with no logging. Essentially a signed timestamp + random number.
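For the curious, the mechanics of that are almost trivial; a toy sketch of what "signed timestamp + random number" could look like (not any real government scheme, just the shape of one):

import base64, json, os, time
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

# Government side: holds the signing key, publishes only the verification key.
signing_key = Ed25519PrivateKey.generate()
verify_key = signing_key.public_key()

def issue_token() -> str:
    # Nothing about the person is embedded: just an expiry and a random nonce.
    payload = json.dumps({
        "expires": int(time.time()) + 3600,  # valid for one hour
        "nonce": base64.b64encode(os.urandom(16)).decode(),
    }).encode()
    sig = signing_key.sign(payload)
    return base64.b64encode(payload).decode() + "." + base64.b64encode(sig).decode()

def verify_token(token: str) -> bool:
    # Platform side: check the signature and expiry, learn nothing else.
    try:
        payload_b64, sig_b64 = token.split(".")
        payload = base64.b64decode(payload_b64)
        verify_key.verify(base64.b64decode(sig_b64), payload)
    except (ValueError, InvalidSignature):
        return False
    return json.loads(payload)["expires"] > time.time()

Whether it can be kept abuse-resistant without logging is a separate question, but issuing and checking the token itself reveals nothing about who is behind it.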
> People who asked for social media bans for kids got what they wanted.
So instead of using a photo ID so that everything is logged, register your every interaction online with the government directly. The government which also has your photo ID on record.
There's zero difference. Either way, the government will be monitoring your every single little comment online and having it forever tied to your person. And that will have a chilling effect on individual liberties.
> Australia already requires you to register for voting and other things, so the trivial solution here is: give out anonymous time-limited tokens from the gov site
Which “gov site”? Registering for voting does not give you an electronic log in of any kind.
> give out anonymous time-limited tokens from the gov site, with no logging
Awful idea.
This gives the government the power to deny you access to mass communication by deciding that you're no longer allowed to verify with these platforms.
"Been protesting the wrong things? Been talking about the wrong war crimes? Been advocating for the wrong LGBT policies? Failed to pay child support? Failed to pay back-taxes? Sorry you're no longer eligible for authenticating with social media services. You're too dangerous."
That is not beyond the pale for the Australian government.
You're also at the mercy of them to actually adhere to the "no logging" part, with absolutely no mechanism to verify that. And it can be changed at any time, in targeted ways, again with no way for you to know.
A better idea would be to sell anonymous age verification cards at adult stores, liquor stores, tobacco stores, etc. Paid in cash. An even better idea is to not do any of this and spend the money on a campaign to educate parents and institutions on how to use existing parental controls.
> so the trivial solution here is: give out anonymous time-limited tokens from the gov site, with no logging. Essentially a signed timestamp + random number
The trivial workaround is for people to create ad supported websites to hand out those tokens.
If there’s no logging then they can’t determine who’s abusing it or if they’ve even generated a different token recently, so people can generate and hand out all the tokens they want.
So then the goalposts move again, and now there’s some logging in this hypothetical solution to prevent abuse, but of course this means we’ve arrived at the situation where accessing any website first requires everyone to do a nice little logged handshake with the government to determine if they have permission. What could go wrong?
The real workaround is for people (including kids) to buy themselves a VPN subscription for a couple of bucks per month and leave all of this behind, while the old people are left jumping through hoops.
YouTube Kids is exempt from the ban; this one should have been banned first because of the sheer amount of smoothbrain content.
Channels like Cocomelon and AI-generated songs with weird visuals are played on infinite loop, with a mobile stand holding the phone in front of the child's pram while the parents pay no attention, and the children are hooked onto it as if they are hypnotized.
These videos, at an early stage of childhood, have a very strong impact on children's awareness of their environment and on their vocabulary.
Yes, and I agree that bad parenting is the real issue, but I also have to acknowledge that leaving children unsupervised will have better results if they don't have the option of just watching a screen.
That said, I don't see any bans changing this, parents will just give young children access to their own "adult" YouTube account.
> These videos, at an early stage of childhood, have a very strong impact on children's awareness of their environment and on their vocabulary.
I just managed to navigate the entire preschool age range without my children seeing a single Cocomelon video on YouTube. It's surprisingly easy, and it makes me really wonder why people are complaining. It's as if they feel like they have to show these videos to their kids or something.
Don't people have a slop filter? Or are they just opening the YouTube Kids app and blindly handing their phone to a preschool child to watch whatever they want?
The argument is, apparently, that keeping your pre-school children away from these things is just too hard in this day and age. Your wee child will be ostracized if they don't have unfettered access to the Internet...
they said the same thing about books, "young people are hypnotised!" And I agree that it has an impact on vocabulary, it widens it. You learn to talk by hearing other people do it, and youtube is full of different accents and ways of talking. How many parents would take them outside to meet that many different people?
> they said the same thing about books, "young people are hypnotised!"
It really doesn't matter what "they" said about books. We are talking about screen time. And screen time has measurably harmful effects on child development.
It leads to worse outcomes across the board. Sleep disorders. Obesity. Mental health disorders. Depression. Anxiety. Decreased ability to interpret emotions. Aggressive conduct. And this is to say nothing of ADHD (7.7 times higher likelihood in the heaviest screen users) or social media's effects on adolescents. [1][2]
It's a little bit of a stretch to call YouTube social media. There are tons of great instructional videos.
The real kicker to me is that the government has passed a law restricting access yet they haven't determined how they're going to enforce an age check. It's wild that they passed a law without consideration to its mechanics or feasibility.
> It's wild that they passed a law without consideration to its mechanics or feasibility.
It's not. Much of the world's governments (particularly those that follow the UK system) implement smaller laws and then delegate the implementation to statutory instruments/secondary legislation, written by experts and then adopted by ministers.
It seems suboptimal, but then so does the alternative of a "big beautiful bill" full of absurd detail where you have people voting it into law who not only haven't fucking read it but are now not ashamed that not only have they not fucking read it, nobody on their staff was tasked with fucking reading it and fucking telling them what the fuck is in it.
Lighter weight laws that establish intent and then legally require the creation of statutory instruments tend to make things easier, particularly when parliament can scrutinise the statutory instruments and get them modified to better fit the intent of the law.
It also means if no satisfactory statutory instrument/secondary legislation can be created, the law exists on the books unimplemented, of course, but it allows one parliament to set the direction of travel and leave the implementation to subsequent parliaments, which tends to stop the kind of whiplash we see in US politics.
ETA: for example, the secondary legislation committee in the UK, which is cross-party, is currently scrutinising these:
There is a happy medium. The big beautiful bill stuff is not normal. There are some states that have single issue clauses where the bill must be a single issue, resulting in more concise bills. Enforcement and rules can be made by agencies too. I think the whiplash is more of a two party thing since the bipartisan ones rarely flip-flop. The other stuff barely passes. We would still have whiplash even if implementation were left to another congress because it would still barely pass.
Except in Australia experts don't come into it, except for sham inquiries that are held as a matter of course.
In this case, basically all the tech experts and child safety experts were saying that a blanket ban is not a workable policy, and could create harms in certain marginalised demographics where teens may rely on social media for support, yet the Government ignored them all and ploughed ahead.
The only changes to the legislation came from some political horse trading with the Opposition to get it through the Senate.
> It's a little bit of a stretch to call YouTube social media. There are tons of great instructional videos.
It’s clearly social media. It consists of user-generated content and has discussion features.
There’s a big problem with tech people coming up with their own definition of social media that exclusively includes sites they don’t use (TikTok, Facebook) but conveniently excludes sites they do like (YouTube, Discord, Hacker News). This makes them think extreme regulation and government intervention is a good thing because it will only impact the bad social media sites that they don’t want other people accessing. Then when the laws come out and they realize it impacts social media regardless of whether you like it or use it, they suddenly realize how bad of an idea it was to call for that regulation.
What I like has no bearing on what I consider social media. When I use YouTube, I don't connect or interact with others. I'm just consuming content similar to Netflix. Maybe the comments section could be considered that. However, that means any site with comments is social media - news sites, stores with reviews, etc.
It's not too much effort to find µBlock Origin filter lists that hide them. The only time I see YouTube shorts is when I deliberately navigate to the shorts tab on a channel page.
> The real kicker to me is that the government has passed a law restricting access yet they haven't determined how they're going to enforce an age check. It's wild that they passed a law without consideration to its mechanics or feasibility.
I predict it won't even matter. This law is unenforceable in practice. There is nothing that a bored and highly-motivated teenager who has hours after school to fuck around, won't be able to circumvent. I think back to my teenage years: None of the half-assed attempts made to keep teenagers away from booze, cigarettes, drugs, or porn even remotely worked. These things were readily available to anyone who wanted them. If there is an "I am an adult" digital token, teenagers will easily figure out how to mint them. If the restrictions can be bypassed with VPNs, that's what they will do.
> It's a little bit of a stretch to call YouTube social media.
Is it? As far as I can tell, the definition of social media is a platform where it is trivial to publish to it. That definitely fits YouTube.
The fact that there is great educational content on it (and I 100% agree that there is great educational content) is pretty much solely due to a passionate community, not really anything YouTube itself does to prioritize that kind of content. In fact, as far as I can tell, it's getting harder to find.
Even a very 'light' definition would catch YouTube, I'm convinced of this. The UK's definition is—broadly—any site that a user can take an action on that would affect other users. This would definitely catch a forum like HN, any site with comments, etc. Personally, I feel that, combined with draconian identity requirements, that goes way too far, but I think I'd struggle to draw a line that better fits the alleged intent of these political moves.
I was in Melbourne Central the other day and there were big ads up for identity verification platforms, where consumer brands normally put up their ads. That'll prime the brand recognition for everyone so that when the identity checks come in, people will feel more comfortable complying.
They aren't banning viewing videos, they're banning kids having an account I believe.
I'm sure their approach to enforcement will be something along the lines of relying on the websites to sort it out and fining them if they don't. The govt doesn't need to enforce the age check themselves or even provide or suggest a mechanism.
I imagine any smaller players in this market will just stay away from having an official presence in Australia.
This ban includes watching videos. The law says they must take action to prevent underage persons from accessing their services. This means they will likely have to require login and age verify any accounts. The carve out in the article is talking about teachers and parents being allowed to show the content to the kids.
"The govt doesn't need to enforce the age check themselves or even provide or suggest a mechanism."
I suppose it will be up to the courts to decide what is reasonable as an age check. However, the government has said that they don't want to include full ID checks, which is why one would assume they would provide guidance on how to comply.
Yes, but it's also unregulated and full of shit. Moreover, it's designed to feed you more of the stuff that you like, regardless of the consequences.
For adults, that's probably fine (I mean it's not, but that's out of scope); for kids, it'll fuck you up. Especially as there isn't anything else to counteract it. (Think back to when you had that one mate who was into conspiracy theories. They'd get books from the library, or from some dark part of the web. But there was always the rest of society to reinforce how much it's all bollocks. That doesn't exist now, as there isn't a canonical source; it's all advertising clicks.)
They passed the law without considering the past decades of attempts to prevent minors from accessing all kinds of content on the internet. Anyone who grew up with internet access knows it won't work. Even if you put up a country-level firewall it's basically impossible to stop people from finding what they want on the internet without spending way too much effort to be politically viable.
YT still has the great instructional videos, but teens today (my son included) are mostly just scrolling the shorts just like TikTok. YT is heavily orienting itself as social media.
Completely banning all of YouTube feels like throwing out the baby—valuable educational content—with the bathwater—everything else. It seems more effective for YouTube to offer a dedicated educational platform, like education.youtube.com, with content filters built in. That way, students could access channels like 3blue1brown without exposure to unrelated or less appropriate content like MrBeast or Jubilee. Heck, I might personally prefer to use that version of YT myself.
As a parent (who also btw uses Google products every single effin day) I just can't agree.
This is entirely Google's issue to fix. Yes, YouTube has amazing educational content. I'd really like to make it available for my kids to see.
YouTube, however, makes it completely impossible to permanently filter/hide/disable the bane that is YouTube Shorts. I don't let my kids on TikTok not because it's Chinese, but because it's trash. I don't allow them near Instagram either.
The chances of kids growing an attention span by seeing interesting stuff in installments of 30 seconds approach zero really, really fast. Yes, there's the possibility of telling a fun joke, demonstrating an optical illusion, or showing some interesting curiosity in under a minute. But it's far more likely that it's trash, teaching kids (and adults) that if they don't get a kick out of something within the first 10 seconds, it should be skipped.
And it's not necessarily about the age/quality rating of content; UX matters. It's totally different to find that your kid wasted an hour of their life doom-scrolling over 150 videos of which they didn't even complete half, or that they spent it watching half a dozen videos of dubious quality: if it's half a dozen, it's at least feasible to discuss with them why some are better than others.
So, I'm very close to just banning YouTube (at the DNS level if required). Which is a shame, because I then can't share the interesting stuff with them, and neither can their teachers.
You can completely disable Shorts by turning off your YouTube history.
No idea why, but it works and it's blissful. Plus you can still like videos, subscribe to channels and curate your own lists if you want to bookmark stuff to come back to.
So then block google/YT and call it a day? It's absolutely not Google's problem.
This isn't a "real life" thing - it's not like there's a strip club with open windows next door to your house for your children to look into. We're talking about a computer/iPad/mobile phone - block YT at the DNS level or better yet, don't even give one to your kids. Problem solved.
Other people shouldn't have to be punished with breaches of their privacy because some people can't manage their child's online time.
I have been able to somewhat reasonably block youtube shorts with the following custom filter ublock origin rules (on firefox at least).
Note that it might accidentally hide some legitimate stuff but from my experience it should be pretty minimal if any. I think to hide the shorts from the left sidebar it hides one of your subscribed channels but that's all I've noticed so far.
I feel you about short content, I've taken to using uBlock origin with a custom filter to eliminate shorts from the front page. On the other hand when a youtuber makes a video 10-40 minutes long when the brunt of the information could be 1-5 minutes that gets my goat as well. My children do benefit from the amazing assortment of educational and entertaining options, but we watch together and talk about what we see, they're becoming media savvy and complain when sponsor block misses a segment. If we all skipped the ads we would see a new internet emerge.
3. Have a "show me more content like this" button, but again, no auto algorithmic feeds
4. Filter out age inappropriate content.
would be great for teenagers. I think the problem for YouTube is that it would be great for everyone else, too, so they'd get bombarded by "Hey, I want that version" requests, which would clearly make them less money.
There is no moral high ground with basically any online platforms, it's all solely based on financials, and people should realize this.
I think Google/YouTube would slow-walk the hell out of this only because they are making a ton off of the worst, basest of content and more filters = less eyeballs.
See also: Facebook "efforts" to stop scam advertisements and Marketplace fuckery
But this is basically the way for Australian government to try to make YouTube do that isn't it? There's already YouTube Kids, so maybe this makes YouTube think ok we need YouTube Teenz, or YouTube Educational or whatever.
YouTube Kids is also full of garbage. The bar to get content into YouTube Kids is substantially higher than YouTube but still the average video's educational quality is abysmal.
There are people at YouTube/Google/Alphabet who care but at the end of the day we get what the invisible hand gives us. Market forces have not yielded a well-curated educational video experience on YouTube.
Oh, is that the majority of their content, traditional educational content? I must be mistaken in thinking they were funneling their audience into “shorts” and that kids obviously naturally recoil from “shorts” as much as they do green veggies and chores…
The amount of bathwater is increasing rapidly, whilst the baby is about the same size.
And it's almost purely bathwater that gets put in my face on the YT front page. The occasional baby pops up.
(as someone who rarely logs in, and only with a couple of throw away-ish accounts because I don't like being tracked and don't like YT/Google - so this will affect my perception of the baby:bathwater ratio)
YouTube has so much good content with sub-5000 views. Lectures or interviews with quality thinkers who avoid the podcast bro drama circuit. Hard to discover with Youtube's junk-food recommendation engine.
Putting that aside, the reality is that kids are bored, highly motivated, and networked with each other across the planet. Even more than porn, which is only going to appeal to a subset of kids, "all of Youtube" is definitely a bit more universal.
The major outcome of this legislation should be nothing more than Australian kids becoming the most familiar with VPNs, and very little else, along with other tricks to bypass this.
Youtube is optimized for engagement and ad revenue. In my experience, there's more click/rage bait and entertainment than educational content (perhaps that reflects my algorithm, haha). Unless there's improved content moderation or media training, I can see how this would ultimately benefit teens, as their minds are still developing.
The bathwater is not any specific piece of content but the YouTube discovery and recommendation algorithm. As long as that's in place, there will be incentive to create terrible "slop" content to get into "education.youtube.com" and collect ad revenue. The same thing happened with kids.youtube.com[1] and I don't see a solution other than hand-curating channels for inclusion.
Well put. I do not agree with the clumsy approach taken by places like Australia, the UK, and Texas, but I absolutely consider YouTube and social media to be problems, responsible for the tsunami of lowest-common-denominator slop. Free market/user choice idealists need to face up to the fact that slop is bad and lowers standards rather than elevating them, because the economic incentives tilt in favor of low quality, sensationalism, and so on. To some extent that's a reflection of the viewing/clicking population, but that doesn't mean that you should always just give people more of what they want. We tried that with high fructose corn syrup and the result is whole populations ravaged by obesity and diabetes.
YouTube has gotten so much worse in the last 6 months though; the introduction of Shorts has devalued the platform terribly, and it seems like all the good educational creators are moving off it anyway, and now it's just ripped crap that is often AI-produced. Hopefully this move makes some actual competition show up for YouTube, because it sorely needs it.
Yeah but how do you decide who's educational content and who isn't? Mr Beast does tons of "educational" videos in the context of "$1 vs $10,000,000 house" or "living in Antarctica for a week". Same with Jubilee.
The real big-brain move is understanding this isn't about protecting kids, and there isn't really anything YouTube can do long-term. Australia has been going after US big tech for a long time
Why are so many countries like Australia, UK, EU, etc suddenly pro censorship. Aren’t these all liberal democracies? I would think these policies would be very unpopular. Is there some analysis of how this came to be normalized?
I'm an everyday Australian, I'll take a few guesses. (I don't support these new laws)
1. We don't have as antagonistic a relationship with our government, and we trust that most of what will be banned will be gross stuff we don't want weirdos watching.
2. I think most people feel social media really is breaking young people, and it's easier if all kids are banned than just trying to ban your own kids. It's really hard to explain to a kid why they are not allowed to watch YouTube when every other kid is.
Update: Also, the only thing this law is going to do is to force every parent in Australia to create accounts for their kids.
To use the Donald Horne quote "Australia is a lucky country run mainly by second rate people who share its luck. It lives on other people's ideas, and, although its ordinary people are adaptable, most of its leaders (in all fields) so lack curiosity about the events that surround them that they are often taken by surprise."
Unfortunately, this has propagated down to a lot of the people. They want the government to be the parent instead.
As Jordan Shanks once said - "I have 6 investment properties" is the entire personality of a lot of Aussies. Many others are the same they just don't have the opportunity.
This whole situation appears to be a failing from all angles: from government overreach, to corporate greed forgoing morals, to a people so worn down they just don't have anything left to give.
There are multiple studies showing the negative impact of social media on teens' health.
It's not about censorship but about forcing companies that don't care at all to be held accountable.
I'm not sure the approach taken by Australia will be effective (I'm not sure how it can be implemented), but I don't see the problem with doing something against harmful companies like Meta, TikTok, or X/Twitter.
Australia used to have energy for protesting this sort of shit, but its all spent.
We used to have a pretty decently funded anti internet censorship lobby. It died in the 2010s.
Since then it's just been hit after hit after hit. Any minute justification is seized upon to wind up internet freedoms.
Former PM Turncoat said, "The laws of mathematics are very commendable, but the only law that applies in Australia is the law of Australia." That was 2017, and so far it's been a bipartisan position.
The truth is that industry used to also oppose censorship, but it's been completely captured. Every time one of these censorship proposals comes through, AusNOG gets the usual "Should we act this time?" emails, and nothing comes of it.
It's over. Freedom of communication is dead in this country, instead of our politicians.
I think a key point of it is that those in power know that if there is bipartisan support, they can ignore all protests.
All the campaigns I was involved in for well over a decade achieved absolutely nothing because of this. It is worse than that now; seeing the screws slowly get tightened on peaceful protest makes it even worse. They don't just ignore it, they actively suppress it and get away with it.
A few years back I wrote an essay about the passing of Ted Kaczynski. It was never published, as they said it was a topic you do not touch. However, my conclusion was that I fear the "children of Ted": those who end up so silenced, so radicalized by their own oppression, that violence becomes their only answer. I suspect we are only a decade or two away from this on a lot of issues.
This is why "think of the children" is always used in these instances: it gets right past people's defenses, and if you try to argue against privacy-invasive, life-invasive, completely useless regulations (or regulations ripe for abuse, by design), then you are somehow the "bad guy".
A majority of adult GenZs who've grown up with this stuff agree it was bad for their childhood and most older adults use social media and feel the negative effects too. Using some sophistry to argue it's all made up by the government is like Democrats arguing Biden was fit to run a second term when everyone can see with their own eyes he was not.
The sharpening contradictions of capitalism are leading to an impasse, forcing its servile governments to clamp down preventively on workers' rebellions around the Western world. So yes, there are analyses of this, and analytical/scientific tools to understand these phenomena.
Mass Internet censorship in the Western world started in 2020 with restrictions on Covid information and discussion. This is just the logical conclusion.
I'm an Australian who completed the eSafety survey which helped guide this policy. I pushed for anonymous temporary age verification tokens generated through a government app.
Social media is undermining the fabric of our societies and destroying a whole generation's emotional development. I support this, in part because I know those who want to get around it or stay private will always find a way, but it has a positive, reality-affirming effect on the public.
Watch the press conference from our PM and comms minister from yesterday to judge whether this is coming from a place of compassion or control. They have said repeatedly they will always ensure a non-ID method is available. I know there are flaws in that, though. https://youtu.be/SCSMQUmrh38?feature=shared
I think homomorphic encryption through a third party would be better: the government app could be one side of it, providing blinded evidence of identity to the intermediary.
Maybe this is what you meant? It's what the CSIRO and the Privacy Commissioner said was their recommended method for doing proofs of age/identity through government-issued documents, without revealing which URL was being accessed.
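To make the "blinded evidence" idea concrete, here is a minimal, purely illustrative sketch using textbook RSA blind signatures (toy key, no padding; this is not what CSIRO or the eSafety process actually specifies). The point is the flow: the issuer attests to the holder's age without ever seeing the token it signs, so presenting the token later can't be linked back to the issuance request.

    # Toy sketch of a blinded "over 18" token via textbook RSA blind signatures.
    # Purely illustrative: tiny demo key, no padding, not secure. Requires Python 3.8+.
    import hashlib
    import secrets
    from math import gcd

    # Textbook RSA demo key (p=61, q=53). A real issuer would use a proper keypair.
    N, E, D = 3233, 17, 2753

    def h(msg: bytes) -> int:
        return int.from_bytes(hashlib.sha256(msg).digest(), "big") % N

    # 1. Holder builds a one-time token and blinds it with a random factor r.
    token = b"over18:" + secrets.token_hex(8).encode()
    m = h(token)
    while True:
        r = secrets.randbelow(N - 2) + 2
        if gcd(r, N) == 1:
            break
    blinded = (m * pow(r, E, N)) % N

    # 2. Issuer (the government app) checks age out of band and signs the blinded
    #    value. It never learns `token`, so later use of the token can't be linked.
    blind_sig = pow(blinded, D, N)

    # 3. Holder unblinds to obtain an ordinary signature on the token.
    sig = (blind_sig * pow(r, -1, N)) % N

    # 4. Any site can verify against the issuer's public key alone.
    assert pow(sig, E, N) == h(token)
    print("token accepted:", token.decode())

In a real deployment the issuer would use a full-size key with proper blind-signature padding (or a modern anonymous-credential scheme), and tokens would be single-use with an expiry.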
Leaving aside the merits of the ban for a moment...
This is politically beneficial because Google and Facebook squandered historically broad and strong goodwill, and they made themselves a target in the culture wars.
Google would have survived just fine with its historically light touch on ads.
Both would have been ok without monetizing data collected from users.
Both would be successful allowing users to pick aspects they wanted (e.g., shorts or not), rather than coercing them.
Unfortunately, there's no market feedback for missed future opportunities, and PR that dampens and side-steps negative sentiment yields only weak positive benefits, so there's no correction.
Had Google taken the privacy tack that Apple did, we might all be storing our most critical data on their servers (given their high data center standards), and thus inclined to do most business on Google cloud.
Both companies have founders still directing a majority of shares. There's no excuse of corruption by short-sighted shareholders.
Page and Brin have consistently escaped blame for things that Zuckerberg and Musk are excoriated for. It turns out that you just have to lie low, instead of jumping up and down for attention in the press or on social media, and people will obligingly forget that you exist and have effectively full control over your giant, society-dominating company.
This is it 100%. Zuckerberg and Musk aren't doing anything more awful than Page or Brin, but they're smart enough to understand that it's WAY easier to get away with sketchy stuff when you're not someone that the average person knows about. There's a huge amount of value in keeping your mouth shut.
I think there's a lot of wishful thinking in your post. Alphabet is the 5th biggest and richest company in the world. From a capitalistic perspective, they made everything right and the point you bring are negligible.
Perhaps. A hypothesis that their business would be bigger could be seen as a wish. But Alphabet's stock multiples are below those of comparable companies. From a capitalist perspective, that means they're being poorly run.
So I think you can do more than minimize ("wishful", "negligible"). Also, what you seem to be saying is that being among the biggest makes everything they did right, so one should just accept that.
If I were a Google principal hiring any leader, I'd want to select candidates who had concrete plans for improvement -- opportunities to grow the business or to correct mistakes. Isn't that a better approach?
If instead Google principals and managers were hiring for those who accept their decisions - loyalty -- I'd run in the opposite direction.
Our government intends to spruik this at the UN and get other countries on board.
Our government has said there will always be a non-ID method.
YouTube will still be accessible; it is just account creation/use that will be banned.
Posting my threaded comment higher up:
I'm an Australian who completed the eSafety survey which helped guide this policy. I pushed for anonymous temporary age verification tokens generated through a government app.
Social media is undermining the fabric of our societies, destroying a whole generation's emotional development, and institutionalising a culture of infectious insecurity. I support this, in part because I know those who want to get around it or stay private will always find a way, but it has a positive, reality-affirming effect on the public.
Watch the press conference from our PM and comms minister from yesterday to make up your mind on whether this is coming from a place of compassion or control. They have said repeatedly they will always ensure a non-ID method is available. I know there are flaws in that, though. https://youtu.be/SCSMQUmrh38?feature=shared
It's interesting that the press conference felt so uniquely grounded in reality and authentically emotional. Maybe that's because they are directly challenging the delegitimising, impermanent reality of social media.
Yes, they did bring families with children who had passed away from social media abuse on stage, but it felt genuine. That doesn't mean your privacy concerns aren't real, but they don't always trump protecting a child's emotional development.
And the kicker is that the above app doesn't even need to exist since myGov could just use industry standard TOTP two-factor auth like the dozens of other services I use.
Aussie politicians once again conforming to their lucky country stereotype:
"Australia is a lucky country run mainly by second rate people who share its luck. It lives on other people's ideas, and, although its ordinary people are adaptable, most of its leaders (in all fields) so lack curiosity about the events that surround them that they are often taken by surprise."
I'm not saying the Aus government's online portals and services aren't top-tier dog shit, but doing an age token through myGov is the best approach, hopefully with enough pressure to make it non-shit.
The alternative is an acceleration of the negative cultural trends and atomisation we have now.
You don't get to cry about the negative effects of social media but also cry about censoring it/protecting an impressionable population from it at the same time.
Which the government doesn't care about. This may have something to do with the fact that people don't criticise the government when they are merely losing their life savings.
It's worth scrutinizing the philosophical mental model implicit in your opinion.
Do you wait for conclusive empirical evidence before doing anything? Or do you run an experiment in one country based on an informed opinion and see what happens?
I am more inclined to pursue the latter model for this question.
The case against youth social media makes logical sense, there is circumstantial evidence that it's having a negative impact, and I have enough experience with data to know how difficult it is to demonstrate that it's true empirically without a large-scale natural experiment like the one that's about to happen when this law passes.
A lack of evidence should not paralyze you on questions where conclusive evidence is very hard to assemble. Especially when action will create evidence.
Last I checked it was the parents' primary job to protect a child's emotional development. And yes some kids might not be fortunate enough to have caring parents but I'm pretty sure that alone would fuck them up more than social media. But hey let us continue to make the world a safe space lest Western parents actually parent their children.
" lest Western parents actually parent their children."
I don't understand this argument I keep hearing. What is your understanding of parenting that doesn't involve controlling what kids are exposed to? It sounds like you want to say parents should parent only in ways that don't burden non-parents. Why should that be the rule in a democracy?
Western parents currently spend way more time and effort directly "parenting" than used to be the historical standard. This jab is completely ridiculous.
Also, relatedly, it is a uniquely modern Western idea that a parent has to control everything alone, by himself, and keep the kid under perfect control every moment.
I wish it wasn't the case, but have you seen how emotionally retarded (correct use of the word) this generation of children is? Compare it to even 20 years ago. We wouldn't need to do this if more parents actually did their job. By the nature of the social media monoculture it's harder than ever to shield kids from anti-intellectualism. Every school basically has the same culture, the good and the bad.
> Youtube will still be accessible it is just the account making/usership which will be banned
Then what difference will it make in practice? Do the legislators really think that kids being able to comment on videos was the most harmful thing about the platform? YouTube will still be able to give you suggestions and send you down a rabbit hole of smoothbrain content even if you use it without an account.
> I support this- in part because I know those who want to get around enough or be private will always find a way, but it has a positive, reality affirming effect on the public.
It sounds like you admit that this has mostly signal value.
I really don't understand how you can support this.
It has some signal value, but it's also part of creating a culture where social media isn't mandatory in the greater society and the workforce (which has a lot of benefits), and of not exposing the next generation to the crushing double demoralisation of AI/machine learning and social media hyperconnectivity, which no generation has faced before and which could ruin them, imo.
I do believe in this so ask whatever tricky questions you want.
Not when the parents whose children lost their lives are the ones who organised the campaigns. These aren't the hypothetical wolves in sheep's clothing that emotionally vacuous and selfish digital-libertarian types love to salivate over; they are real people who have suffered real consequences.
So frustrating to spend years fighting against censorship; people protested in the streets when SOPA and ACTA were a thing, and now they are advocating for even more dangerous censorship. ACTA never became law, but internet censorship is at an unprecedented level in Europe (see Spain).
> Social media is undermining the fabric of our societies and destroying a whole generations emotional development and institutionalising a culture of infectious insecurity. I support this-
YOU are undermining the fabric of society.
With the excuse of "protecting children" you're trying to destroy the last semblances of privacy and the ability to dissent.
Fuck your using children as a shield. You're hurting them, like you did when you supported covid policies.
You don't help children by isolating them and censoring them and their parents.
Disgusting Propaganda of the lowest form. War on terror. War on drugs. War on disinformation.
I don't want privacy to be gone. I want a free internet to still exist for those who are educated enough to bypass firewalls and monitors; I would kill to have a knowledge-gated internet again. But I want barriers to harm for children. Why do we card kids for R18+ games but not the internet? It's fundamentally stupid and unhealthy for our society.
If everyone moved back to non algorithmically addictive forums and self segregated by age I would have no issues with that and wouldn't see the need for regulation. That is not the world we live in and we have so obviously seen people self select a terrible and damaging digital world that gives idiocracy a run for its money. Hysteria is sometimes a warranted reaction.
I think making social media illegal for children is an important step toward them reclaiming reality and re-separating the adult and child social worlds like they used to be. The implementation is the main concern for many, and I get that.
PS: this particular feature exists though.
https://support.google.com/youtubekids/answer/6172308?hl=en&...
2ish billion people, well known for their indirect spending power, are not worth figuring out a simple whitelist system for.
The answer is even shorter: money. Our society prioritizes "giant corporation makes money" over good things happening.
But why does the UI need to change? Nobody would miss having to relearn it every couple of months.
Which as a parent of a toddler is absolutely mind-numbing.
Which means that every so often he will end up encountering either some foreign-language content (borderline appropriate, if I want him to learn his native tongue first), or something with violence, etc. that is not at all appropriate, or some kid playing some dumb but colorful game (non-mind-enriching, pure dopamine garbage) from some rando channel.
PLUS, they seem to be abandoning YouTube Kids in order to merge its functionality into the main YouTube app, and yet...
Like, does ANYONE at Google/Alphabet/whatever actually have fucking kids?!?!?!
I paid for family YouTube just so my kid (and myself) wouldn't be forced to watch ads.
All they'd have to do here is let me ban FUCKING shorts (don't get me started... note that they intentionally made this extremely difficult to impossible, good luck blocking it at the router level), and whitelist some channels, and they'd instantly make every parent 100% happier with YouTube!
And no, I'm sorry but this would NOT be hard to build/maintain. Hell, they can hire me to build it out, I'm LFW!
1. https://jellyfin.org/
2. https://github.com/kieraneglin/pinchflat
I rarely have to touch it unless I'm adding a new playlist or channel
https://ytdl-sub.readthedocs.io/en/latest/introduction.html
It's been great, the kid can watch any channels on there she wants on her ipad with no ads or sponsored segments
- Can it limit the time range of videos to download? Some channels may have tens of thousands of videos.
- Can it automatically include the CC with the video? That's one of the main selling points of YouTube for me.
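For what it's worth, the underlying yt-dlp can do both of these; I'm not sure how ytdl-sub exposes them in its config, but a rough sketch with plain yt-dlp looks like this (channel URL and paths are just examples, and --embed-subs needs ffmpeg):

    # Only grab uploads after a cutoff date, and fetch/embed captions for each video.
    import subprocess

    CHANNEL = "https://www.youtube.com/@3blue1brown/videos"  # example channel

    subprocess.run(
        [
            "yt-dlp",
            "--dateafter", "20240101",            # skip anything uploaded before this date
            "--download-archive", "archive.txt",  # remember what has already been fetched
            "--write-subs", "--write-auto-subs",  # manual captions, auto-generated CC as fallback
            "--sub-langs", "en.*",
            "--embed-subs",                       # mux the captions into the video file
            "-o", "library/%(channel)s/%(title)s [%(id)s].%(ext)s",
            CHANNEL,
        ],
        check=True,
    )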
If you genuinely let users' preferences be taken into account, it's incredibly hard to make money from ads, because users' true preference is not to be shown ads at all.
The entire point of ads is to manipulate and change user preferences and behaviours.
So any preferences or customisation have to be minimal enough that they only partially implement user preferences. Whitelisting is a step too far against the purpose of YouTube.
Thus Google will always be biased to not letting you implement full customisability and user control.
Whether this is viable or not, I don't know. I'm not sure what the average take per person is from the current model.
Once they started masquerading ads as results, yeah, any ability for users to downrank or uprank became unworkable.
There's some truly great content on the platform, some of it even for kids. But it gets drowned out by mountains of algorithmic slop.
I have stopped giving my kid access to YouTube. Instead, I set up my own media server, filled it with pirated TV shows and movies I can curate, and give them access to that on the TV and iPad during their allowed screen times.
My opinion is that YouTube should be forced to permit third-party clients (interoperate). NewPipe and the various other clients are proof that there is a desire for alternative experiences and more toggles and options. Forcing users to identify themselves online to watch videos (or certain classes of videos) is a privacy nightmare, dystopic even.
uBlock Origin and SponsorBlock on Firefox. I also have an extension (forget the name) that blocks recommendations after a video. Disable autoplay.
There are also extensions that replace the home page with the subscriptions page.
But really, if BS exists on the internet, either your kids will find it or it will be shown to them. There's nothing you can do.
https://www.youtube.com/live/cN4EPsfBnq0?feature=shared
While also containing a huge amount of unboxing-toys crap that I would not include in my own watchlists for my kids.
Whitelisting: There is way too much appropriate content out there to whitelist it all. It's totally infeasible for a parent, unless you're planning to only approve a handful of channels, which makes YouTube pointless.
YouTube Kids: Teenagers are not "kids" and are not going to go onto YouTube Kids to watch Baby Shark and Mickey Mouse Clubhouse or whatever other kiddie stuff they have there.
Something else entirely is needed here.
edit: Oh neat, they do have a parental approval mode in there now. Last time I was in there they only let you set an age range for the content that you wanted. It still seems a bit weird though: I can select a channel from the list they are presenting me, but I can't search for an arbitrary channel to unlock. I'll have another look tonight though.
Now governments around the world are acting in unison to happily give those people what they want, and people are suddenly confused and pissed that these laws mean you need to submit proof that you're over 18. And instead of being an annoying checkbox that says "I'm 18. Leave me alone", it's needing to submit a selfie and ID photo to be verified, saved, and permanently bound to your every single action online.
People who asked for social media bans for kids got what they wanted. They'll have to live with the consequences for the rest of their lives. We all will.
To add to that, often no news is good news, or rather people won't bother posting about how they're glad minors can use social media freely, but once restrictions are in place they will quickly complain (because they prefer the old way).
I just learned a brand-new term for this: It's called the "Goomba Fallacy"[1]
[1]https://en.m.wiktionary.org/wiki/Goomba_fallacy
It is something worth pointing out.
The common theme in these statements is that people see “social media” as something that other people consume.
All of these calls for extreme regulations share the same theme: The people calling for them assume they won’t be impacted. They think only other people consuming other content on other sites will be restricted or inconvenienced, so they don’t care about the details.
Consider how often people on Hacker News object when you explain that Hacker News is a social media site. Many people come up with their own definition of social media that excludes their preferred social sites and only includes sites they don’t use.
The issue is everyone wants some quick and easy solution when the truth is we’re going to need to get much more intentional as a society about this. Take phone bans. Everyone wants to ban phones from schools/classrooms, but the truth is in a lot of places phones are already banned from school. But we’ve spent the last 3 decades taking away any power from teachers to enforce their rules so kids just do it anyway.
Bans on recommendation systems. Doesn't need much thought to figure out. Instant 90% harm reduction.
And leaked every 6 months, now including your ID photos and real name instead of an internet pseudonym, and lots of other sweet details that make extortion schemes child's play.
Even cooler would be if you create a different identity for each service so when they do leak, you know who leaked it. My first id would be for John Facebook Doe.
Best I can tell it came from a single but sustained pressure campaign by one of the Murdoch newspapers.
Then the Government gamed some survey polling to make it look like there was support for it (asking questions that assumed an impossible perfect system that could magically block under-16s with no age verification for adults). Still, over 40% of parents said that 15s and under should be able to access Facebook and Instagram, and over 75% of parents said they should be able to access YouTube, but the Government was acting like 95% of people were for blocking them, when it was closer to 50% of parents.
So a whopping 60% were asking for it!!!
Government in Australia is about being seen to be busy. Give them an idea that can't be morally contested, that the media won't contest, and they go about it.
Much like how we got our eSafety commissioner and internet bans. We protested them for years, but then sneaky ScoMo used Christchurch as a wedge and got it through without protest.
And as ever, our minor parties, especially the liberty-minded ones, are more concerned with what's in kids' pants than actual liberty.
FWIW I'm personally happy it's becoming a law
> According to the YouGov poll, seen by the dpa news agency, some 77% of respondents said they would either "fully" or "somewhat" support similar legislation in Germany.
I believe that there should be a standard, open framework for parental control at the OS level, where parents can see a timeline of actions, and need to whitelist every new action (any new content or contact within any app). The regulation should be that children are only allowed to use such devices. Social media would then be limited to the parent-approved circles only. A minor's TikTok homepage would likely be limited to IRL friends plus some parent-approved creators, and that's exactly how it should be.
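As a rough sketch of what the core of such a framework might look like (all names here are hypothetical, just to make the "timeline plus whitelist every new action" idea concrete):

    # Sketch of an OS-level approval flow: every new contact or piece of content an
    # app requests becomes a pending action that a parent must approve before the
    # OS lets the app show it. Invented names; not any real OS API.
    from dataclasses import dataclass, field
    from datetime import datetime, timezone
    from enum import Enum

    class Status(Enum):
        PENDING = "pending"
        APPROVED = "approved"
        DENIED = "denied"

    @dataclass
    class Action:
        app: str            # e.g. "tiktok"
        kind: str           # "contact" or "content"
        subject: str        # handle, channel, or URL being requested
        status: Status = Status.PENDING
        requested_at: datetime = field(default_factory=lambda: datetime.now(timezone.utc))

    class ParentalGate:
        def __init__(self):
            self.timeline: list[Action] = []

        def request(self, app: str, kind: str, subject: str) -> Action:
            """Called by the OS when an app tries to surface something new."""
            action = Action(app, kind, subject)
            self.timeline.append(action)
            return action

        def review(self, action: Action, approve: bool) -> None:
            """Parent decision; only approved items ever reach the child."""
            action.status = Status.APPROVED if approve else Status.DENIED

        def is_allowed(self, app: str, kind: str, subject: str) -> bool:
            return any(
                a.status is Status.APPROVED
                and (a.app, a.kind, a.subject) == (app, kind, subject)
                for a in self.timeline
            )

    gate = ParentalGate()
    req = gate.request("tiktok", "contact", "@school_friend")
    gate.review(req, approve=True)
    assert gate.is_allowed("tiktok", "contact", "@school_friend")

The key design property is deny-by-default: nothing new reaches the child until a parent has explicitly approved it.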
There's no need for any regulations here and never was. It was always a power grab by governments and now the people who trusted the state are making surprised pikachu faces. "We didn't mean like this", they cry, whilst studiously ignoring all the people who predicted exactly this outcome.
Admittedly, at some point they reach their teenage years and should have a right to privacy, so even having access to a timeline of actions seems like a no-go to me. The same way they can wander off in the street on their own, write private letters to people, or have private calls with friends.
The funny thing is hearing adults shout that kids suffer from social media use and bla bla bla, when the same people have been ruining their relationships with their life partners, their families, and even their whole lives for years by spending way too much time in front of TVs and computers and by doomscrolling all day on Instagram and TikTok.
I don't understand how these people all act as if only children need to be saved. Banning stuff for children won't even work if the only example of adulthood they have is people with a hunchback staring lifelessly at a small screen in the palm of their hand all day.
In other words, they're not saying "it's okay when I do it but not kids", they're saying "even as an adult it's impacting me, let's not poison kids"
No silly age IDs and selfies, no unstable and unsafe procedures, no permanent damage.
Government: lol, every HTTP request must include your government ID, period :)
In the late 90s and early 2000s we as teenagers had access to an unfiltered, unregulated internet. The harm to us was largely moral fanaticism; this was when they also tried to ban video games because of violent content, and now we have complete censorship and control over which games can or can't sell on Steam.
Much of the panic over social media, amplified by protestants and religious people, is greatly exaggerated. Porn isn't the danger; it's the addictive tendencies of the individual that must be educated about.
We beat the moral panic last time and kept our freedoms. This time I'm not so certain that we will prevail: there seems to be a coordinated/unified effort behind this widespread surveillance, and my hunch tells me the rise of authoritarianism around the world is the driver. It's much easier to oppress a population in a surveillance state. The "for the children" argument is as old as time.
The internet was somewhat social in the 90's and early 2000's.
The institutions largely being affected here did not exist then.
I get your point but I don't agree.
I mean, politicians back then were actually right in assuming that danger looms on the Internet. They just were completely wrong about what was the danger. Everyone and their dog thought that the danger was porn, violent video games (Columbine and Erfurt certainly didn't help there), gore videos (anyone 'member RottenCom), shocker sites (RIP Goatse), more porn, oh and did I say they were afraid of boobs? Or even of cars "shaking" when you picked up a sex worker in GTA and parked in a bush?
What they all missed though was the propaganda, the nutjobs, the ability of all the village idiots of the entire world that were left to solitude by society to now organize, the drive of monetization. That's how we got 4chan which began decent (Project Chanology!) but eventually led to GamerGate, 8chan and a bunch of far-right terrorists; social media itself fueled lynch mobs, enabled enemy states to distribute propaganda at a scale never before seen in the history of humanity and may or may not have played a pivotal role in many a regime change (early Twitter, that was a time...); and now we got EA and a whole bunch of free to play mobile games shoving microtransactions down our children's throats. Tetris of all things just keeps shoving gambling ads in your face after each level. The kids we're not gonna lose to far-right propaganda, we're gonna lose to fucking casinos.
We should have brought down the hammer hard on all of that crap instead of wasting our energy on trying to prevent teenagers from having a good old fashioned wank.
Delivering safety is a necessary condition for preserving liberty. It is not a nuisance or a side quest.
This isn't the right way to characterise what happened. Governments are doing this in unison; it is a coordinated campaign that has been obviously coming for a couple of years. Remember that governments wanted to act against misinformation? Well, this is it. The deanonymised internet. Aus, UK, US, etc: it's on the way.
What you are seeing with certain comments etc. is probably a lot of genuine comments primed by stories of cases where ID would have apparently prevented something-or-other, along with comments from agents and bots. This is how modern governance actually works.
There is a goal (here, it's the deanonymised internet), then the excuse (children, porn, terrorists), then the apparent groundswell of support (supportive comments on HN, etc.), then actual comments that validly complain this is dystopian but go nowhere (auto-downvoted or memory-holed by mods), which gives most people the impression that no one really cares and this should simply be accepted. So a difficult idea, managed correctly, can get past everyone with a minimum of fuss.
Some with kids will praise and use it as intended. Many with kids won't. Those without kids won't. All in return for the ultimate in monitoring.
And then people will work around it in various ways. Use forums or chat-group apps that don't comply with the law as intended. Share videos in other ways.
This whole shebang is pointless for enforcement and scary for authoritarianism - worst of both worlds.
These are also the people who have essentially outsourced a lot of upbringing of their kids to the govt. They couldn't be bothered with the nuances of the lives their kids lead.
People want these laws simply because it's hard to say no to your kids, and it's a lot easier to tell your kids it's the government's fault they can't use social media any more.
I guess I'm fine with not visiting any of these age-restricted sites. They're not the thing I would miss if the whole internet shut down. (In fact, there's precious little I would miss — maybe just archive.org?)
"But sir! The largest websites on the internet implement Government ID Age Check. Just federate with one of those, why are you complaining so much? Don't you want to protect the children or stop anti-Semitism or something?"
> People who asked for social media bans for kids got what they wanted.
This is BS and not productive. We can do better.
There's zero difference. Either way, the government will have you monitoring your every single little comment online, with everything forever tied to your person. And that'll have a chilling effect on individual liberties.
Which “gov site”? Registering for voting does not give you an electronic log in of any kind.
Awful idea.
This gives the government the power to deny you access to mass communication by deciding that you're no longer allowed to verify with these platforms.
"Been protesting the wrong things? Been talking about the wrong war crimes? Been advocating for the wrong LGBT policies? Failed to pay child support? Failed to pay back-taxes? Sorry you're no longer eligible for authenticating with social media services. You're too dangerous."
That is not beyond the pale for the Australian government.
You're also at the mercy of them to actually adhere to the "no logging" part, with absolutely no mechanism to verify that. And it can be changed at any time, in targeted ways, again with no way for you to know.
A better idea would be to sell anonymous age verification cards at adult stores, liquor stores, tobacco stores, etc. Paid in cash. An even better idea is to not do any of this and spend the money on a campaign to educate parents and institutions on how to use existing parental controls.
The trivial workaround is for people to create ad supported websites to hand out those tokens.
If there’s no logging then they can’t determine who’s abusing it or if they’ve even generated a different token recently, so people can generate and hand out all the tokens they want.
So then the goalposts move again, and now there’s some logging in this hypothetical solution to prevent abuse, but of course this means we’ve arrived at the situation where accessing any website first requires everyone to do a nice little logged handshake with the government to determine if they have permission. What could go wrong?
The real workaround is for people (including kids) to buy themselves a VPN subscription for a couple of bucks per month and leave all of this behind, while the old people are left jumping through hoops.
Channels like Cocomelon and AI-generated songs with weird visuals are played on infinite loop, with a mobile stand holding the phone in front of the child's pram while the parents pay no attention, and the children are hooked on it as if hypnotized.
These videos, in the early stages of childhood, have a very strong impact on children's environmental awareness and vocabulary.
That said, I don't see any bans changing this, parents will just give young children access to their own "adult" YouTube account.
I just managed to navigate the entire preschool age range without my children seeing a single Cocomelon video on YouTube. It's surprisingly easy, and it makes me really wonder why people are complaining. It's as if they feel like they have to show these videos to their kids or something.
Don't people have a slop filter? Or are they just opening the YouTube Kids app and blindly handing their phone to a preschool child to watch whatever they want?
Yet their kid demands attention. So they put the phone in front of them to be able to do whatever they needed to do.
I don’t really blame them, in today’s economic climate there are a lot of people who have to struggle every waking second to get by.
It really doesn't matter what "they" said about books. We are talking about screen time. And screen time has measurably harmful effects on child development.
It leads to worse outcomes across the board. Sleep disorders. Obesity. Mental health disorders. Depression. Anxiety. Decreased ability to interpret emotions. Aggressive conduct. And this is to say nothing of ADHD (7.7 times higher likelihood in the heaviest screen users) or social media's effects on adolescents. [1][2]
[1] https://pmc.ncbi.nlm.nih.gov/articles/PMC10353947/
[2] https://www.webmd.com/add-adhd/childhood-adhd/childhood-adhd...
The real kicker to me is that the government has passed a law restricting access, yet they haven't determined how they're going to enforce an age check. It's wild that they passed a law without consideration of its mechanics or feasibility.
It's not. Much of the world's governments (particularly those that follow the UK system) implement smaller laws and then delegate the implementation to statutory instruments/secondary legislation, written by experts and then adopted by ministers.
https://en.wikipedia.org/wiki/Primary_and_secondary_legislat...
(Australia included)
It seems suboptimal, but then so does the alternative of a "big beautiful bill" full of absurd detail where you have people voting it into law who not only haven't fucking read it but are now not ashamed that not only have they not fucking read it, nobody on their staff was tasked with fucking reading it and fucking telling them what the fuck is in it.
Lighter weight laws that establish intent and then legally require the creation of statutory instruments tend to make things easier, particularly when parliament can scrutinise the statutory instruments and get them modified to better fit the intent of the law.
It also means if no satisfactory statutory instrument/secondary legislation can be created, the law exists on the books unimplemented, of course, but it allows one parliament to set the direction of travel and leave the implementation to subsequent parliaments, which tends to stop the kind of whiplash we see in US politics.
ETA: for example, the secondary legislation committee in the UK, which is cross-party, is currently scrutinising these:
https://committees.parliament.uk/committee/255/secondary-leg...
In this case, basically all the tech experts and child safety experts were saying that a blanket ban is not a workable policy, and could create harms in certain marginalised demographics where teens may rely on social media for support, yet the Government ignored them all and ploughed ahead.
The only changes to the legislation came from some political horse trading with the Opposition to get it through the Senate.
It’s clearly social media. It consists of user-generated content and has discussion features.
There’s a big problem with tech people coming up with their own definition of social media that exclusively includes sites they don’t use (TikTok, Facebook) but conveniently excludes sites they do like (YouTube, Discord, Hacker News). This makes them think extreme regulation and government intervention is a good thing because it will only impact the bad social media sites that they don’t want other people accessing. Then when the laws come out and they realize it impacts social media regardless of whether you like it or use it, they suddenly realize how bad of an idea it was to call for that regulation.
I predict it won't even matter. This law is unenforceable in practice. There is nothing that a bored and highly-motivated teenager who has hours after school to fuck around, won't be able to circumvent. I think back to my teenage years: None of the half-assed attempts made to keep teenagers away from booze, cigarettes, drugs, or porn even remotely worked. These things were readily available to anyone who wanted them. If there is an "I am an adult" digital token, teenagers will easily figure out how to mint them. If the restrictions can be bypassed with VPNs, that's what they will do.
Is it? As far as I can tell, the definition of social media is a platform where it is trivial to publish to it. That definitely fits YouTube.
The fact that there is great educational content on it (and I 100% agree that there is great educational content) is pretty much solely due to a passionate community, not anything YouTube itself does to prioritize that kind of content. In fact, as far as I can tell it's harder
I'm sure their approach to enforcement will be something along the lines of relying on the websites to sort it out and fining them if they don't. The govt doesn't need to enforce the age check themselves or even provide or suggest a mechanism.
I imagine any smaller players in this market will just stay away from having an official presence in Australia.
"The govt doesn't need to enforce the age check themselves or even provide or suggest a mechanism."
I suppose it will be up to the courts to decide what is reasonable as an age check. However, the government has said that they don't want to include full ID checks, which is why one would assume they would provide guidance on how to comply.
Yes, but it's also unregulated and full of shit. Moreover, it's designed to feed you more of the stuff that you like, regardless of the consequences.
For adults, that's probably fine (I mean it's not, but that's out of scope); for kids, it'll fuck you up. Especially as there isn't anything else to counteract it. (Think back to when you had that one mate who was into conspiracy theories. They'd get books from the library, or some dark part of the web. But there was always the rest of society to reinforce how much it's all bollocks. That doesn't exist now, as there isn't a canonical source; it's all advertising clicks.)
This is entirely Google's issue to fix. Yes, YouTube has amazing educational content. I'd really like to make it available for my kids to see.
YouTube, however, makes it completely impossible to permanently filter/hide/disable the bane that is YouTube Shorts. I don't let my kids on TikTok not because it's Chinese, but because it's trash. I don't allow them near Instagram either.
The chances of kids growing an attention span by seeing interesting stuff in installments of 30 seconds approach zero really, really fast. Yes, there's the possibility of telling a fun joke, demonstrating an optical illusion, or showing some interesting curiosity in under a minute. But it's far more likely that it's trash, teaching kids (and adults) that if something doesn't give them a kick within the first 10 seconds, it should be skipped.
And it's not necessarily about age/quality rating of content; UX matters. It's totally different to find that your kid wasted an hour of their life doom-scrolling through 150 videos of which they didn't even finish half, or that they spent it watching half a dozen videos of dubious quality: if it's half a dozen, it's at least feasible to discuss with them why some are better than others.
So, I'm very close to just banning YouTube (at the DNS level if required). Which is a shame, because I then can't share the interesting stuff with them, and neither can their teachers.
Imagine you're the one running a business where you keep repeatedly trying to shove some feature down your users' throats.
What's that called in business school? I don't know, I never took any Business courses.
That I have nowhere else to go to see the content I want smells like a de facto monopoly.
No idea why, but it works and it's blissful. Plus you can still like videos, subscribe to channels and curate your own lists if you want to bookmark stuff to come back to.
We could actually mandate that certain types of filtering features be implemented and available to users.
You can absolutely write laws which are aimed at ensuring user choice and agency are preserved.
This legislation and the broader idea of bans are none of that.
This isn't a "real life" thing - it's not like there's a strip club with open windows next door to your house for your children to look into. We're talking about a computer/iPad/mobile phone - block YT at the DNS level or better yet, don't even give one to your kids. Problem solved.
Other people shouldn't have to be punished with breaches of their privacy because some people can't manage their child's online time.
www.youtube.com##ytd-rich-section-renderer.ytd-rich-grid-renderer.style-scope:nth-of-type(1)
www.youtube.com##ytd-rich-section-renderer.ytd-rich-grid-renderer.style-scope:nth-of-type(2)
www.youtube.com##ytd-rich-section-renderer.ytd-rich-grid-renderer.style-scope:nth-of-type(4)
www.youtube.com##ytd-guide-entry-renderer.ytd-guide-section-renderer.style-scope:nth-of-type(2)
1. Had no opaque algorithmic feeds
2. No comment sections
3. Have a "show me more content like this" button, but again, no auto algorithmic feeds
4. Filter out age inappropriate content.
would be great for teenagers. I think the problem for YouTube is that it would be great for everyone else, too, so they'd get bombarded by "Hey, I want that version" requests, which would clearly make them less money.
There is no moral high ground with basically any online platforms, it's all solely based on financials, and people should realize this.
What kind of content would you envision being shown? Say I want to watch more car review videos.
See also: Facebook "efforts" to stop scam advertisements and Marketplace fuckery
There are people at YouTube/Google/Alphabet who care but at the end of the day we get what the invisible hand gives us. Market forces have not yielded a well-curated educational video experience on YouTube.
1. https://www.3blue1brown.com/#lessons
But you don't need an account to watch most videos on youtube, so this isn't banning all of youtube.. right?
And it's almost purely bathwater that gets put in my face on the YT front page. The occasional baby pops up.
(as someone who rarely logs in, and only with a couple of throw away-ish accounts because I don't like being tracked and don't like YT/Google - so this will affect my perception of the baby:bathwater ratio)
The major outcome of this legislation should be nothing more than Australian kids becoming the most familiar with VPNs, and very little else, along with other tricks to bypass this.
[1] https://en.wikipedia.org/wiki/Elsagate
They have one with us.
Laws created based on parents' inability to explain something to their kids are invariably shit.
One of the studies: https://pmc.ncbi.nlm.nih.gov/articles/PMC10476631/
Not in the idealistic sense that you imply, so this has always been normalized; variations of such policies have always been implemented.
Particularly highly religious parents, like those in Utah.
Next time don't do that.
Cute. Let's see the reviews for an existing Australian government auth app: https://play.google.com/store/apps/details?id=au.gov.mygov.m...
https://www.conspicuouscognition.com/p/the-case-against-soci...
Meanwhile Australia has the largest per capita losses on gambling in the world.
https://www.aihw.gov.au/reports/australias-welfare/gambling
This is basically the village stepping up albeit in the dumbest way imaginable.
There is a name for this tactic: emotional blackmail.
Ghoulish