Not just TikTok, I checked out SnapChat because my kid is the last one in his class not to have it (according to him). The first video I see is two people falling off an e-bike, which looks pretty painful; then someone making fun of a person with Down syndrome; then some weirdly squirming middle-aged women with duck faces; and then some very young ones (pretending to?) * off someone off-screen while staring into the camera.
Also, I denied all access, but it still suggested all my son's friends? How? Oh, and it won't even start without camera access.
I was pretty shocked. Still, a friend of mine, a teacher, tells me: you can't let your kid not have SnapChat, it's very important to them.
The Chinese apparently say: just regulate! TikTok in their country is fun, even educational, with safeguards against addiction, because they mandate it. Somehow we don't want that here? We see it as overreach? Well, I'm ready for some overreach (not ChatControl overreach, but you get what I mean). We leave it all up to the parents here, and all parents say: "Well, my kid can't be the only one not to have it."
Meanwhile, the kids I speak to tell me they regularly see vape shops popping up on SnapChat: some dudes sell candy-flavored vapes (outlawed here) until the cops show up.
Yeah, we also did stupid things, I know; we grew up, found porn books in the park (pretty gross in retrospect), drank alcohol as young as 15, etc. I still feel this is different. We're just handing it to them.
Edit: Idk if you ever tried SnapChat, but it is TikTok, chat, weird AI filters, and something called "stories", which for me features a barely dressed girl in a sauna.
>I was pretty shocked. Still, a friend of mine, a teacher, tells me: you can't let your kid not have SnapChat, it's very important to them.
Yeah, it's OK to say no.
If the kid wants a phone and snapchat, there's nothing wrong with saying you simply won't be supplying that and if they want it they'd best figure out how to mow lawns. If you're old enough to "need" a phone you're old enough to hustle some yardwork and walk to the T-Mobile store yourself.
It's an unfortunate situation where they will be ostracized for lack of participation in social media like Snapchat or TikTok. Children ostracizing those who don't fit in has been a thing forever, but has been thrown into overdrive by ubiquitous social media usage by children.
I don't think making a kid work for the phone is the solution here. The problem is intentionally addictive algorithms being given to children, not a lack of work ethic regarding purchasing a phone.
> I still feel this is different. We're just handing it to them.
I think you are right to be worried, and I think you are correct that it is different:
IIRC, there were some Kremlin leaks some years ago indicating they knew how to "infect" a population with certain propaganda and have the disinformation live on or linger. Add to that Meta's/Facebook's (illegal?) study where they experimented on people to try to make them sad by showing them certain types of posts.
So, I think it stands to reason that controlling what you consume means being in control of what you think; in other words: we are what we watch.
We know there are some feedback loops at work, but I think the pressure to fit in makes it easier to get desensitized and grow accustomed to very extreme content; and once one has participated, it might be even harder to be deprogrammed (it requires facing the fact that one behaved wrongly towards others).
There's also the fact that being a good person takes a lot of willpower and dedication, is inconvenient, and is notoriously difficult to market as "fun".
It is more palatable for an impressionable kid to watch cheap foreign-state-backed radicalizing propaganda than it is to learn about injustices being perpetrated on our behalf by the state apparatus.
We have developed the habit of being wary of what we consume in order to police our emotions (i.e. minding our minds so no desensitization happens on our watch).
We have seen what the "baddies" can do: the indifference to the suffering they cause, and the cruelty and pettiness they are capable of.
But I digress... I think you are right to be worried, but I am unsure about how to train kids not to fall into the pipelines.
Or it's our state apparatus that doesn't want teenagers seeing the injustices they're perpetrating, and the "think of the children" argument is being pushed right now to hide videos of what's going on in Chicago with ICE, and elsewhere.
"Safety" is how it was originally billed: your kids can call you if they get in trouble. They also created apps that let parents spy on where there kids were.
Why would kids just not immediately switch to something else? This reads like a parent saying video games should only be educational because of course the kid only cares about it being a video game and not the content.
It works in China because they have chat control to the extreme.
> After a “small number of clicks” the researchers encountered pornographic content ranging from women flashing to penetrative sex.
I (40m) don't think I've ever seen literal flashing or literal porn on TikTok, and my algorithm does like to throw in thirst content between my usual hobby stuff.
Are they making the claim that showing porn is normal behavior for TikTok's algorithm overall, or are they saying this is something specifically pervasive with child accounts?
Does TikTok direct what you see based on what other accounts you interact with are interested in? I would expect teenagers to have a different interest profile than your average 40 year old. I would expect algorithms to more or less unwittingly direct you to the kind of stuff your peers were interested in.
TikTok’s recommendations are based on as much info as it can get, really.
Approximate location, age, mobile OS/browser, your contacts, which TikTok links you open, who generated the links you open, TikTok search history, how long it takes you to swipe to the next video on the for you page, etc.
I don’t think it's really possible to say what TikTok’s algorithm does “naturally”. There are so many factors influencing it (beyond the promoted posts and ads which people pay TikTok to put in your face).
If you sign up to TikTok with an Android and tell it you’re 16, you’re gonna get recommended what the other 16 year olds with Androids in your nearby area (based on IP address) are watching.
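To make that "no natural algorithm" point concrete, here's a toy sketch, entirely my own invention and not anything from TikTok, of how a feed ranker might fold a few of those signals into a single score per candidate video. Every field name and weight below is made up for illustration:

```python
# Toy ranker sketch: combines a personal signal (dwell time per topic),
# a peer signal (what the user's demographic cohort watches), and an
# explicit-intent signal (recent searches). All names/weights are invented.

from dataclasses import dataclass, field

@dataclass
class Candidate:
    video_id: str
    topic: str
    cohort_watch_rate: float  # e.g. how much "16-year-olds on Android nearby" watch this topic

@dataclass
class UserProfile:
    age_bracket: str
    region: str
    avg_dwell_by_topic: dict = field(default_factory=dict)  # seconds lingered per topic
    recent_searches: list = field(default_factory=list)

def score(user: UserProfile, c: Candidate) -> float:
    personal = user.avg_dwell_by_topic.get(c.topic, 0.0)          # how long this user lingers on the topic
    peer = c.cohort_watch_rate                                     # what the cohort is watching
    intent = 1.0 if any(c.topic in q for q in user.recent_searches) else 0.0
    return 0.5 * personal + 0.3 * peer + 0.2 * intent

def rank(user: UserProfile, candidates: list) -> list:
    # Highest score first: the feed is whatever this ordering says it is.
    return sorted(candidates, key=lambda c: score(user, c), reverse=True)
```

The point of the sketch is that there is no "default" output: change the weights, the cohort, or the dwell history, and you get a different feed.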
You think thirst traps are okay for kids? Rewind time a bit: the Girls Gone Wild commercials weren't supposed to be even remotely possible on certain channels.
We’re a derelict society that has become numb, “it’s just a thirst trap”.
We’re in the later innings of a hyper-sexualized society.
Why it’s bad:
1) You shift male puberty into overdrive
2) You continue warping young female concepts of lewdness and body image, effectively “undefining” it (lewdness? What is lewdness?).
3) You also continue warping male concepts of body image
This content isn't as overt as it may seem; maybe you did come across it and just didn't notice the flashing. Those "in the know" (generally younger people whose friends told them about flashtok) know what to look for.
Also: kids click on links adults ignore without thinking. Our brains have built-in filters for avoiding content we don't want; for kids, everything is novel.
I wonder when this study happened? FWIW, there was some pretty intense bombing of full-on nudity content onto TikTok a month or two ago--it all looked like very automated bot accounts that were suddenly posting fully nude scenes cut out of movies--and I saw a number of people surprised it was showing up in their feeds. It felt... weaponized? (And it did not last long at all, FWIW: TikTok figured it out. But it was intense and... confusing?)
>Are they making the claim that showing porn is normal behavior for TikTok's algorithm overall, or are they saying this is something specifically pervasive with child accounts?
The latter is what they tested, but they didn't claim it's specifically pervasive.
You quote the article, so it seems like you looked at it, but the questions you're curious/skeptical about are things they address in the opening paragraphs. It's fine to be skeptical, but they explain their methodology, and it's different from the experience you're relying on:
>Global Witness set up fake accounts using a 13-year-old’s birth date and turned on the video app’s “restricted mode”, which limits exposure to “sexually suggestive” content.
>Researchers found TikTok suggested sexualised and explicit search terms to seven test accounts that were created on clean phones with no search history.
>The terms suggested under the “you may like” feature included “very very rude skimpy outfits” and “very rude babes” – and then escalated to terms such as “hardcore pawn [sic] clips”. For three of the accounts the sexualised searches were suggested immediately.
>After a “small number of clicks” the researchers encountered pornographic content ranging from women flashing to penetrative sex. Global Witness said the content attempted to evade moderation, usually by showing the clip within an innocuous picture or video. For one account the process took two clicks after logging on: one click on the search bar and then one on the suggested search.
Yeah, I (50m) have never encountered literal porn on TikTok. Suggestive stuff, thirst traps, sex ed, sex jokes, yes, but no literal porn or even nudity.
A soccer mom I know shared that she once tried TikTok. Within seconds of installing the app, the algorithm was showing nsfw content. She uninstalled it.
I assume that the offending content was popular but hadn’t been flagged yet and that the algorithm was just measuring her interest in a trending theme; it seems like it would be bad for business to intentionally run off mainstream users like that.
After reading some of the article, it seems to me they're saying that on a restricted account with the birthday of a 13-year-old, following the search terms TikTok itself suggests, a few clicks gets you to actual porn.
Really? I've signed up to Bluesky and TikTok, and on both I've seen literal porn extremely early without engaging directly (such as liking or responding; scrolling speed could be a factor).
All of these apps are 100% using your scroll speed/how long you spend engaging with the content as a data point. After all, "time spent engaging with the content" is the revenue driver.
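A minimal sketch of the kind of client-side measurement being described, assuming hypothetical event hooks (no real app's SDK is shown here): the app only needs two timestamps per video to turn scroll speed into an engagement signal.

```python
# Dwell-time tracker sketch: records how long each video stays on screen.
# A fast swipe yields near-zero dwell; lingering or rewatching yields a
# large value, which a ranker can consume as an engagement signal.

import time

class DwellTracker:
    def __init__(self):
        self.current = None   # (video_id, start_time) for the video on screen
        self.events = []      # list of (video_id, seconds_on_screen)

    def on_video_shown(self, video_id: str):
        self.on_video_hidden()                      # close out the previous video first
        self.current = (video_id, time.monotonic())

    def on_video_hidden(self):
        if self.current:
            video_id, start = self.current
            self.events.append((video_id, time.monotonic() - start))
            self.current = None
```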
Looks like basic thirst trap content, probably not included to maximize their rage baiting. I don't understand why these puritans don't just move to the Middle East. They instead ruin the internet for the rest of us.
If you consider "skimpy outfits" pornographic, then both Facebook and X are worse than TikTok for me. I've seen a few pieces of content I had to report before, but not many.
X, on the other hand, has literal advertisements for adult products on my feed, and I get followed by "adult" bot accounts several times a week which, when I click through to block them, often show me literal porn. Same with spam Facebook friend requests.
I think it boils down to the simple fact that trying to police user-generated content is always going to be an uphill battle, and it doesn't necessarily reflect on the company itself.
> Global Witness claimed TikTok was in breach of the OSA, which requires tech companies to prevent children from encountering harmful content...
OK, that is a noble goal, but I feel the gap between "reasonable measures" and "prevent" is vast.
> I think it boils down to the simple fact that trying to police user-generated content is always going to be an uphill battle, and it doesn't necessarily reflect on the company itself.
I think it boils down to the simple fact that policing user-generated content is completely possible, it just requires identity verification, which is a very unpopular but completely effective idea. Almost like we rediscovered, for the internet, the same problems that need identity in other areas of life.
I think you will also see a push for it in the years ahead. Not necessarily because of some crazy new secret scheme, but because bots will be smart enough to beat most CAPTCHAs and other techniques, and AI will be too convincing, causing websites to be overrun. Reddit is already estimated to be somewhere between 20% and 40% bots. Reddit was also caught with its pants down by a study recently, with AI bots on r/changemyview racking up ridiculous amounts of karma undetected.
I'm not convinced that will fix the problem. Even in situations where identity is well known such as work or school, we commonly have bad actors.
It's also pretty unpopular for a good reason.
There is a chilling effect that would go along with it. Like it or not, a lot of people use these social platforms to be their true selves when they can't in their real life for safety reasons. Unfortunately for some people their "true self" is pretty trashy. But it's a slippery slope to put restrictions (like ID verification) on everyone just because of a few bad actors.
Granted, I'm sure there's some way we could do that while maintaining moderate privacy, but it's technologically challenging, and I'm not alone in wanting tech companies to have less of my personal information, not more.
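For what it's worth, the "moderate privacy" middle ground people usually sketch looks something like this: an issuer who already knows your age signs an "over 13" token bound to a random pseudonym, and the platform verifies the signature without learning who you are. This is just an illustrative sketch of the general idea using the Python cryptography package, not any deployed scheme; real proposals lean on blind signatures or zero-knowledge proofs so even the issuer can't link the token back to you.

```python
# Age-attestation sketch: the issuer signs an "over 13" claim tied to a
# random pseudonym; the platform checks the signature against the issuer's
# public key and never sees the user's real identity.

import os
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey
from cryptography.exceptions import InvalidSignature

# --- identity provider side (has verified the user's age out of band) ---
issuer_key = Ed25519PrivateKey.generate()
pseudonym = os.urandom(16).hex()          # random, not derived from identity
token = f"over13:{pseudonym}".encode()
signature = issuer_key.sign(token)

# --- platform side (only knows the issuer's public key) ---
issuer_pub = issuer_key.public_key()
try:
    issuer_pub.verify(signature, token)
    print("age attestation accepted for pseudonym", pseudonym)
except InvalidSignature:
    print("attestation rejected")
```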
Heh, on Facebook you don't even need any clicks. I logged in after a few years, and the first video among the Facebook shorts (or whatever it's called) was a woman removing her underwear.
You weren't ostracized for not engaging on social media because it didn't exist.
In 10-15 years Gen Z will be complaining about how their generation didn't need to have the most expensive AI boyfriend/girlfriend to avoid getting bullied, or something ridiculous like that.
> I don't think making a kid work for the phone is the solution here. The problem is intentionally addictive algorithms being given to children, not a lack of work ethic regarding purchasing a phone.
It can make calls. It can send/receive basic text messages.
The most addictive thing on that type of phone, if it can even be installed, is Snake.
Utilitarianism wins. Social media companies lose. I'm fine with that. The kid can still communicate with their parents at a moment's notice.
Sure, it's no violation, but I bet that app is pedo central, and many kids even share their location with anyone. Parents have no clue.
Giving them smartphones? Moronic idea at best.
> It works in China because they have chat control to the extreme.
Not really, they just tell the company to behave in this case. But yes they do have ChatControl to the extreme as well.
> I (40m) don't think I've ever seen literal flashing or literal porn on TikTok, and my algorithm does like to throw in thirst content between my usual hobby stuff.
It might be because I always block anyone with an OF link in their bio, but then that policy doesn't work on Insta.
They are in the business of whipping up outrage, and should not be given any oxygen.
Clicking on thirst trap videos?
https://globalwitness.org/en/campaigns/digital-threats/tikto...
I don't know why news sites don't link to the source, but that's another discussion.
The world is hostile and full of exploitation. It is no different on the internet.
The internet is full of "bad" (or at least undesirable under some circumstances) content. It's there on the internet; we kinda accept its existence.
Then we train AI on it, and we're upset if it regurgitates it, so we have to add "safety".
Meanwhile social media sends you right to it ...