I think this is... fine? Am I just totally naive? I think it's fine to say "You don't really have privacy on this app" - as long as there are relatively good alternative apps that do have privacy (and I think there are). TikTok is really a public-by-default type of social media; there's not much notion of mutual following or closed groups. So sure, you don't have privacy on TikTok; if you want it, you can move to Snapchat or Signal or whatever platform of your choice.
Like, it's literally a platform that was run under the watchful eye of the CCP, and now the US version is some kleptocratic nightmare, so I just don't see the point in expecting some sort of principled stance out of them.
In some ways I think it's worse for places like Facebook to "care about privacy" and use E2EE but then massively under-resource policing of CSAM on their platform. If you're going to embrace 'privacy' I do think it's on you to also then put additional resources into tackling the downsides of that.
In my experience most forums have private messaging.
Additionally I think it is fine to say "we don't support e2ee". I prefer honesty to a bad (leaky) e2ee implementation, at least the user can make an informed choice.
Lots of other consumer services such as Strava have direct messaging without e2e encryption. No privacy is guaranteed. This is fine, they're not deceiving anyone about how it works.
Adding that private, self-hosted forums can permit uploads of encrypted files, encrypted with a pre-shared secret or with a secret shared over a private, self-hosted Mumble voice chat server.
> as long as there are relatively good options of apps that do have privacy (and I think there are)
Once you have an enormous network effect like TikTok has, you don't really have any free selection of alternative apps. You are free to use one, but you will be the only sad user over there.
Regulations are needed that would force large platforms like TikTok and Instagram to enable federation, opening them up to actual competition. This way platforms would be able to compete on monetisation and usability, instead of competing on locking in their precious users more strictly.
> MySpace is well on the way to becoming what economists call a "natural monopoly". Users have invested so much social capital in putting up data about themselves it is not worth their changing sites, especially since every new user that MySpace attracts adds to its value as a network of interacting people.
> "In social networking, there is a huge advantage to have scale. You can find almost anyone on MySpace and the more time that has been invested in the site, the more locked in people are".
Federation would never work. How would it work here? Either you are forcing TikTok to give pageviews to federations of spam, or you are letting TikTok decide which federations to work with, which essentially results in no federation.
Lolololol. No, not regulations. Regulators. With the people we currently have voted into office in the US the only regulations we are going to get are ones saying Sam and Peter must look at everything you do all the time.
Until we stop voting for more authoritarianism, expect ever increasing amounts of authoritarianism.
Fine with me too. I think many other apps (WhatsApp, FB, etc.) are using E2EE for PR purposes and are not actually good implementations of E2EE.
Good implementations of E2EE:
1. Generate the key pairs on device, and the private key is never seen by the server nor accessible via any server push triggered code.
2. If an encrypted form of the private key is sent to the server for convenience, it needs to be encrypted with a password with enough bits of entropy to prevent people who have access to the server from being able to brute force decode it.
3. Have an open-source implementation of the client app facilitating verifiability of (1) and (2)
4. Permit the users to self-compile and use the open-source implementation
If a company isn't willing to do this, I'd rather they not call it E2EE and dupe the public into thinking they're safe from bad actors.
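Point (2) above can be made concrete with a back-of-the-envelope calculation. This is a rough sketch only; the 1e10 guesses/second figure is an assumed attacker speed for illustration, not a measurement of any real system:

```python
import math

def entropy_bits(alphabet_size: int, length: int) -> float:
    # A password of `length` symbols drawn uniformly at random from an
    # alphabet of `alphabet_size` carries length * log2(alphabet_size) bits.
    return length * math.log2(alphabet_size)

def years_to_search(bits: float, guesses_per_second: float) -> float:
    # Expected brute-force time: on average half the keyspace is searched.
    return (2 ** bits / 2) / guesses_per_second / (3600 * 24 * 365)

# 8 random lowercase letters: ~37.6 bits -- cracked in seconds at
# 1e10 guesses/second.
weak = entropy_bits(26, 8)

# 6 random words from a 7776-word (diceware-style) list: ~77.5 bits --
# hundreds of thousands of years at the same assumed guess rate.
strong = entropy_bits(7776, 6)
```

The takeaway for point (2): a typical user-chosen password falls far short, so an escrowed private key encrypted with it is effectively readable by anyone with server access.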
I am fine with TikTok remaining one of those "we watch what you are doing" platforms. Those who do not care can have that if they wish; I do not mind.
But bullshitting that it makes users more safe, that is... bullshit! Worse than that, it distorts public opinion and intentionally fools the gullible.
It might be fine if they presented an honest choice.
They are lying straight off though... police and safety team don't read messages only "if they needed to" to keep people safe. They do so for a large variety of other reasons, such as suppressing political dissent and asserting domination and control.
I don't think we can expect most people to understand TikTok's BS here either. I notice even a skeptic like you is uncritically echoing the dubious conflation of privacy and CSAM.
Anyone who doubts the requirement for e2e messaging should not be considered a skeptic; they are fully buying into whatever narrative LEO would like you to believe.
>I think it's fine to say "You don't really have privacy on this app"
Disagree. To analogize why: privacy isn't heated seats, *it's seat belts*. Comfort features and preferences are fine to tailor to your customers and your business model. Jaguar targets a different market than Ford, and that's just fine.
Safety features should be non-negotiable for all. Both Jaguar and Ford drivers merit the utmost protection against injury in crashes. Likewise, all applications that offer user messaging functionality should offer non-defective, non-harmful versions of it. To do that, e2e privacy is absolutely necessary.
>I just don't see the point in expecting some sort of principled stance out of them.
This is the defeatism that adds momentum to a downhill trajectory. Exactly the opposite approach arrests the slide: users expecting their applications and providers to behave in principled ways, and punishing those who do not, are what keeps principles alive. Failing to expect lawful and upright behavior from those you depend on, be they political leaders or software providers, guarantees that tomorrow's behavior will be less lawful and upright than yesterday's. Stop giving these people a pass for this horrible behavior and start holding them relentlessly accountable for it; then we'll see behavior start to change in the direction that we mostly all agree it needs to.
The most effective protests against internet censorship came from massive grass roots movements, with users drawing a line in the sand that they will not tolerate further impositions on their freedom.
>In some ways I think it's worse for places like Facebook to "care about privacy" and use E2EE but then massively under-resource policing of CSAM on their platform.
The irony is so manifest: billions of people having their privacy stripped by politicians and business elites in the name of protecting our children, while those same politicians and business elites conspire en masse to prey on and sex-traffic our children. If these forces actually took those concerns seriously, rather than seeing them as an opportunity to push ulterior motives, they'd be eating each other alive right now. Half of DC, half of Hollywood, and at least a tenth of most major college administrations would ALL be in the dock.
Tesla doesn't have parking sensors. They're a safety feature. There are lots of safety features in cars that are optional; we've got an entire rating system for the safety of cars.
We're talking about an app that's controlled by the CCP, I do expect them to take a principled stance - stances like Taiwan is a part of China and you can't be openly critical of the leader of the party. They don't have the same principles as you. You can force them to put in E2EE, but you can't force them to be honest about it or competent about it. I would rather know what we're getting than to push them to lie.
This is the same thing as the OpenAI/Anthropic situation. You've got Anthropic taking a principled stance and taking pain for it, and you've got OpenAI claiming to take the same stance but somehow agreeing to the terms of the DoW. Do you think it's more likely that Anthropic carelessly caused themselves massive trouble, or that OpenAI is claiming to have won concessions that clearly won't hold in practice? I think it's naive to think the former.
Depends on your definition of "safe". Imagine an adult DMs a nude photo to a minor (or other kinds of predation).
If it's E2EE, no one except the sender and receiver know about this conversation. You want an MITM in this case to detect/block such things or at least keep record of what's going on for a subpoena.
I agree that every messaging platform in the world shouldn't be MITM'd, but every messaging platform doesn't need to be E2EE'd either.
Well, having no e2e encryption is safer than having a half-baked e2e encryption that has a backdoor and can be decrypted by the provider.
And as for TikTok's stance, I think they just don't want to get entangled with the Chinese government over encryption (and don't want to give users a false sense of privacy).
It makes certain users less safe in certain situations.
E2E makes political activists and anti-Chinese dissidents safer, at the cost of making children less safe. Whether this is a worthwhile tradeoff is a political, not technical, decision, but if we claim that there are any absolutes here, we just make sure that we'll never be taken seriously by anybody who matters.
That it’s fine because it’s the CCP (commies see all) is a new one.
It’s at best subpar for the same reasons as if it was the usual Silicon Valley spyware.
I could leave well enough alone. But why? Because there are choices? There are five other brands of cereal that do not have 25% sugar? I’d rather be a negative nancy towards these on-purpose addictive, privacy-leaking attention pimp apps.
It's fine except for their argument that it makes people less safe. If they want to disallow encryption they don't need to lie to people while they're at it.
Children are just too effective a tool when building a surveillance state. We should have banned children from owning open computers a long time ago, just like we do with alcohol, driving licenses, etc.
Instead children would own special devices that are locked down and tagged with an "underage" flag when interacting with online services, while adults could continue as normal. We already heavily restrict the freedom of children, so there is plenty of precedent for this. Optionally we could provide service points to unlock devices when they turn 18, to avoid e-waste as well.
This way it's the point of sale where you provide your ID, instead of attaching it to the hardware itself and sending it out to every single SaaS on the planet to do what they wish.
Would be a nightmare to implement and achieve the goal, but I have to say I think it’s more right than wrong. All of the data is very clear about the harms.
China has restrictions for social media and screen time for kids — how do they implement this?
At the same time, I remember growing up in the internet's wild west and bad encounters weren't an issue for me because of the golden rule I was taught from the start: you don't give your personal information and you don't interact with complete strangers. Learning to navigate the web instead of being in a walled garden was helpful in many ways.
The better question to ask ourselves is: does the capability to gather more information also lead to more power to act on that information? If the investigative resources are spread thin already, it's not like they're gonna catch more criminals by investing more there. Repelling questionable individuals off the platform with lots of transparency is an effective way, but just a specific tool for a symptom.
I think part of a better solution is to give parents and children better tools to manage their social graph themselves. Essentially the real problem is discovery, and warding off social outliers in a way that doesn't put all responsibility on opaque algos or corporations.
A part of their e2e keys could be shared using an intentionally obtuse way like mailing an item or a physical "friend code". That way parents and vetted friends can have their privacy.
You don't need to tie an ID to someone's person to get positive confirmation of someone's poor behaviour. If someone crossed the line, then parents can see it and escalate. In addition, what would happen to a child with abusive parents who can then arbitrarily restrict and deny a child's freedom to communicate? I did not have this myself, but without free access to other minds and information I would have been duller. Does a large information dragnet really serve our collective interests, or are more precise tools needed?
Locking down children’s devices doesn’t stop adults sharing illegal content with other adults though, so there would still be pressure to monitor communications between adults.
Indeed, way past time. Though no CEO would admit publicly what the addiction to attention/social media, gaming, and general screen use causes in children. Of course this should've been regulated similarly to alcohol, but billions would dry up, and it's much easier to witch-hunt marijuana and illegal raves, right?
> Instead children would own special devices that are locked down and tagged with a "underage" flag when interacting with online services, while adults could continue as normal.
California is mandating OSes provide ages to app stores, and HN lost their mind because it's a ban on Linux.
The most important principle in the modern age is the freedom to prey on wallets. You can’t give parents tools to conveniently restrict what their children do. Impressionable minds ought to live in a lord of the flies state where they are bombarded with stuff to nag to their parents about and give them FOMO about what their friends have that they don’t have.
I don't understand why all the child safety systems require age verification. Why not have a single setting on a smartphone that sends a 'child' flag to every single app or website, which then reacts accordingly? As long as you ensure that the browser can't be changed or modified, it should be fine.
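Something close to this flag already exists: RFC 8674 defines a "safe" token for the HTTP `Prefer` request header, which a locked-down browser or OS could attach to every request. A minimal server-side check might look like this (a sketch; it ignores preference parameters such as `wait=10` for brevity):

```python
def wants_safe_content(prefer_header):
    """Return True if the client signalled the RFC 8674 'safe' preference.

    `prefer_header` is the raw value of the HTTP `Prefer` request header,
    or None if the header was absent from the request.
    """
    if not prefer_header:
        return False
    # Prefer is a comma-separated list of tokens, e.g. "respond-async, safe".
    tokens = [token.strip().lower() for token in prefer_header.split(",")]
    return "safe" in tokens
```

The hard part, as the comment notes, isn't the protocol; it's making the flag tamper-proof on the device and getting services to honor it.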
Does it matter? It's just some arbitrary company. They do have the freedom to decide those things however they want, right? The customer can then decide whether to switch or not.
It matters because if it works and people continue using the platform then other providers will follow and the only remaining E2EE providers will be niche.
Ultimately your neighbors must buy the argument. The reason why this argument wins is not because framing is so tricky, but because it connects with the values of your neighbors. Trying to convince people that these aren't actually their values is swimming upriver.
This might be off-topic for the thread, but it's on-topic for child safety: I'm surprised people are being myopic about age verification. Age verification should be banned, but people ignore that nowadays most widely used online services already ask for your age and act accordingly: Twitter, YouTube, Google in general, any online marketplace. They already have so much data on their users and optimize their algorithms for those groups in an opaque way.
So yeah, age verification should be taken down, as well as the datamining these companies do and the opaque tuning of their algorithms. It baffles me: people are concerned about their children's DMs but are not concerned about what companies serve them and what they do with their data.
Monitoring children's DMs is the responsibility of the parents, not megacorps. If a parent wants to install a keylogger or screen recorder on their child's PC, that's their decision. But Google should not be able to. Neither should... literally anyone else except maybe an employer on a work-provided device.
> Monitoring children's DMs is the responsibility of the parents, not megacorps
Absolutely. But what responsibilities do megacorps have? Right now, everyone seems to avoid this question, and make do with megacorps not being responsible. This means: "we'll allow megacorps to be as they are and not take any responsibilities for the effects they cause to society". Instead of them taking responsibilities, we're collecting everyone's data and calling it a day by banning children from social networks... and this is because there are many interests involved (not related to child development and safety).
> Monitoring children's DMs is the responsibility of the parents, not megacorps.
Yup, but the tools provided make that easy or hard.
But putting that emotive bit to one side: megacorps have a vested interest in not being responsible to children. They need children's eyeballs to drive advertising revenue. If that means sending them corrosive shit, then so be it.
It's a bigger issue than encryption; it's editorial choice.
I'm all for helping parents to do this. Any site requiring age verification should indicate this as a http header or whatever, and the browser I allow my child to use should respect that and the parental controls should be easy for me to engage with
Many parental controls are massive pains to get working. Apple does fairly well (although I don't get a parental pin number to unlock the phone, which is normally fine as my child will tell me, but in some circumstances it wouldn't be), but does require the parent to be on the apple ecosystem too.
EA and Microsoft however are terrible, especially as it's likely the child will be playing fortnite/minecraft and the parent won't have ever touched it. I think with minecraft we had to make something like 5 or 6 accounts across three different sites to allow online minecraft play from a nintendo switch.
> Where are these mythical people who aren’t concerned with both?
People don't care about "what companies serve them". They only care if the children see sexual content (or things considered deviant). Once sexual and deviant content is filtered, they're happy to give away their children's development to the company's algos.
In effect, the people don't want to concern themselves with what their children consume, unless they're outraged by things normally taboo in their age group. Besides, if everyone is in it "it's not that wrong". They seek reactive entertainment rather than proactive engagement in their children's development.
There are a variety of ways (see "Verifiable Credentials") that ages can be verified without handing over any data other than "Is old enough" to social media services.
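As an illustration only: real verifiable credentials use public-key signatures (and sometimes zero-knowledge predicates), but the core idea that the service receives a signed "is old enough" bit and nothing else can be sketched with a toy HMAC token. All names and the token format here are invented for the example:

```python
import base64
import hashlib
import hmac
import json

def issue_age_token(issuer_secret: bytes, over_18: bool) -> str:
    # The issuer (e.g. a government ID service) signs ONLY the boolean
    # claim -- no name, birthdate, or identifier ever leaves the issuer.
    claim = json.dumps({"over18": over_18}, sort_keys=True).encode()
    sig = hmac.new(issuer_secret, claim, hashlib.sha256).hexdigest()
    return base64.urlsafe_b64encode(claim).decode() + "." + sig

def verify_age_token(issuer_secret: bytes, token: str):
    # The relying service validates the signature and learns one bit.
    claim_b64, sig = token.rsplit(".", 1)
    claim = base64.urlsafe_b64decode(claim_b64)
    expected = hmac.new(issuer_secret, claim, hashlib.sha256).hexdigest()
    if not hmac.compare_digest(sig, expected):
        return None  # tampered or forged token
    return json.loads(claim)["over18"]
```

Note the toy's weakness: HMAC means issuer and verifier share a secret, so the verifier could mint tokens itself. A real deployment would use asymmetric signatures, and real VC schemes additionally prevent the token from becoming a cross-site tracking identifier.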
The problem with this discussion is that this is a wonk solution for wonkish times. You're trying to thread the needle between various reasonable compromises. Ironically due to social media, that is simply not how politics and lawmaking works any more. Instead it's an emotionally driven fight between various different sorts of moral panic, and the only option is to get people more mad about surveillance than "think of the children".
You might be able to get somewhere by getting a tech company on your side, but they generally also hate adult content and don't mind banning it entirely.
(people are not going to get age verification _banned_ any time soon! That's simply not going to happen!)
> They should just have no DM feature at all, then; make all messages publicly visible.
This makes no sense.
I can discuss something in a bar which is not a very private conversation, I wouldn't care if someone else hear what I'm saying. But I also don't want someone to record it and post it on the internet to be seen by the whole world.
In a bar you're not speaking directly into a microphone that is permanently saving everything you say for later instant access by every government and advertising agency that wants to prosecute you or invade your privacy to sell you something
I suppose they mean that apps shouldn't brand their non-e2ee chat features as private or personal, since "private" is what users take as the default assumption when interacting in one-to-one chat.
Isn't that something we asked for? We keep asking for parents to parent their children instead of getting age verification laws, and that is what that looks like.
I fail to see the link between private conversations/DM and E2EE.
To quote a comment I made some time ago:
- You can call your service e2e encrypted even if every client has the same key bundled into the binary, and rotate it from time to time when it's reversed.
- You can call your service e2e encrypted even if you have a server that stores and pushes client keys. That is how you could access your message history on multiple devices.
- You can call your service e2e encrypted and just retrieve or push client keys at will whenever you get a government request.
E2EE only prevents naive middlemen from reading your messages.
Fundamentally, actual E2EE is a complicated problem, and probably not very user-friendly. It is full of technical trade-offs, and mistakes are very common, or they lead to situations that people do not want. Like, if you lose your phone or it breaks, how do you get your history back? What if you also forgot your password? Or it was stored in a local password manager?
It is a phrase that sounds good. But actually doing it effectively, in a way the average user understands and can use with minimal effort, is very hard.
You could have a reasonable legal system where privacy is guaranteed. But you do not need end-to-end encryption for that to be the case. It really is an orthogonal issue.
Sure, however kids these days often can't socialize irl - should kids be isolated from friends because they're unable to have any private conversations at all?
During times in which I was unable to socialize irl (eg school holidays), and unable to talk to my friends online, I can confirm that the isolation was not good for my mental health.
The government are able to access your conversations, data and connections with e2ee in place already. I don't see how not having e2ee would have an effect on that ability in any way.
You say that like the typical 18 year old has any idea what they're doing when it comes to proper encryption and communication safety. That is never going to be the case.
It's a communication channel attached to the most popular social network for young people. Obviously they're going to use it a lot. They use it for the extreme convenience.
I don’t really understand how we are supposed to believe in e2ee in closed proprietary apps. Even if some trusted auditor confirms they have plumbed in libsignal correctly, we have no way of knowing that their rendering code is free of content scanning hooks.
We know the technology exists. Apple had it all polished and ready to go for image scanning. I suppose the only thing in which we can place our faith is that it would be such an enormous scandal to be caught in the act that WhatsApp et al daren’t even try it.
(There is something to be said for e2ee: it protects you against an attack on Meta’s servers. Anyone who gets a shell will have nothing more than random data. Anyone who finds a hard drive in the data centre dumpster will have nothing more than a paperweight.)
The unfortunate fact about E2EE messaging is that it is hard to do. Even if you do have reproducible builds, the user is likely to make some critical mistake. What proportion of, say, Signal users actually compare any "safety numbers" for example? There is no reason to worry about software integrity if the system is already insecure due to poor usability.
Sure, we should all be doing PGP on Tails with verified key fingerprints. But how many people can actually do that?
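For reference, the "safety number" idea itself is simple in principle: both clients derive the same short fingerprint from the two identity public keys, and the users compare the strings out of band. This toy version is not Signal's actual algorithm (which iterates a hash over identity keys and user identifiers to produce 60 digits); it only shows the shape of the check:

```python
import hashlib

def toy_safety_number(pubkey_a: bytes, pubkey_b: bytes) -> str:
    # Sort the keys so both parties compute the identical string,
    # regardless of which side calls itself "a" or "b".
    material = b"".join(sorted([pubkey_a, pubkey_b]))
    digest = hashlib.sha256(material).hexdigest()
    # Render the first 30 decimal digits in groups of five for reading aloud.
    digits = str(int(digest, 16)).zfill(30)[:30]
    return " ".join(digits[i:i + 5] for i in range(0, 30, 5))
```

If both users read matching strings and they agree, the server did not swap in its own keys; the usability problem is that almost nobody performs this step.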
Same, my default MO is assuming 'e2ee' is broken and unsafe by default. Anything that I truly don't want sent over the wire would be in person, in the dark, in a root cellar, underwater. Not that I've ever been in the position to relay juicy info like that. Hyperbole I know, but my trust begins at zero.
There are good reasons to not trust Signal. The very first line of their privacy & terms page says "Signal is designed to never collect or store any sensitive information", but then they started collecting and permanently storing sensitive user data in the cloud and never updated that page. Much more recently they started collecting and storing message content in the cloud for some users, but they still refuse to update that page. I'm pretty sure it's a big fat dead canary warning users away from Signal. Any service that markets itself to whistleblowers and activists and then outright lies to them about the risks they take when using it can't be trusted for anything.
With e2ee please remember that it is important to define who are the ends.
Perhaps your e2ee is only securing your data in transit, if their servers are considered the other end.
Also, one thing people seem to misunderstand is that for most applications the conversation itself is not very interesting; the metadata (who talks to whom, when, how many messages, etc.) is 100x more valuable.
They don't believe that. It makes it more difficult to deal with governments, is all. Big Brother needs your messages from time to time, and TikTok is not willing to risk getting shut down to argue against that. We can't have pesky principles getting in the way of money.
IMO no consumer service should have private 1:1 messaging without e2e. Either only do public messaging (i.e., like a forum), or implement e2e.
It's better that they're honest about this, nobody should believe for a second that WhatsApp or FB messages are truly E2EE.
DM on social media shouldn't be used for anything remotely private. It's a convenience feature, nothing more.
For a long time we lived with private messages over SMS that were easily readable by third parties.
https://www.theguardian.com/technology/2007/feb/08/business....
The logic of "anything is better than before" is also fallacious.
If it's E2EE, no one except the sender and receiver know about this conversation. You want an MITM in this case to detect/block such things or at least keep record of what's going on for a subpoena.
I agree that every messaging platform in the world shouldn't be MITM'd, but every messaging platform doesn't need to be E2EE'd either.
Users get used to the argument with TikTok and then apply it to other platforms.
Put it this way: why wouldn't those same arguments apply to any platform (if you believed them)?
As for TikTok's stance, I think they just don't want to get entangled with the Chinese government over encryption (or give users a false sense of privacy).
E2EE makes political activists and Chinese dissidents safer, at the cost of making children less safe. Whether this is a worthwhile tradeoff is a political decision, not a technical one, but if we claim there are any absolutes here, we just ensure we'll never be taken seriously by anybody who matters.
It’s at best subpar for the same reasons as if it was the usual Silicon Valley spyware.
I could leave well enough alone. But why? Because there are choices? There are five other brands of cereal that don't have 25% sugar? I'd rather be a negative Nancy about these deliberately addictive, privacy-leaking attention-pimp apps.
Instead, children would own special devices that are locked down and tagged with an "underage" flag when interacting with online services, while adults could continue as normal. We already heavily restrict the freedom of children, so there is plenty of precedent for this. Optionally we could provide service points to unlock devices when they turn 18, to avoid e-waste as well.
This way it's the point of sale where you provide your ID, instead of attaching it to the hardware itself and sending it out to every single SaaS on the planet to do what they wish.
China has restrictions for social media and screen time for kids — how do they implement this?
The better question to ask ourselves is: does the capability to gather more information also lead to more power to act on it? If investigative resources are already spread thin, it's not like they're going to catch more criminals by investing more there. Repelling questionable individuals from the platform with lots of transparency *is* an effective way, but it's just a specific tool for a symptom.
I think part of a better solution is to give parents and children better tools to manage their social graph themselves. Essentially the real problem is discovery, and warding off social outliers in a way that doesn't put all responsibility on opaque algorithms or corporations.
Part of their E2E keys could be shared in an intentionally obtuse way, like mailing a physical item or a "friend code". That way parents and vetted friends can have their privacy. You don't need to tie an ID to someone's person to get positive confirmation of their poor behaviour: if someone crosses the line, parents can see it and escalate. In addition, what would happen to a child with abusive parents, who could then arbitrarily restrict and deny the child's freedom to communicate? I did not have this myself, but without free access to other minds and information I would have been duller. Does a large information dragnet really serve our collective interests, or are more precise tools needed?
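One hedged sketch of how such a "friend code" might work (everything here — the key bytes, the truncation length, the grouping — is purely illustrative): derive a short, human-typable fingerprint from a contact's public key, print it on a card, and confirm it out-of-band before trusting the key.

```python
import base64
import hashlib

def friend_code(public_key_bytes: bytes) -> str:
    """Derive a short, typable 'friend code' from a public key.

    The code is a truncated fingerprint: enough to confirm
    out-of-band (e.g. printed and mailed) that you are adding the
    right key, without exposing the key itself.
    """
    digest = hashlib.sha256(public_key_bytes).digest()
    # 10 bytes -> exactly 16 base32 characters (no padding)
    code = base64.b32encode(digest[:10]).decode("ascii")
    # group as XXXX-XXXX-XXXX-XXXX for readability
    return "-".join(code[i:i + 4] for i in range(0, 16, 4))

# placeholder key material for illustration only
alice_public_key = b"\x04" + bytes(range(64))
print(friend_code(alice_public_key))
```

Both sides compute the code from the key they hold; if the codes match, the key exchange wasn't tampered with in transit — the same idea as Signal's safety numbers, but delivered physically.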
California is mandating that OSes provide ages to app stores, and HN lost its mind because it's a ban on Linux.
After providing their identities to prove they are adults, and having all their activities tracked wherever they go and whatever they do.
The first 18 years aren't freedom either, just the system prepping you for what's ahead.
That’s why children must be free.
I see you Mr Quaker Oats
So yeah, age verification should be taken down, as well as the datamining these companies do and the opaque tuning of their algorithms. It baffles me: people are concerned about their children's DMs but not about what companies serve them and what they do with their data.
Absolutely. But what responsibilities do megacorps have? Right now, everyone seems to avoid this question and make do with megacorps not being responsible. This means: "we'll allow megacorps to be as they are and not take any responsibility for the effects they cause to society." Instead of them taking responsibility, we're collecting everyone's data and calling it a day by banning children from social networks... and this is because there are many interests involved (not related to child development and safety).
Yup, but the tools provided make that easy or hard.
But putting that emotive bit to one side, megacorps have a vested interest in not being responsible to children. They need children's eyeballs to drive advertising revenue. If that means sending them corrosive shit, then so be it.
It's a bigger issue than encryption; it's editorial choice.
Many parental controls are massive pains to get working. Apple does fairly well (although I don't get a parental PIN to unlock the phone, which is normally fine as my child will tell me, but in some circumstances it wouldn't be), but it does require the parent to be on the Apple ecosystem too.
EA and Microsoft, however, are terrible, especially as it's likely the child will be playing Fortnite/Minecraft and the parent won't ever have touched it. With Minecraft we had to make something like 5 or 6 accounts across three different sites to allow online Minecraft play from a Nintendo Switch.
Kids should be able to write a journal or talk to friends with total trust that this information will not reach their parents.
That said, these platforms are making it impossible for parents to monitor anything. They're literally designed to profit off addiction in children.
The children yearn for the mines(?).
Hogwash.
Where are these mythical people who aren’t concerned with both?
People don't care about "what companies serve them". They only care if the children see sexual content (or things considered deviant). Once sexual and deviant content is filtered, they're happy to give away their children's development to the company's algos.
In effect, the people don't want to concern themselves with what their children consume, unless they're outraged by things normally taboo in their age group. Besides, if everyone is in it "it's not that wrong". They seek reactive entertainment rather than proactive engagement in their children's development.
They're called politicians.
Why?
> They already got so much data on their users
There are a variety of ways (see "Verifiable Credentials") that ages can be verified without handing over any data other than "Is old enough" to social media services.
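As a toy illustration of the idea (not a real Verifiable Credentials implementation — real systems use public-key signatures and often zero-knowledge proofs; the issuer key and HMAC here are stand-ins for a proper signature): the issuer attests only to the derived claim "is old enough", so the service never sees a birthdate or identity.

```python
import hashlib
import hmac
import json

# Placeholder secret; a real issuer (e.g. a government ID provider)
# would sign with a private key whose public half services can check.
ISSUER_KEY = b"issuer-secret-key"

def issue_age_credential(is_over_18: bool) -> dict:
    """Issuer derives a boolean claim from the user's ID and attests to it."""
    claim = json.dumps({"over18": is_over_18}, sort_keys=True)
    sig = hmac.new(ISSUER_KEY, claim.encode(), hashlib.sha256).hexdigest()
    return {"claim": claim, "sig": sig}

def service_verifies(credential: dict) -> bool:
    """The service checks the attestation and learns ONLY the boolean."""
    expected = hmac.new(ISSUER_KEY, credential["claim"].encode(),
                        hashlib.sha256).hexdigest()
    if not hmac.compare_digest(expected, credential["sig"]):
        return False  # forged or tampered credential
    return json.loads(credential["claim"])["over18"]

cred = issue_age_credential(True)
print(service_verifies(cred))  # prints True, without revealing a birthdate
```

The point is the data-flow shape: the ID check happens once at the issuer, and downstream services only ever receive "old enough: yes/no" plus proof it came from a trusted issuer.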
Allowing for more effective propaganda and electoral control, and setting fire to the concept of a government _representing_ anyone.
This is the next two steps into 1984.
Once you start mandating this, there's no going back.
The next generation will start associating wrongthink with government IDs. (Wait, we already do that, right?)
You might be able to get somewhere by getting a tech company on your side, but they generally also hate adult content and don't mind banning it entirely.
(people are not going to get age verification _banned_ any time soon! That's simply not going to happen!)
It’s ok for a platform to not feature private conversations. They should just have no DM feature at all, then; make all messages publicly visible.
Private conversations are indeed not for all ages. Parents should be able to grant access to that on an individual basis.
This makes no sense.
I can discuss something in a bar that is not a very private conversation; I wouldn't care if someone else hears what I'm saying. But I also don't want someone to record it and post it on the internet to be seen by the whole world.
Privacy is not just boolean you toggle somewhere.
To quote a comment I made some time ago:
- You can call your service e2e encrypted even if every client has the same key bundled into the binary, rotated from time to time when it's reverse-engineered.
- You can call your service e2e encrypted even if you have a server that stores and pushes client keys. That is how you could access your message history on multiple devices.
- You can call your service e2e encrypted and just retrieve or push client keys at will whenever you get a government request.
E2EE only prevents naive middlemen from reading your messages.
It is a phrase that sounds good. But actually doing it effectively, in a way the average user understands and can use with minimal effort, is very hard.
There are parents out there who would record and AI-analyze every single private conversation their kids have if only the technology enabled it.
During times in which I was unable to socialize in person (e.g. school holidays) and unable to talk to my friends online, I can confirm that the isolation was not good for my mental health.
Once it gets big enough in your location you buy it for that sweet sweet intel.
Fixed a bit.
I'm mindful that it's less secure than other apps, but for a lot of chats it doesn't matter.
It's a communication channel attached to the most popular social network for young people. Obviously they're going to use it a lot. They use it for the extreme convenience.
And in a perfect world essentially shouldn’t have to be, at least inside expensive walled garden app stores.
We know the technology exists. Apple had it all polished and ready to go for image scanning. I suppose the only thing in which we can place our faith is that it would be such an enormous scandal to be caught in the act that WhatsApp et al daren’t even try it.
(There is something to be said for e2ee: it protects you against an attack on Meta’s servers. Anyone who gets a shell will have nothing more than random data. Anyone who finds a hard drive in the data centre dumpster will have nothing more than a paperweight.)
Sure, we should all be doing PGP on Tails with verified key fingerprints. But how many people can actually do that?
People want to believe in E2EE, it's almost like religion at this point.
Protecting people is synonymous with E2EE, even if you can't verify it and it can potentially be broken.
I was even more controversial and singled out Signal as an example: https://blog.dijit.sh/i-don-t-trust-signal/
Perhaps your e2ee is only securing your data in travel if their servers are considered the other end.
Also, one thing people seem to misunderstand is that for most applications the conversation itself is not very interesting; the metadata (who talked to whom, when, how many messages, etc.) is 100x more valuable.
They don't believe that. It makes it more difficult to deal with governments, is all. Big Brother needs your messages from time to time, and TikTok is not willing to risk getting shut down to argue against that. We can't have pesky principles getting in the way of money.