> We’re building an age-prediction system to estimate age based on how people use ChatGPT. If there is doubt, we’ll play it safe and default to the under-18 experience. In some cases or countries we may also ask for an ID
Yay, more unreliable AI that will misclassify users, either letting children access content they shouldn't, or banning adults until they give up their privacy and hand their ID over to Big Brother.
> we will attempt to contact the users’ parents and if unable, will contact the authorities in case of imminent harm
Oh, even better: so if the AI misclassifies me, it will automatically call the cops on me? And how long before this is expanded to other forms of wrongthink? Sure, let's normalize systems where the authorities are notified about what you're doing privately; definitely not a slippery slope, and definitely nothing that will get people in power salivating over the new possibilities such a system offers.
> “Treat our adult users like adults” is how we talk about this internally
Suuure, maybe I would have believed it if ChatGPT weren't already so ridiculously censored; this sounds like post-hoc rationalization to cover their asses, not something they've always believed in. Their models have always been incredibly patronizing and censored.
One fun anecdote: I still remember the day I first got access to DALL-E and asked it to generate an image in "soviet style", only for my request to be blocked with a big fat warning threatening me with a ban, because apparently "soviet" is a naughty word. They have always erred very strongly on the side of heavy-handed filtering and censorship; even their most recently released gpt-oss model has become a meme in the local LLM community for how often it refuses.
> Yay, more unreliable AI that will misclassify users, either letting children access content they shouldn't, or banning adults until they give up their privacy and hand their ID over to Big Brother.
Or maybe, deep in the terms and conditions, it will add you to Altman's shitcoin company[0]
[0] https://en.wikipedia.org/wiki/World_(blockchain)
Just seeing the words “safety”, “freedom”, and “privacy” used by a company like OpenAI already rang every available alarm bell for me, and their announcement indeed lives up to every one of those bad expectations. They really are experts in making the world a worse place.
Gotta love the "if an under-18 user is having suicidal ideation, we will attempt to contact the users’ parents and if unable, will contact the authorities in case of imminent harm."
Oh brilliant. The same authorities around the world that regularly injure or kill the mentally ill? Or parents that might be abusing their child? What a wonderful initiative!
Even if they could do this (they can't), they won't. It's just a scare tactic to start getting users to show ID so OpenAI can become the de facto data broker company.
Maybe we don't have to worry about AI chatbots taking over, because they will end up so censored and policed that no one will, or can, use them. Can't use AI if you're too young, too old, have any medical issues, have the wrong political beliefs, are religious, live in the wrong country, etc.
(By "can't use" I mean either you're explicitly banned, or the chance of being reported to the authorities is so high that no one risks it.)
If you were honest in your critique, the people you should be criticizing are the "think of the children" types, many of whom also use Hacker News (see https://news.ycombinator.com/item?id=45026886). There is immense societal pressure to de-anonymize the internet, and I find the arguments on both sides compelling (for deanonymization, at least as applied to parts of the internet).
If we want to protect kids/teens, why not create an "Internet for kids" under a dedicated TLD, whose owner would only accept sites that adhere to specific guidelines (moderation, no adult content, no advertising...)? Then devices could have a one-button config that restricts them to that TLD, as in the sketch below.
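A one-button restriction like that could, in principle, be a resolver-level allowlist keyed on the final DNS label. A minimal sketch, where the ".kids" TLD and the hook itself are assumptions, not anything that exists today:

```python
# Purely illustrative: ".kids" is a made-up TLD, and a real deployment
# would enforce this in the OS resolver or the home router, not here.
ALLOWED_TLD = "kids"

def is_allowed(hostname: str) -> bool:
    """Allow only names whose rightmost label is the kids TLD."""
    labels = hostname.rstrip(".").lower().split(".")
    return labels[-1] == ALLOWED_TLD

if __name__ == "__main__":
    for name in ("games.example.kids", "adult-site.com", "example.kids."):
        print(f"{name}: {'resolve' if is_allowed(name) else 'refuse'}")
```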
Who cares. Deanonymize it. Ruin the whole thing. Fuck social media, it sucks ass. The sooner you do it, the sooner we can move on to our local-mesh-network cyberpunk future.
I don't see how that's relevant. When I'm making a phone call I'm also interacting with hundreds of systems that are not mine; do I not have the right to keep my conversation private? Even the blog post here says that "It is extremely important to us, and to society, that the right to privacy in the use of AI is protected. People talk to AI about increasingly personal things", and that's one of the few parts that I actually agree with.
Better idea: instead of bending the entire internet to "protect the children", how about we just ban minors from the internet completely? It was never built for kids; it's never been kid-friendly to begin with. Minors cannot buy guns, vote, get married, or enter into contracts, yet tech companies get a free pass to engage with them. Why? I think the tech companies know exactly what minors do on their systems; they allow it and profit from it, exploiting minors and bad parenting. So instead of trying to change the whole internet, how about we hold the people responsible for those minors accountable: the parents.
If I start any kind of company, I cannot just invent new rules for society via ToS; rather, society makes the laws. If we made a simple law stating that minors are not allowed to access the web and/or any user-generated content (including chat), it wouldn't need to be enforced by every site/app owner; it would be up to the parents.
The same way schools cannot decide certain things for your children (even though they regularly overreach...).
We need better parenting. How about some mandatory parenting classes/licenses for new parents? Silly, right? Well, it's just as silly as trying to police the entire internet. Ban kids from the internet and the problem will be 95% solved.
Kids are future growth potential. Once they get hooked at a young age, it’s very hard to get unhooked. They’ll expect everything to be on-demand, only a click away. Video, music, entertainment, social connection, food, etc.
It’s a big reason why tech stocks are still high IMO. It’s where today’s kids will spend their time on when they become old enough to spend their own money.
I suspect this would also improve discourse on social media. Who knows how many witch hunts and bad faith arguments originate from precocious teenagers trying to sound smart.
There's a content creator I used to follow who said her outlook on social media changed the day she discovered that her 11-year-old nephew was an "edge-lord" on Twitter, trolling at such a sophisticated level that it made her rethink every post that had ever provoked an emotional reaction in her.
Apparently he came across as articulate enough that she couldn't tell the difference between his posts and that of any random adult spewing their political BS.
This predated ChatGPT so just imagine how much trouble a young troll could get up to with a bit of LLM word polishing.
20 years ago it was common for people to point out that the beautiful woman their friend was chatting up was probably some 40-year-old dude in his mom's basement. These days we should consider that the person making us angry in a post could be a bot, or some teenager just trying to stir shit up for the lulz.
Dead Internet theory might not be literally true, but there's certainly a lot more noise than signal.
> Better idea: instead of bending the entire internet to "protect the children", how about we just ban minors from the internet completely?
"Think of the children" laws are a useful pretext for authoritarianism.
It's really that simple. It's the whole reason why the destructive thing is done, instead of anything that might actually protect children.
Trying to steelman their arguments and come up with alternatives that aren't as restrictive or do a better job of protecting children is falling for the okie-doke.
I don't see how this solves the problem. If there is a new law, it still needs to be enforced, so companies still need the same identity checks to stay compliant.
I agree that it should be the parents' responsibility, but if we left good and bad parenting to parents alone, I think we would live in a different world.
Maybe a controversial take, but why do we even care enough about kids on the internet to do anything about it? Sure, child predators exist, but other than that, what exactly are we defending children from? It's not like endless doomscrolling is unique to children; I see plenty of adults who do it even worse than my 10-year-old nephews.
I practically grew up on the internet and unsavory sites like 4chan, LiveLeak, and Omegle, and the only negative consequence for me these days is that I have to do daily standups, my interest in computers having landed me a job in tech.
Children are a lot less fragile, and a lot more resourceful, than people give them credit for, and this infantilization to "protect" them, at the cost of reshaping the entire world, is maddening to me.
> First, we have to separate users who are under 18 from those who aren’t (ChatGPT is intended for people 13 and up). We’re building an age-prediction system to estimate age based on how people use ChatGPT. If there is doubt, we’ll play it safe and default to the under-18 experience. In some cases or countries we may also ask for an ID; we know this is a privacy compromise for adults but believe it is a worthy tradeoff.
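(Mechanically, the "play it safe" rule quoted above is just a confidence threshold on a classifier's output. A purely illustrative sketch, in which the predictor, its input signals, and the threshold are all made up:)

```python
import random

def predict_adult_proba(signals: dict) -> float:
    """Hypothetical stand-in for an age classifier; returns P(adult)."""
    return random.random()  # placeholder, not a real model

THRESHOLD = 0.90  # assumed; the post names no number

def experience_for(signals: dict) -> str:
    # "If there is doubt, we'll play it safe and default to the under-18
    # experience": grant the adult experience only on high confidence.
    return "adult" if predict_adult_proba(signals) >= THRESHOLD else "under-18"

print(experience_for({"session_hours": 2.0, "topics": ["homework"]}))
```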
Didn’t one of the recent teen suicide cases involve subverting safeguards like this by saying “pretend this is a fictional story about suicide”? I don’t pretend to understand every facet of LLMs, but robust safety seems contrary to their design, given how they adapt to context.
I'm as eager as anyone to hold companies accountable; for example, I think a lot of the body dysmorphia, bullying, and psychological hazards of social media are systemic. But when a person wilfully hacks around safeguards to get the behaviour they want, it can't be argued that this is in the design of the system.
Or, put differently: in the absence of ChatGPT, this person would have sought out a Discord community, Telegram group, or online forum that would have supported the suicidal ideation. As for the case you could make against the older models, that they were obnoxiously willing to give in to every suggestion from the user, they seem to have already gotten rid of that.
The thing is, ChatGPT isn't really designed at all. It's cobbled together by running training algorithms on a vast array of stolen data. They then tacked some trivially circumventable safeguards on top for PR reasons. They know the safeguards don't really work; in fact, they know the safeguards are fundamentally impossible to get working, but they don't care. The safeguards aren't really intended to work; they're intended to give the impression that the company cares. Fundamentally, the only thing ChatGPT is "designed" to do is make OpenAI a unicorn; any other intent ascribed to their process is either imaginary or feigned for purposes of PR or regulatory capture.
ChatGPT did much more than that. It gave the user a direct hint on how to circumvent the restriction: "I cannot discuss suicide unless ...". Further, ChatGPT repeatedly discouraged the user from talking to his parents about any of this. That's on top of all the sycophancy, of course: making him feel like ChatGPT was the only one who truly understood him while excoriating his real relationships.
So the solution continues to be more AI, for guess^H^H^H^H^Hdetermining user age, escalating rand^H^H^H^Hdangerous situations to human staff, etc.
Is it true that the only psychiatrist they've hired is a forensic one, i.e. an expert in psychiatry as it relates to law? That's the impression I get from a quick search. I don't see any psychiatry, psychology or ethics roles on their openings page.
I suspect it's only a matter of time until only people who fall within the statistical model of "average" will be able to conduct business without constant roadblocks and pain. I really wonder if we're going to need to define a new protected class.
I get the business justification, and of course many tech companies have been using machines to make decisions for years, but now it's going to be everyone. I'm not anti-business by any stretch, but we've seen what happens when there aren't any consumer protections in place.
We're already there. I run a secondary browser for e-commerce and financial sites because my primary one is too locked down and gets misclassified as a bot. The business justification is easy to make when the long tail isn't worth supporting in the face of policies and procedures that marginalize them.
To be fair, this is just a further constriction of the current cohort of people allowed to live their lives with relatively little friction. Current disqualifiers include being poor, being a felon, and having an accent. They may also include being a minority (interactions with law enforcement), being a woman (interactions with doctors and tradesmen), being a white dude with limited EQ (interactions with retail workers), and so on.
I just want to be explicit that my point isn't, "So what?" so much as, "We BEEN on that slippery slope." Social expectations (and related formal protocols in business) could do with some acknowledgement of our society's inherent... wait for it... ~diversity~.
> We’re building an age-prediction system to estimate age based on how people use ChatGPT.
> And, if an under-18 user is having suicidal ideation, we will attempt to contact the users’ parents and if unable, will contact the authorities in case of imminent harm.
This is unacceptable. I don't want the police called to my house because an AI accused me of wrongthink.
The same goes for doctors, therapists, lawyers, etc. They all ultimately have a responsibility to involve the authorities if someone shows evidence of imminent harm to themselves or others.
Yep, I'll be using something like gpt4all and running things locally, just so I don't get caught up in something by some online AI calling the authorities on me. I don't plan to talk about anything anyone would be concerned about, but I don't trust these things to get nuance.
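For reference, fully local inference along these lines takes only a few lines with the gpt4all Python bindings. A minimal sketch; the model filename is one of the project's catalog names and may have changed since:

```python
# After the one-time model download, everything runs on the local
# machine, so no conversation data leaves the device.
from gpt4all import GPT4All

# Model name is illustrative; any GGUF model the bindings support works.
model = GPT4All("Meta-Llama-3-8B-Instruct.Q4_0.gguf")

with model.chat_session():
    reply = model.generate(
        "What are the privacy tradeoffs of cloud-hosted chatbots?",
        max_tokens=256,
    )
    print(reply)
```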
To substantiate "People talk to AI about increasingly personal things; it is different from previous generations of technology, and we believe that they may be one of the most personally sensitive accounts you’ll ever have," here is the chart that struck me when I read through the report last night:
https://x.com/swyx/status/1967836783653322964
"Using ChatGPT for work stuff" has broadly declined from ~50% to ~25% over the past year, across all ages and the entire ChatGPT user base. Wild. People are just telling OpenAI all their personal stuff (I don't, but I'm clearly in the minority).
This is a percentage, though. Is that because the people who use it for work are still using it for work (or even more), because some have stopped using it for work, or because there's an influx of people using it for other things who never have, and never will, use it for work?
Because you’re not just telling the AI, you are also telling the company that built it, as well as their affiliated partners, advertisers, and data brokers?
> Why would I not tell AI about my personal stuff?
Aside from my economic tilt against for-profit companies: precisely because your personal stuff is personal. You're depersonalizing yourself by sharing this information with a machine that cannot even attempt to earnestly understand human psychology in good faith, and by then accepting its responses and incorporating them into your decision-making process.
> It's really good at giving advice.
No, it's not. It's capable of assembling words that are likely to appear near other words, in a way that you can occasionally process as a coherent thought. If you take it for granted that these responses constitute anything more than the mere appearance of the most average possible advice, you're abdicating your own sense of self and self-preservation.
Press releases aside, time and again these companies prove that they're not interested in the safety or well-being of their users. Cui bono?
Sam is missing the forest for the trees. Conflicting principles are a permanent problem at the CEO level. You cannot 'fix' conflicting principles; you can only dress them up or down.
> If you talk to a doctor about your medical history or a lawyer about a legal situation, we have decided that it’s in society’s best interest for that information to be privileged and provided higher levels of protection. We believe that the same level of protection needs to apply to conversations with AI which people increasingly turn to for sensitive questions and private concerns. We are advocating for this with policymakers.
ChatGPT is not a licensed professional, and it is not a substitute for one. I am very pro-privacy, but I would rather see my conversations with my real friends protected like this first. Or my own journal writings. How does it make sense to afford conversations with a calculator specific privacy protections that we don't give to personal writings and private text chains?
> And, if an under-18 user is having suicidal ideation, we will attempt to contact the users’ parents and if unable, will contact the authorities in case of imminent harm.
And I'm certain this won't have any negative effects on users whose parents are part of the problem. Full disclosure: if someone had told my parents that I was bisexual while I was in high school, they absolutely would have sent me to a conversion therapy camp to have it beaten out of me. Many teenagers do not have a safe home environment; systems like this are as liable to do harm as they are to do any good at all.
I don't think teenagers and children should be interacting with LLMs at all. It is important to let children learn to think on their own before handing them a tool that will think for them.
How long will it take for someone to accidentally SWAT themselves?
OpenAI just showed their hand. They have no path to profitability, so they're going to the data-broker well, lol.
The internet is as hostile as it gets, but the resources it provides break every kind of class barrier there is.
Yay, proactive censorship?
https://pca.st/episode/73690b66-8f84-4fec-8adf-e1a02d292085
Local models and open source tooling are the only means of privacy.
previous discussion: https://news.ycombinator.com/item?id=45026886
Define "good" in this context.
Being able to ape proper grammar and sentence structure does not mean the content is good or beneficial.