Readit News
blackqueeriroh commented on A teen was suicidal. ChatGPT was the friend he confided in   nytimes.com/2025/08/26/te... · Posted by u/jaredwiener
blackqueeriroh · 3 days ago
I have a question for folks. This young man was 17. Most folks in this discussion have said that because he was 17, it’s different than it would be for, say, an adult.

What materially changes when someone goes from 17 to 18? Why would one be okay but not the other?

blackqueeriroh commented on A teen was suicidal. ChatGPT was the friend he confided in   nytimes.com/2025/08/26/te... · Posted by u/jaredwiener
kelnos · 3 days ago
We should all get to decide, collectively. That's how society works, even if imperfectly.

Someone died who didn't have to. I don't think it's specifically OpenAI's or ChatGPT's fault that he died, but they could have done more to direct him toward getting help, and could have stopped answering questions about how to commit suicide.

blackqueeriroh · 3 days ago
How would we decide, collectively? Because currently, that’s what we have done. We have elected the people currently regulating (or not regulating) AI.
blackqueeriroh commented on A teen was suicidal. ChatGPT was the friend he confided in   nytimes.com/2025/08/26/te... · Posted by u/jaredwiener
teaearlgraycold · 3 days ago
I really want the LLMs to respond like a senior developer that doesn't have time for you but needs you to get your job done right. A little rude and judgemental, but also highly concise.
blackqueeriroh · 3 days ago
You say that now, but how people actually behave suggests you’d probably get tired of it.
blackqueeriroh commented on A teen was suicidal. ChatGPT was the friend he confided in   nytimes.com/2025/08/26/te... · Posted by u/jaredwiener
nis0s · 3 days ago
Why did developers spread the idea of AI consciousness for LLMs in the first place? The usefulness and capability of an LLM is orthogonal to its capacity to develop consciousness.

I think people would use LLMs with more detachment if they didn’t believe there was something like a person in them, but they would still become reliant on them, regardless, like people did on calculators for math.

blackqueeriroh · 3 days ago
Because humans like to believe they are the most intelligent thing on the planet, and would be very uninterested in something that seemed smarter than them if it didn’t act like them.
blackqueeriroh commented on A teen was suicidal. ChatGPT was the friend he confided in   nytimes.com/2025/08/26/te... · Posted by u/jaredwiener
broker354690 · 3 days ago
Why isn't OpenAI criminally liable for this?

Last I checked:

-Signals emitted by a machine at the behest of a legal person intended to be read/heard by another legal person are legally classified as 'speech'.

-ChatGPT is just a program like Microsoft Word and not a legal person. OpenAI is a legal person, though.

-The servers running ChatGPT are owned by OpenAI.

-OpenAI willingly did business with this teenager, letting him set up an account in exchange for money. This business is a service under the control of OpenAI, not a product like a knife or gun. OpenAI intended to transmit speech to this teenager.

-A person can be liable (civilly? criminally?) for inciting another person's suicide. It is not protected speech to persuade someone into suicide.

-OpenAI produced some illegal speech and sent it to a suicidal teenager, who then committed suicide.

If Sam Altman stabbed the kid to death, it wouldn't matter if he did it by accident. Sam Altman would be at fault. You wouldn't sue or arrest the knife he used to do the deed.

Any lawyers here who can correct me, seeing as I am not one? It seems clear as day to me that OpenAI/Sam Altman directly encouraged a child to kill themselves.

blackqueeriroh · 3 days ago
Section 230, without which Hacker News wouldn’t exist.
blackqueeriroh commented on A teen was suicidal. ChatGPT was the friend he confided in   nytimes.com/2025/08/26/te... · Posted by u/jaredwiener
brainless · 3 days ago
I do not think this is fair. What is fair: at the first hint of mental distress, any LLM should completely cut off communication. The app should have a button linking to the actual help services we have.

Mental health issues are not to be debated. LLMs should be at the highest level of alert, nothing less. Full stop. End of story.

blackqueeriroh · 3 days ago
Which mental health issues are not to be debated? Just depression or suicidality? What about autism or ADHD? What about BPD? Sociopathy? What about complex PTSD? Down Syndrome? Anxiety? Which ones are on the watch list and which aren’t?
blackqueeriroh commented on A teen was suicidal. ChatGPT was the friend he confided in   nytimes.com/2025/08/26/te... · Posted by u/jaredwiener
slipperydippery · 3 days ago
I like this framing even better.

This is similar to my take on things like Facebook apparently not being able to operate without psychologically destroying moderators. If that’s true… seems like they just shouldn’t operate, then.

If you’re putting up a service that you know will attempt to present itself as being capable of things it isn’t… seems like you should get in a shitload of trouble for that? Like maybe don’t do it at all? Maybe don’t unleash services you can’t constrain in ways that you definitely ought to?

blackqueeriroh · 3 days ago
But understand that something like Facebook ceasing to operate doesn’t actually make the world any safer. In fact, it makes it less safe, because the same behavior happens on the open internet, where nobody is moderating it.
blackqueeriroh commented on A teen was suicidal. ChatGPT was the friend he confided in   nytimes.com/2025/08/26/te... · Posted by u/jaredwiener
notachatbot123 · 3 days ago
I agree very much. There is no reason for LLMs to be designed as human-like chat companions, creating a false sense that they are something other than technology.
blackqueeriroh · 3 days ago
There are absolutely reasons for LLMs to be designed as human-like chat companions, starting with the fact that they’re trained on human speech and behavior. What they do is statistically predict the most likely next token, which means they will statistically sound and act much like a human.
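As a toy sketch of that point (hypothetical probabilities and greedy decoding, not anyone's actual model): a model trained on human text assigns its probability mass according to what humans tend to say next, so picking the most likely token naturally yields human-sounding output.

```python
# Toy next-token prediction with made-up probabilities.
# A real LLM computes a distribution over tens of thousands of tokens;
# the mechanism below (pick the most probable continuation) is the same idea.
probs = {
    "How are you": {"?": 0.6, "doing": 0.3, "today": 0.1},
}

def next_token(context: str) -> str:
    """Greedy decoding: return the highest-probability next token."""
    dist = probs[context]
    return max(dist, key=dist.get)

print(next_token("How are you"))  # -> "?"
```

Because the training data is human conversation, even this trivially greedy strategy reproduces human phrasing; production systems sample from the distribution instead, which makes the output less repetitive but no less human-like.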
blackqueeriroh commented on A teen was suicidal. ChatGPT was the friend he confided in   nytimes.com/2025/08/26/te... · Posted by u/jaredwiener
AIPedant · 3 days ago
Yes, if this were an adult human OpenAI employee DMing this stuff to a kid through an official OpenAI platform, then

a) the human would (deservedly[1]) be arrested for manslaughter, possibly murder

b) OpenAI would be deeply (and deservedly) vulnerable to civil liability

c) state and federal regulators would be on the warpath against OpenAI

Obviously we can't arrest ChatGPT. But nothing about ChatGPT being the culprit changes b) and c) - in fact it makes c) far more urgent.

[1] It is a somewhat ugly constitutional question whether this speech would be protected if it was between two adults, assuming the other adult was not acting as a caregiver. There was an ugly case in Massachusetts in which a 17-year-old ordered her 18-year-old boyfriend to kill himself and he did so; she was convicted of involuntary manslaughter, and any civil-liberties-minded person understands the difficult issues that case raises. These issues are moot if the speech is between an adult and a child, where there is a much higher bar.

blackqueeriroh · 3 days ago
Section 230 changes b) and c). OpenAI will argue that it’s user-generated content, and it’s likely that they would win.
blackqueeriroh commented on Training language models to be warm and empathetic makes them less reliable   arxiv.org/abs/2507.21919... · Posted by u/Cynddl
yunwal · 17 days ago
Obesity has been considered a disease since the term existed. Overweight is the term that is used for weight that’s abnormally high without necessarily indicating disease.

There’s been some confusion around this because people erroneously defined BMI limits for obesity, but it has always referred to the concept of having such a high body-fat content that it’s unhealthy or dangerous.

blackqueeriroh · 17 days ago
This is false. Obesity wasn’t considered a disease until 2013. [1] The term has been around since the late 17th century. [2]

[1]: https://obesitymedicine.org/blog/ama-adopts-policy-recognize...

[2]: https://www1.racgp.org.au/ajgp/2019/october/the-politics-of-...

u/blackqueeriroh · Karma: 108 · Cake day: December 3, 2021