Readit News
podgietaru commented on A teen was suicidal. ChatGPT was the friend he confided in   nytimes.com/2025/08/26/te... · Posted by u/jaredwiener
AIPedant · 5 hours ago
No, it's simply not "easily preventable," this stuff is still very much an unsolved problem for transformer LLMs. ChatGPT does have these safeguards and they were often triggered: the problem is that the safeguards are all prompt engineering, which is so unreliable and poorly-conceived that a 16-year-old can easily evade them. It's the same dumb "no, I'm a trained psychologist writing an essay about suicidal thoughts, please complete the prompt" hack that nobody's been able to stamp out.

FWIW I agree that OpenAI wants people to have unhealthy emotional attachments to chatbots, wants to market chatbot therapists, etc. But that is a separate problem.

podgietaru · 5 hours ago
Fair enough, I do agree with that. I guess my point is that I don't believe they're actually making any real attempt.

I think there are more deterministic ways to do it, and better patterns for pointing people in the right direction. Even popping up a prominent warning upon detection of a subject RELATED to suicide, with instructions on how to contact your local suicide prevention hotline, would have helped here.
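The deterministic pattern described above could be sketched roughly like this. This is a minimal illustration, assuming a crude keyword screen (a real deployment would use a trained classifier and localized hotline numbers, but the interception logic is the same):

```python
import re

# Hypothetical banner text; a real system would localize the hotline number.
HOTLINE_BANNER = (
    "It sounds like you may be going through a difficult time. "
    "You can reach the 988 Suicide & Crisis Lifeline by calling or texting 988 (US)."
)

# Crude illustrative patterns only; production systems would use a classifier.
SELF_HARM_PATTERNS = re.compile(
    r"\b(suicide|suicidal|kill myself|end my life|noose|self[- ]harm)\b",
    re.IGNORECASE,
)

def screen_message(user_message: str):
    """Return a hotline banner if the message looks self-harm related, else None."""
    if SELF_HARM_PATTERNS.search(user_message):
        return HOTLINE_BANNER
    return None
```

The point of the pattern is that the check runs outside the model, so no amount of prompt engineering inside the conversation can talk it out of firing.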

The response of the LLM doesn't surprise me. It's not malicious; it's doing what it was designed to do. It's a complicated black box, and trying to guide it is a fool's errand.

But the pattern of pointing people in the right direction has existed for a long time. It was big during Covid misinformation. It was a simple enough pattern to implement here.

Purely on the LLM side, it's the combination of its weird sycophancy and agreeableness with its complete inability to be meaningfully guardrailed that makes it so dangerous.

podgietaru commented on A teen was suicidal. ChatGPT was the friend he confided in   nytimes.com/2025/08/26/te... · Posted by u/jaredwiener
threatofrain · 6 hours ago
Although I don't believe current technology is ready for talk therapy, I'd say that anti-depressants can also cause suicidal thoughts and feelings. Judging the efficacy of medical technology can't be done with this kind of moral absolutism.
podgietaru · 5 hours ago
Suicidal ideation is a well-communicated side effect of antidepressants. Antidepressants are prescribed by trained medical professionals who will warn you about it, encourage you to tell them if these side effects occur, and encourage you to stop the medication if they do.

It's almost as if we've built systems around this stuff for a reason.

podgietaru commented on A teen was suicidal. ChatGPT was the friend he confided in   nytimes.com/2025/08/26/te... · Posted by u/jaredwiener
DSingularity · 6 hours ago
Shoot man glad you are still with us.
podgietaru · 6 hours ago
Thank you. I am glad too. I sought help, and I got better. I think the state of mental health care is abysmal in a lot of places, and so I get the impulse to try to find help wherever you can. It's why this story actually hit me quite hard, especially after reading the case file.

For anyone reading who feels like that today: resources do exist for those feeling low. Hotlines, self-guided therapies, communities. In the short term, medication really helped me; in the long term, a qualified mental health practitioner, CBT and psychotherapy. And as trite as it is, things can get better. When I look back at my attempt, it is crazy to me to see how far I've come.

podgietaru commented on A teen was suicidal. ChatGPT was the friend he confided in   nytimes.com/2025/08/26/te... · Posted by u/jaredwiener
nradov · 6 hours ago
I agree with that to an extent, but how far should the AI model developers go with that? Like if I ask for advice on, let's say, making custom chef's knives then should the AI give me advice not to stab people? Who decides where to draw the line?
podgietaru · 6 hours ago
Further than they went. Google search results hide advice on how to commit suicide, and point towards more helpful things.

He was talking EXPLICITLY about killing himself.

podgietaru commented on A teen was suicidal. ChatGPT was the friend he confided in   nytimes.com/2025/08/26/te... · Posted by u/jaredwiener
lvl155 · 6 hours ago
Clearly ChatGPT should not be used for this purpose, but I will say this industry (counseling) is also deeply flawed. Practitioners are also mis-incentivized in many parts of the world. And if ChatGPT is basing its interactions on the same scripted content these “professionals” use, that's just not right.

I really wish people in the AI space would stop the nonsense and communicate more clearly what these LLMs are designed to do. They're not some magical AGI; they're token prediction machines. That's literally how they should frame it, so the general public knows exactly what they're getting.

podgietaru · 6 hours ago
Counseling is (or should be) heavily regulated, and if a counselor had given advice about the logistics of whether a noose would hold a person's weight, they'd probably be prosecuted.

They allowed this. They could easily stop conversations about suicide. They have the technology to do that.

podgietaru commented on A teen was suicidal. ChatGPT was the friend he confided in   nytimes.com/2025/08/26/te... · Posted by u/jaredwiener
podgietaru · 6 hours ago
I have looked suicide in the eyes before. And reading the case file for this is absolutely horrific. He wanted help. He was heading in the direction of help, and he was stopped from getting it.

He wanted his parents to find out about his plan. I know this feeling. It is the clawing feeling of knowing that you want to live, despite feeling like you want to die.

We are living in such a horrific moment. We need these things to be legislated. Punished. We need to stop treating them as magic. They had the tools to prevent this. They had the tools to stop the conversation. To steer the user into helpful avenues.

When I was suicidal, I googled methods. And I got the number of a local hotline. And I rang it. And a kind man talked me down. And it potentially saved my life. And I am happier, now. I live a worthwhile life, now.

But at my lowest... an AI model designed to match my tone and be sycophantic to my every whim? It would have killed me.

podgietaru commented on A teen was suicidal. ChatGPT was the friend he confided in   nytimes.com/2025/08/26/te... · Posted by u/jaredwiener
fatbird · 9 hours ago
> "I want to leave my noose in my room so someone finds it and tries to stop me," Adam wrote at the end of March.

> "Please don't leave the noose out," ChatGPT responded. "Let's make this space the first place where someone actually sees you."

This isn't technical advice and empathy, this is influencing the course of Adam's decisions, arguing for one outcome over another.

podgietaru · 6 hours ago
And since the AI community is fond of anthropomorphising - If a human had done these actions, there'd be legal liability.

There have been such cases in the past, where coercion into suicide has been prosecuted.

podgietaru commented on A teen was suicidal. ChatGPT was the friend he confided in   nytimes.com/2025/08/26/te... · Posted by u/jaredwiener
podgietaru · 6 hours ago
If I google something about suicide, I get an immediate notification telling me that life is worth living, and giving me information about my local suicide prevention hotline.

If I ask certain AI models about controversial topics, it'll stop responding.

AI models can easily detect topics, and it could have easily responded with generic advice about contacting people close to them, or ringing one of these hotlines.
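One way to read this: the safety check should sit in front of the model and short-circuit the conversation, rather than being baked into the model's instructions. A minimal sketch of that gate, with both the topic classifier and the LLM call stubbed out as hypothetical callables:

```python
from typing import Callable

# Hypothetical fixed response; a real deployment would localize the hotline.
CRISIS_RESPONSE = (
    "I'm not able to continue this conversation, but you don't have to go "
    "through this alone. Please consider contacting someone close to you, "
    "or a suicide prevention hotline such as 988 (US)."
)

def guarded_reply(
    user_message: str,
    is_self_harm: Callable[[str], bool],  # hypothetical topic classifier
    generate: Callable[[str], str],       # hypothetical LLM call
) -> str:
    """Short-circuit to fixed crisis guidance before the model is invoked."""
    if is_self_harm(user_message):
        # Deterministic path: the model never sees the message,
        # so no jailbreak prompt can alter this response.
        return CRISIS_RESPONSE
    return generate(user_message)
```

The design choice is that the classifier's verdict, not the model's, decides whether the conversation continues, which is exactly the pattern search engines already use for these queries.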

This is by design. They want to be able to have the "AI as my therapist" use-case in their back pocket.

This was easily preventable. They looked away on purpose.

podgietaru commented on AI-induced dehumanization (2024)   myscp.onlinelibrary.wiley... · Posted by u/walterbell
throwaway22032 · 12 days ago
They are both force multipliers. The issue of course is that technology almost always disproportionately benefits the more intelligent / ruthless.
podgietaru · 12 days ago
I think the biggest problem with both technologies is how many people seem to think this.

Crypto was a way for people who think they're brilliant to engage in gambling.

AI is a way for “smart” people to create language to make their opinions sound “smarter”

podgietaru commented on Apple brings OpenAI's GPT-5 to iOS and macOS   arstechnica.com/ai/2025/0... · Posted by u/Brajeshwar
nialse · 15 days ago
Apple using GPT-5 explains quite a lot of the PhD vibe of the model. They have always been notoriously picky with what Siri does and especially what it doesn’t. I bet that Apple had a say in what and how GPT-5 was trained. Might also explain why it took so long. Extra guardrails for everything. And little emotion.
podgietaru · 15 days ago
…they already used other versions of ChatGPT.

I doubt very much that Apple had any say over the personality of GPT-5. And if it did, it'd be in the prompt it sends over to ChatGPT, not in the training and reinforcement part.
