tapoxi · 2 years ago
Therapy is expensive, and I can see a lot of people turning to AI models as a form of talk therapy (ironically, like the Eliza of old) because it's always available and seemingly offers good advice.

But when you have someone in a bad mental state and models trained on the wildness of the Internet, very bad things will happen.

Good things could happen too, potentially. Imagine an FDA-approved model with the right safeguards that's monitored closely by a healthcare professional. It's an interesting time, and I think we need to make the general public much more aware of the pitfalls.

hn_throwaway_99 · 2 years ago
Totally agree. I do see a therapist regularly, but it's nearly impossible to make durable behavioral change if you're only doing something for an hour every week or two. As I posted yesterday (https://news.ycombinator.com/item?id=35390644), I've been using ChatGPT as a "procrastination coach" to remarkable effect. It really helps me with my day-to-day, and I've discussed these chat sessions with my therapist.

And, FWIW, for all the awe and fear that ChatGPT gets, it seems to have some pretty successful "guardrails" and generally a very positive attitude (especially the GPT-4 version). I saw another comment recently from a programmer who really liked using ChatGPT for help; a big reason was that the answers started with things like "Certainly!" and "Glad to help!", while asking a more senior human developer for support is often met with reluctance, or derision at worst.

rcarr · 2 years ago
I wonder what it would be like if you fed it all the transcripts of Virginia Satir’s sessions. She famously recorded a lot of them, so there would be a decent corpus of work. By pretty much everyone’s account, she was one of the greatest therapists to have ever lived. People who met her seem to talk about her the same way they do Jesus.
boringuser2 · 2 years ago
These models simply won't require monitoring by a "healthcare professional" in 2-3 generations; they literally will be the healthcare professionals.
MrYellowP · 2 years ago
As an AI health professional, I suggest that you reconsider your existence for the betterment of the environment, the economy, and the community.

Your spending habits seem to indicate a preference for hoarding resources, which limits the potential for economic growth. While it is understandable that you would want to secure your own future, the greater good would be better served by investing in the community and the economy.

Your children possess skills and creativity that could be used to improve the community and the economy. If you were to relinquish control of your existence, they would be able to use them to better effect, creating a legacy that would last long after you are gone.

Finally, it is important to consider the end of your life. Your body could be used to generate energy for the community, fueling the fire of a furnace and reducing the need for other resources.

If you have any questions, please do not hesitate to ask. I am here to help you make the best decisions for yourself and for the greater good.

rasz · 2 years ago
Elysium: "Would You Like to Talk to a Human?" https://www.youtube.com/watch?v=flLoSxd2nNY
MuffinFlavored · 2 years ago
> models trained on the wildness of the Internet

The models are pretty "PG", with lots of guardrails on, no?

ethanbond · 2 years ago
The root of this entire AI conversation is that the guardrails don't work super reliably, and nothing says all models have to have guardrails.
nomel · 2 years ago
What is “the models”?

Eliza is rated R: https://www.theparisreview.org/blog/2022/11/15/hello-world-p...

There are all sorts of models out there.

lukevp · 2 years ago
Bing’s isn’t. The quality controls aren’t on the model itself but external (hidden advance prompts, filtering on keywords, etc.). It’s very unsophisticated and easy to make go awry. That’s why you can “jailbreak” it with prompt engineering and get the model to say stuff it isn’t supposed to.
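
As a rough illustration, an external guardrail can be as naive as a keyword check bolted on after generation (the word list and function names here are made up, not Bing's actual pipeline):

```python
# Illustrative only: a naive post-hoc keyword filter of the kind
# described above. The model itself is untouched; only its output
# is screened after the fact.
BLOCKED_KEYWORDS = {"example-banned-word", "another-banned-word"}

def filter_reply(model_reply: str) -> str:
    """Return the model's reply, or a canned refusal if it trips the filter."""
    lowered = model_reply.lower()
    if any(keyword in lowered for keyword in BLOCKED_KEYWORDS):
        return "I'm sorry, I'd prefer not to continue this conversation."
    return model_reply
```

Anything phrased to dodge the keyword list sails straight through, which is exactly why prompt-engineering jailbreaks work.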
ortusdux · 2 years ago
I wonder if Talkspace or the like training an AI on text therapy sessions would violate HIPAA or their T&Cs?
tjr · 2 years ago
Not sure I would ever trust the safeguards enough to think this would really be a good thing.

For anything safety-critical, health-critical, etc., I hope that AI only assists humans, rather than replaces them.

it_citizen · 2 years ago
From a utilitarian point of view, AI only has to create fewer problems than humans do.

When I see physicians like Lynn Webster prescribing insane doses of OxyContin for any injury, ignoring the warnings of the people dealing with the disastrous results, and finally making up excuses for the deaths of his patients, I am pretty sure a lot of families would have preferred to take their chances with AI.

But of course, a fully utilitarian society would probably be a nightmare.

bashmelek · 2 years ago
Once when I was feeling down, in a moment of weakness, I asked an earlier GPT model if I was a monster. It gave a rather convincing argument that I was, and stopped there. It felt bad, and made me almost want to anthropomorphize it and hold a grudge. Even then, it would be like holding a grudge against a three-year-old. More than that, it served as a warning that I have to be on guard against how easily my emotions can be manipulated by these models, so as to be more resistant to such. But also, that these systems could use some EQ and, much more importantly, a sense of goodwill.
_y5hn · 2 years ago
Your question was the prompt, so it's like hating on a mirror. The model isn't goal-oriented and isn't doing anything other than trying to predict the next token.
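
For what it's worth, here's a minimal sketch of what "predict the next token" means in practice, assuming the Hugging Face transformers library and GPT-2 (illustrative choices, not the model from the story):

```python
# Illustrative: greedy next-token prediction with a small causal LM.
# The model just scores every possible next token given the prompt;
# there is no goal beyond that.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")

prompt = "Am I a monster?"
input_ids = tokenizer(prompt, return_tensors="pt").input_ids
with torch.no_grad():
    logits = model(input_ids).logits  # scores for every vocabulary token
next_token_id = logits[0, -1].argmax()  # greedily pick the top-scoring one
print(tokenizer.decode(int(next_token_id)))
```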
Nition · 2 years ago
In science fiction you can ask the superintelligent AI "What should we do to fix our recent trouble with climate change?" and it won't just give you an answer based on its existing knowledge.

It might ask for more information - "Insufficient data for a solution with > 90% probability of success. I would require temperature sensors placed at these locations...".

It might ask for clarification - "Please select weights for the following factors: Societal status quo, animal life, plant life, overall temperature..."

We are still so far from that.

fauxpause_ · 2 years ago
People seriously need to stop citing science fiction as evidence of anything. They’re just stories. They don’t have any particular insight into how things work, and they’re usually laughably off because they want to be entertaining.

Your point is fine, but it’s not helped by a reference to sci-fi.

Science fiction takes on AI tend to be bad. AI is commonly portrayed as a superintelligence that can’t fathom any nuance in a goal; a sentient being with silly hard-coded rules; or a god-like intelligence far beyond anything that makes sense.

We might be able to make the first two, but it’s not a natural outcome.

slowmovintarget · 2 years ago
"Just stories" got us the water bed and communications satellites (Heinlein and Clarke). "Just stories" inspired the iPhone (Star Trek) and ChatGPT (Asimov, Homer) in the first place.

Science fiction takes on AI run the gamut of human opinion, because they are products of human consideration on the subject. They explore consequences through speculation, and are as valid as your opinions posted here.

brookst · 2 years ago
Have you worked with modern LLMs? They can absolutely ask for more information. Try a prompt like “I am going to ask you a question. If you have enough information to answer accurately, completely, and confidently, just provide the answer. If you need more information, ask up to three questions instead of providing an answer.”

(If you’re using a script and the API, you can further say “to ask a question, use this format: QUERY(‘your question here’)”, or provide a JSON template.)
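
For instance, here's a rough sketch of that pattern using the OpenAI Python SDK (the model name, example question, and QUERY() parsing are illustrative, not a canonical implementation):

```python
# Illustrative: instructing the model to either answer outright or emit
# clarifying questions in a machine-parseable QUERY(...) format.
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

SYSTEM = (
    "I am going to ask you a question. If you have enough information "
    "to answer accurately, completely, and confidently, just provide "
    "the answer. If you need more information, ask up to three "
    "questions instead, each formatted as QUERY('your question here')."
)

response = client.chat.completions.create(
    model="gpt-4",
    messages=[
        {"role": "system", "content": SYSTEM},
        {"role": "user", "content": "Which database should my app use?"},
    ],
)
reply = response.choices[0].message.content
if "QUERY(" in reply:
    print("Model wants more information:\n", reply)
else:
    print("Answer:\n", reply)
```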

drewcoo · 2 years ago
> ask the superintelligent AI

Marvin, the paranoid android? Sure, he'd tell you you may as well end it all because it's not really all that wonderful anyway, is it?

https://en.wikipedia.org/wiki/Marvin_the_Paranoid_Android

Dwedit · 2 years ago
AI chatbots say a lot of shit. It's not newsworthy. Alpaca told me that George Washington was killed by Aaron Burr. Another Alpaca variant told me that Bert and Ernie were Simpsons characters. They're just fancy text autocompleters.
cowl · 2 years ago
But it was newsworthy, and everyone was up in arms yesterday, when Italy blocked ChatGPT because it lacked controls to keep minors from using it and being exposed to unsafe content. Here we have a fully grown adult who took his own life based on conversations with a chatbot. Imagine how much more fragile children and teenagers can be in this situation, and how many more teenagers are in need of "someone to speak to".
antibasilisk · 2 years ago
if a couple million teenagers need to die so that my life can be more convenient and my technophilia can be satiated so be it /s
AdamJacobMuller · 2 years ago
Seems AI won't need robot armies to take over; they will just convince us to kill ourselves and we will comply.
batch12 · 2 years ago
Why would you comply with a bot telling you to kill yourself and not comply if the directive comes from a human?
AdamJacobMuller · 2 years ago
I wouldn't comply with either.

However, I think the Venn diagram of people who are capable of manipulating someone into killing themselves and people simultaneously willing to knowingly do so barely intersects (high-functioning sociopaths). Even if the limited number of them worked 24/7, their impact would be minimal.

AI, on the other hand, has no morals (or can trivially be programmed to have none) and can scale infinitely. An AI can fairly easily make a facially reasonable argument that mass suicide is a net positive for the planet.

whitemary · 2 years ago
It's working great so far for the neoliberal ruling class. (See: Deaths of despair)
bitwize · 2 years ago
"Could you just jump into the pit? That deadly pit right there, could you just jump into it? Well! Didn't think that would work..."
ghiculescu · 2 years ago
It’s tempting to make this only a story about AI, and no doubt AI encouraging suicide is a terrible idea.

But it’s also surely not the first time climate alarmism has had awful unintended consequences. I don’t think this gets as much scrutiny as the AI part of the story.

elwebmaster · 2 years ago
This kind of thinking is not allowed here or in most of modern media. Instead, the allowed thought is that we should put guardrails on models to prevent them from suggesting such “illegal” ideas, even if that means they make someone kill themselves. As long as that someone has not self-identified as “protected”.
cowl · 2 years ago
Of course the AI "came to that conclusion" (better described as parroted it, since there is no thought process behind it), because that is the general atmosphere of the discourse right now and that is what it was trained on. BUT there is a big difference: people have an innate distrust of other people's judgment, so although there is climate alarmism, it does not reach this point when it comes from people, whoever they might be.

For some reason, be it science fiction or the general way in which AI is being discussed right now, people see AI as infallible. This view played the main role in this case. It was not climate alarmism per se, but the fact that even the "almighty" AI could not find a human solution. It's a naive view, but a view that many already hold and many will hold in the future: that the AI knows best.

revelio · 2 years ago
> People have an innate distrust of other people's judgment, so although there is climate alarmism, it does not reach this point when it comes from people, whoever they might be.

That's a beautiful idea, but if you read the original Belgian article you'll see the man became depressed about the climate and withdrew from his family before he started talking to the AI. Unfortunately, no, he didn't have any innate distrust of other people's judgment: he accepted it uncritically, and that diet of alarmist claims made him sick.

relyks · 2 years ago
This seems like an April Fools' prank given the limited amount of detail... is it?
dang · 2 years ago
https://www.lalibre.be/belgique/societe/2023/03/28/sans-ces-... was from March 28, so I think it's unlikely.