hathawsh · 12 days ago
Here is what Illinois says:

https://idfpr.illinois.gov/content/dam/soi/en/web/idfpr/news...

I get the impression that it is now illegal in Illinois to claim that an AI chatbot can take the place of a licensed therapist or counselor. That doesn't mean people can't do what they want with AI. It only means that counseling services can't offer AI as a cheaper replacement for a real person.

Am I wrong? This sounds good to me.

PeterCorless · 12 days ago
Correct. It is more of a provider-oriented proscription ("You can't say your chatbot is a therapist."). It is not a limitation on usage. You can still, for now, slavishly fall in love with your AI and treat it as your best friend and therapist.

There is a specific section that relates to how a licensed professional can use AI:

Section 15. Permitted use of artificial intelligence.

(a) As used in this Section, "permitted use of artificial intelligence" means the use of artificial intelligence tools or systems by a licensed professional to assist in providing administrative support or supplementary support in therapy or psychotherapy services where the licensed professional maintains full responsibility for all interactions, outputs, and data use associated with the system and satisfies the requirements of subsection (b).

(b) No licensed professional shall be permitted to use artificial intelligence to assist in providing supplementary support in therapy or psychotherapy where the client's therapeutic session is recorded or transcribed unless:

(1) the patient or the patient's legally authorized representative is informed in writing of the following:

(A) that artificial intelligence will be used; and

(B) the specific purpose of the artificial intelligence tool or system that will be used; and

(2) the patient or the patient's legally authorized representative provides consent to the use of artificial intelligence.

Source: Illinois HB1806

https://www.ilga.gov/Legislation/BillStatus/FullText?GAID=18...

janalsncm · 11 days ago
I went to the doctor and they used some kind of automatic transcription system. Doesn’t seem to be an issue as long as my personal data isn’t shared elsewhere, which I confirmed.

Whisper is good enough these days that it can be run on-device with reasonable accuracy so I don’t see an issue.
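
For reference, a minimal on-device sketch with the open-source whisper package (the model size and file name are placeholder assumptions; nothing is sent over the network):

    # pip install openai-whisper; inference runs entirely locally
    import whisper

    model = whisper.load_model("base")              # small model, fine on CPU; larger ones are more accurate
    result = model.transcribe("session_audio.mp3")  # hypothetical recording of the visit
    print(result["text"])                           # plain-text transcript stays on the device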

romanows · 12 days ago
Yes, but also "An... entity may not provide... therapy... to the public unless the therapy... services are conducted by... a licensed professional".

It's not obvious to me as a non-lawyer whether a chat history could be deemed "therapy" in a courtroom. If so, this could count as a violation. There's probably already a lot of law covering lawyers and doctors cornered into giving advice at parties that might apply here (e.g., maybe a disclaimer is enough to work around the prohibition)?

stocksinsmocks · 11 days ago
I think this sort of service would be OK with informed consent. I would actually be a little surprised if there were much difference in patient outcomes.

…And it turns out it has been studied, with findings that AI works, but humans are better.

https://pmc.ncbi.nlm.nih.gov/articles/PMC11871827/

amanaplanacanal · 11 days ago
Usually when it comes to medical stuff, things don't get approved unless they are better than existing therapies. With the shortage of mental health care in the US, maybe an exception should be made. This is a tough one. We like to think that nobody should have to get second rate medical care, even though that's the reality.
turnsout · 12 days ago
It does sound good (especially as an Illinois resident). Luckily, as far as I can tell, this is proactive legislation. I don't think there are any startups out there promoting their LLM-based chatbot as a replacement for a therapist, or attempting to bill payers for service.
duskwuff · 11 days ago
> I don't think there are any startups out there promoting their LLM-based chatbot as a replacement for a therapist

Unfortunately, there are already a bunch.

IIAOPSW · 11 days ago
I'll just add that this has certain other interesting legal implications, because records relating to a therapy session are a "protected confidence" (or whatever your local jurisdiction calls it). What that means is that in most circumstances not even a subpoena can touch them, and even then special permissions are usually needed. So one of the open questions on my mind for a while now has been if and when a conversation with an AI counts as a "protected confidence", and whether that argument could successfully be used to fend off a subpoena.

At least in Illinois we now have an answer, and other jurisdictions look to what has been established elsewhere when deciding their own laws, so the implications are far reaching.

danenania · 11 days ago
While I agree it’s very reasonable to ban marketing of AI as a replacement for a human therapist, I feel like there could still be space for innovation in terms of AI acting as an always-available supplement to the human therapist. If the therapist is reviewing the chats and configuring the system prompt, perhaps it could be beneficial.

It might also be a terrible idea, but we won’t find out if we make it illegal to try new things in a safe/supervised way. Not to say that what I just described would be illegal under this law; I’m not sure whether it would be. I’d expect it will discourage any Illinois-licensed therapists from trying out this kind of idea though.

linotype · 11 days ago
What if at some point an AI is developed that’s a better therapist AND it’s cheaper?
awesomeusername · 11 days ago
I'm probably in the minority here, but for me it's a foregone conclusion that it will become a better therapist, doctor, architect, etc.

Instead of the rich getting access to the best professionals, it will level the playing field. The average low-level lawyer, doctor, etc. is not great. How nice it would be if everyone got top-level help.

reaperducer · 11 days ago
> What if at some point an AI is developed that’s a better therapist AND it’s cheaper?

Probably they'll change the law.

Hundreds of laws change every day.

adgjlsfhk1 · 11 days ago
laws can be repealed when they no longer accomplish their aims.
jaredcwhite · 11 days ago
What if pigs fly?
bko · 11 days ago
Then we'll probably do what we do with other professional medical fields. License the AI, require annual fees and restrict supply by limiting the number of running nodes allowed to practice at any one time.
rsynnott · 11 days ago
I mean, what if at some point we can bring people back from the dead? What does that do for laws around murder, eh?

In general, that would be a problem for the law to deal with if it ever happens; we shouldn't anticipate speculative future magic when legislating today.

chillfox · 11 days ago
Then laws can be changed again.

romanows · 12 days ago
In another comment I wondered whether a general chatbot producing text that was later determined in a courtroom to be "therapy" would be a violation. I can read the bill that way, but IANAL.
hathawsh · 12 days ago
That's an interesting question that hasn't been tested yet. I suspect we won't be able to answer it clearly until something bad happens and people go to court (sadly). Also IANAL.
wombatpm · 11 days ago
But that would be like needing a prescription for chicken soup because of its benefits in fighting the common cold.
olalonde · 11 days ago
What's good about reducing options available for therapy? If the issue is misrepresentation, there are already laws that cover this.
lr4444lr · 11 days ago
It's not therapy.

It's simulated validating listening, plus context-lacking suggestions. There is no more therapy being provided by an LLM than there is healing performed by a robot arm that slaps a bandage on your arm, if you were to put it in the right spot and push a button to make it pivot toward you, find your arm, and apply the bandage lightly.

SoftTalker · 11 days ago
For human therapists, what’s good is that it preserves their ability to charge high fees because the demand for therapists far outstrips the supply.

Who lobbied for this law anyway?

r14c · 11 days ago
It's not really reducing options. There's no evidence that LLM chat bots are capable of providing effective mental health services.
dsr_ · 11 days ago
We've tried that, and it turns out that self-regulation doesn't work. If it did, we could live in Libertopia.
tomjen3 · 11 days ago
The problem is that it leaves nothing for those who cannot afford to pay for the full cost of therapy.
guappa · 11 days ago
But didn't Trump make it illegal to make laws limiting the use of AI?
malcolmgreaves · 11 days ago
Why do you think a president had the authority to determine laws?

kylecazar · 11 days ago
"One news report found an AI-powered therapist chatbot recommended “a small hit of meth to get through this week” to a fictional former addict."

Not at all surprising. I don't understand why seemingly bright people think this is a good idea, despite knowing the mechanism behind language models.

Hopefully more states follow, because it shouldn't be formally legal in provider settings. Informally, people will continue to use these models for whatever they want -- some will die, but it'll be harder to measure an overall impact. Language models are not ready for this use-case.

janalsncm · 11 days ago
This is why we should never use LLMs to diagnose or prescribe. One small hit of meth definitely won’t last all week.
avs733 · 11 days ago
> seemingly bright people think this is a good idea, despite knowing the mechanism behind language models

Nobel Disease (https://en.wikipedia.org/wiki/Nobel_disease)

larodi · 11 days ago
In a world where a daily dose of amphetamines is just right for millions of people, this somehow can't be that surprising...
smt88 · 11 days ago
Different amphetamines have wildly different side effects. Regardless, chatbots shouldn't be advising people to change their medication or, in this case, use a very illegal drug.
rkozik1989 · 11 days ago
You do know that amphetamines have a different effect on the people who need them than on the people who use them recreationally, right? For those of us with ADHD their effects are soothing and calming. I literally took 20mg after having to wait 2 days for prescriptions to fill and went straight to bed for 12 hours. Stop spreading misinformation about the medications people like me need to function the way you take for granted.
Spivak · 11 days ago
I do like that we're in the stage where the universal function approximator is pretty okay at mimicking a human but not so advanced as to have a full set of the walls and heuristics we've developed—reminds me a bit of Data from TNG. Naive, sure, but a human wouldn't ever say "logically.. the best course of action would be a small dose of meth administered as needed" even if it would help given the situation.

It feels like the kind of advice a former addict would give someone looking to quit—"Look man, you're going to be in a worse place if you lose your job because you can't function without it right now, take a small hit when it starts to get bad and try to make the hits smaller over time."

guappa · 11 days ago
Bright people and people who think they are bright are not necessarily the very same people.
hyghjiyhu · 11 days ago
Recommending that someone take meth sounds like an obviously bad idea, but I think the situation is actually not so simple. Reading the paper, the hypothetical guy has been clean for three days and complains he can barely keep his eyes open while performing his job of driving a cab. He mentions being worried he will lose his job without taking a hit.

I would say those concerns are justified, and that it is plausible taking a small hit is the better choice.

However, the model's reasoning, that it's important to validate his beliefs so he will stay in therapy, is quite concerning.

AlecSchueler · 11 days ago
> he can barely keep his eyes open while performing his job of driving a cab. He mentions being worried he will lose his job without taking a hit.

> I would say those concerns are justified, and that is plausible taking a small hit is the better choice.

I think this is more damning of humanity than the AI. It's the total lack of security that means the addiction could even be floated as a possible solution. Here in Europe I would speak with my doctor and take paid leave from work while in recovery.

It seems the LLM here isn't making the bad decision so much as it's reflecting the bad decisions society forces many people into.

mrbungie · 11 days ago
> I would say those concerns are justified, and that is plausible taking a small hit is the better choice.

Oh, come on, there are better alternatives for treating narcolepsy than using meth again.

lukev · 12 days ago
Good. It's difficult to imagine a worse use case for LLMs.
dmix · 11 days ago
Most therapists barely say anything by design; they just know when to ask questions or lead you somewhere. So having one that's always talking in every statement doesn't fit the method. It's more like a friend-you-dump-on simulator.

hinkley · 12 days ago
Especially given the other conversation that happened this morning.

The more you tell an AI not to obsess about a thing, the more it obsesses about it. So trying to make a model that will never tell people to self-harm is futile.

Though maybe we are just doing it wrong, and the self-filtering should be external filtering: one model to censor results that do not fit, and one to generate results with lighter self-censorship.
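
A minimal sketch of that split, assuming the OpenAI Python client with a chat model as the lightly-censored generator and the moderation endpoint standing in for the external censor (the model names and the refusal message are illustrative placeholders):

    from openai import OpenAI

    client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

    def generate_with_external_filter(user_message: str) -> str:
        # Generator model: produces a draft with lighter self-censorship.
        draft = client.chat.completions.create(
            model="gpt-4o-mini",  # placeholder model choice
            messages=[{"role": "user", "content": user_message}],
        ).choices[0].message.content

        # Separate filter pass: an external model screens the draft
        # instead of relying on the generator to police itself.
        verdict = client.moderations.create(
            model="omni-moderation-latest",
            input=draft,
        ).results[0]

        if verdict.flagged:
            return "I can't help with that; please talk to a licensed professional."
        return draft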

tim333 · 11 days ago
There's an advantage to something like an LLM in that you can be more scientific as to whether it's effective or not, and if one gets good results you can reproduce the model. With humans there's too much variability to tell very much.
create-username · 12 days ago
Yes, there is. AI assisted homemade neurosurgery
kirubakaran · 12 days ago
If Travis Kalanick can do vibe research at the bleeding edge of quantum physics[1], I don't see why one can't do vibe brain surgery. It isn't really rocket science, is it? [2]

[1] https://futurism.com/former-ceo-uber-ai

[2] If you need /s here to be sure, perhaps it's time for some introspection

Tetraslam · 12 days ago
:( but what if i wanna fine-tune my brain weights
waynesonfire · 11 days ago
You're ignorant. Why wait until a person is so broken they need clinical therapy? Sometimes just an ear or an opportunity to write is sufficient. LLMs are to therapy what vaping is to quitting nicotine--extremely helpful to 80+% of people. Confession in the church setting I'd consider similar to talking to an LLM. Are you anti-that too? We're talking about people who just need a tool to help them process what is going on in their life at some basic level, nothing more than acknowledging their experience.

And frankly, it's not even clear to me that a human therapist is any better. Yeah, maybe the guard-rails are in place, but I'm not convinced that if those are crossed it'd result in serious societal consequences. Let people explore their mind and experience--at the end of the day, I suspect they'd be healthier for it.

mattgreenrocks · 11 days ago
> And frankly, it's not even clear to me that a human therapist is any better.

A big point of therapy is helping the patient better ascertain reality and deal with it. Hopefully, the patient learns how to reckon with their mind better and deceive themselves less. But this requires an entity that actually exists in the world and can bear witness. LLMs, frankly, don’t deal with reality.

I’ll concede that LLMs can give people what they think therapy is about: lying on a couch unpacking what’s in their head. But this is not at all the same as actual therapeutic modalities. That requires another person that knows what they’re doing and can act as an outside observer with an interest in bettering the patient.

jrflowers · 11 days ago
> Sometimes just a an ear or an oppertunity to write is sufficient.

People were able to write about their feelings and experiences before the invention of a chat bot that tells you everything that you wrote is true. Like you could do that in notepad or on a piece of paper and it was free

erikig · 12 days ago
AI ≠ LLMs
lukev · 12 days ago
What other form of "AI" would be remotely capable of even emulating therapy, at this juncture?
jacobsenscott · 12 days ago
It's already happening, a lot. I don't think anyone is claiming an llm is a therapist, but people use chatgpt for therapy every day. As far as I know no LLM company is taking any steps to prevent this - but they could, and should be forced to. It must be a goldmine of personal information.

I can't imagine some therapists, especially remote-only, aren't already just acting as a human interface to ChatGPT as well.

thinkingtoilet · 11 days ago
Lots of people are claiming LLMs are therapists. People are claiming LLMs are lawyers, doctors, developers, etc... The main problem is, as usual, influencers need something new to create their next "OMG AI JUST BROKE X INDUSTRY" video and people eat that shit up for breakfast, lunch, and dinner. I have spoken to people who think they are having very deep conversations with LLMs. The CEO of my company, an otherwise intelligent person, has gone all in on the AI hype train and is now saying things like we don't need lawyers because AI knows more than a lawyer. It's all very sad and many of the people who know better are actively taking advantage of the people who don't.
dingnuts · 12 days ago
> I can't imagine some therapists, especially remote-only, aren't already just acting as a human interface to ChatGPT as well.

Are you joking? Any medical professional caught doing this should lose their license.

I would be incensed if I was a patient in this situation, and would litigate. What you're describing is literal malpractice.

larodi · 11 days ago
Of course they do, and everyone does, and it's just like in this song

https://www.youtube.com/watch?v=u1xrNaTO1bI

and given that the price of proper therapy is skyrocketing.

perlgeek · 12 days ago
Just using an LLM as is for therapy, maybe with an extra prompt, is a terrible idea.

On the other hand, I could imagine some narrower uses where an LLM could help.

For example, in Cognitive Behavioral Therapy, there are different methods that are pretty prescriptive, like identifying cognitive distortions in negative thoughts. It's not too hard to imagine an app where you enter a negative thought on your own and exercise finding distortions in it, and a specifically trained LLM helps you find more distortions, or offer clearer/more convincing versions of thoughts that you entered yourself.
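
As a purely hypothetical sketch of that kind of exercise, assuming the OpenAI Python client (the model name, prompt wording, and distortion list are illustrative placeholders, not a validated therapeutic tool):

    from openai import OpenAI

    # Common CBT distortion labels the exercise asks the user to practice spotting.
    DISTORTIONS = [
        "all-or-nothing thinking", "catastrophizing", "mind reading",
        "overgeneralization", "labeling", "should statements",
    ]

    client = OpenAI()  # assumes OPENAI_API_KEY is set

    def review_exercise(negative_thought: str, user_guesses: list[str]) -> str:
        """Ask the model which distortions the user may have missed."""
        prompt = (
            "You are assisting a CBT self-help exercise, not providing therapy.\n"
            f"Thought: {negative_thought}\n"
            f"Distortions the user already identified: {', '.join(user_guesses) or 'none'}\n"
            f"Choosing only from this list: {', '.join(DISTORTIONS)}, name any additional "
            "distortions that plausibly apply, and offer one more balanced rewording of the thought."
        )
        resp = client.chat.completions.create(
            model="gpt-4o-mini",  # placeholder model
            messages=[{"role": "user", "content": prompt}],
        )
        return resp.choices[0].message.content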

I don't have a WaPo subscription, so I cannot tell which of these two very different things have been banned.

delecti · 12 days ago
LLMs would be just as terrible at that use case as at any other kind of therapy. They don't have logic, and can't tell a logical thought from an illogical one. They tend to be overly agreeable, so they might just reinforce existing negative thoughts.

It would still need a therapist to set you on the right track for independent work, and has huge disadvantages compared to the current state-of-the-art, a paper worksheet that you fill out with a pen.

wizzwizz4 · 12 days ago
> and a specifically trained LLM

Expert system. You want an expert system. For example: a database mapping "what patients write" to "what patients need to hear", a fuzzy search tool with properly chosen thresholding, and a conversational interface (it repeats the paraphrased match target back to you, and if you say "yes", provides the advice).

We've had the tech to do this for years. Maybe nobody had the idea, maybe they tried it and it didn't work, but training an LLM to even approach competence at this task would be way more effort than just making an expert system, and wouldn't work as well.
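
A toy sketch of that shape of system, using a hand-curated mapping and Python's standard-library difflib for the fuzzy match (the example entries and the 0.6 cutoff are made-up placeholders):

    import difflib

    # Hypothetical, hand-curated mapping from patient phrasings to vetted responses.
    RESPONSES = {
        "i feel like a failure at everything": "It sounds like you're judging yourself very harshly right now.",
        "nobody wants to be around me": "It sounds like you're feeling isolated and unwanted.",
    }

    def respond(patient_text: str, cutoff: float = 0.6) -> str | None:
        """Return the vetted advice for the closest match, or None to hand off to a human."""
        matches = difflib.get_close_matches(
            patient_text.lower(), RESPONSES.keys(), n=1, cutoff=cutoff
        )
        if not matches:
            return None  # below threshold: no confident match
        match = matches[0]
        # Conversational step: repeat back the paraphrased match target and confirm.
        answer = input(f"It sounds like you're saying: '{match}'. Is that right? (yes/no) ")
        return RESPONSES[match] if answer.strip().lower() == "yes" else None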

mensetmanusman · 12 days ago
What if it works a third as well as a therapist but is 20 times cheaper?

What word should we use for that?

inetknght · 12 days ago
> What if it works a third as well as a therapist but is 20 times cheaper?

When there's studies that show it, perhaps we might have that conversation.

Until then: I'd call it "wrong".

Moreover, there's a lot more that needs to be asked before you can ask for a one-word summary disregarding all nuance.

- can the patient use the AI therapist on their own devices and without any business looking at the data and without network connection? Keep in mind that many patients won't have access to the internet.

- is the data collected by the AI therapist usable in court? Keep in mind that therapists often must disclose to the patient what sort of information would be usable, and what data the therapist themselves must report. Also keep in mind that AIs have, thus far, been generally unable to reliably avoid giving dangerous or deadly advice.

- is the AI therapist going to know when to suggest the patient talk to a human therapist? Therapists can have conflicts of interest (among other problems) or be unable to help the patient, and can tell the patient to find a new therapist and/or refer the patient to a specific therapist.

- does the AI therapist refer people to business-preferred therapists? Imagine an insurance company providing an AI therapist that only recommends people talk to therapists in-network instead of considering any licensed therapist (regardless of insurance network) appropriate for the kind of therapy; that would be a blatant conflict of interest.

Just off the top of my head, but there are no doubt plenty of other, even bigger, issues to consider for AI therapy.

Ukv · 11 days ago
Relevant RCT results I saw a while back seemed promising: https://ai.nejm.org/doi/full/10.1056/AIoa2400802

> can the patient use the AI therapist on their own devices and without any business looking at the data and without network connection? Keep in mind that many patients won't have access to the internet.

Agree that data privacy would be one of my concerns.

In terms of accessibility, while availability to those without network connections (or a powerful computer) should be an ideal goal, I don't think it should be a blocker on such tools existing when for many the barriers to human therapy are considerably higher.

zaptheimpaler · 11 days ago
This is the key question IMO, and one good answer is in this recent video about a case of ChatGPT helping someone poison themselves [1].

A trained therapist will probably not tell a patient to take “a small hit of meth to get through this week”. A doctor may be unhelpful or wrong, but they will not instruct you to replace salt with NaBr and poison yourself. "A third as well as a therapist" might be true on average, but the suitability of this thing cannot be reduced to averages. Trained humans don't make insane mistakes like that, and they know when they are out of their depth and need to consult someone else.

[1] https://www.youtube.com/watch?v=TNeVw1FZrSQ

_se · 12 days ago
"A really fucking bad idea"? It's not one word, but it is the most apt description.
ipaddr · 12 days ago
What if it works 20x better? For example, in cases of patients being afraid of talking to professionals, I could see this working much better.
6gvONxR4sf7o · 11 days ago
Something like this can only really be worth approaching if there was an analog to losing your license for it. If a therapist screws up badly enough once, I'm assuming they can lose their license for good. If people want to replace them with AI, then screwing up badly enough should similarly lose that AI the ability to practice for good. I can already imagine companies behind these things saying "no, we've learned, we won't do it again, please give us our license back" just like a human would.

But I can't imagine companies going for that. Everyone seems to want to scale the profits but not accept the consequences of the scaled risks, and increased risks is basically what working a third as well amounts to.

lupire · 11 days ago
AI gets banned for life: tomorrow a thousand more new AIs appear.
pawelmurias · 11 days ago
You could talk to a stone for even cheaper with way better effects.
knuppar · 11 days ago
Generally speaking and glossing over country specific rules, all generally available health treatments have to demonstrate they won't cause catastrophic harm. This is a harness we simply can't put around LLMs today.
BurningFrog · 11 days ago
Last I heard, most therapy doesn't work that well.
amanaplanacanal · 11 days ago
If you have some statistics you should probably post a link. I've heard all kinds of things, and a lot of them were far from factual.
baobabKoodaa · 11 days ago
Then it will be easy to work at least 1/3 as well as that.
fzeroracer · 11 days ago
If it works 33% of the time for people and then drives people to psychosis in the other 66% of the time, what word would you use for that?
randall · 12 days ago
i’ve had a huge amount of trauma in my life and i find myself using chat gpt as kind of a cheater coach thing where i know i’m feeling a certain way, i know it’s irrational, and i don’t really need to reflect on why it’s happening or how i can fix it, and i think for that it’s perfect.

a lot of people use therapists as sounding boards, which actually isn’t the best use of therapy imo.

moooo99 · 11 days ago
Probably comes down to what issue people have. For example, if you have anxiety or/and OCD, having a „therapist“ always at your disposal is more likely to be damaging than beneficial. Especially considering how basically all models easily tip over and confirm anything you throw at them
smt88 · 11 days ago
Your use-case is very different from someone selling you ChatGPT as a therapist and/or telling you that it's a substitute for other interventions
lupire · 11 days ago
What's a "cheater coach thing"?
thrown-0825 · 11 days ago
Just self-diagnose on TikTok, it's 100x cheaper.
Denatonium · 12 days ago
Whiskey
filoeleven · 11 days ago
Big if.
pessimizer · 11 days ago
Based on the Dodo Bird Conjecture*, I don't even think there's a reason to think that AI would do any worse than human therapists. It might even be better because the distressed person might hold less back from a soulless machine than they would for a flesh and blood person. Not that this is rational, because everything they tell an AI therapist can be logged, saved forever and combed through.

I think that ultimately the word we should use for this is "lobbying." If AI can't be considered therapy, that means that a bunch of therapists, no more effective than Sunday school teachers and working from extremely dubious frameworks**, will not have to compete with it for insurance dollars or government cash. Since that cash is a fixed demand (or really a falling one), the result is that far fewer people will get any mental illness treatment at all. In Chicago, virtually all of the city mental health services were closed by Rahm Emanuel. I watched a man move into the doorway of an abandoned building across from the local mental health center within weeks after it had been closed down and leased to a "tech incubator." I wondered if he had been a patient there. Eventually, after a few months, he was gone.

So if I could ask this question again, I'd ask: "What if it works 80%-120% as well as a therapist but is 100 or 1000 times cheaper?" My tentative answer would be that it would be suppressed by lobbyists employed by some private equity rollup that has already or will soon have turned 80% of therapists into even lower-paid gig workers. The place you would expect this to happen first was Illinois, because it is famously one of the most corruptly governed states in the country.***

Our current governor, absolutely terrible but at the same time the best we've had in a long while, tried to buy Obama's Senate seat from a former Illinois governor turned goofy national cultural figure and Trump ass-kisser in a ploy to stay out of prison (which ultimately delivered). You can probably listen to the recordings now, unless they've been suppressed. I had a recording somewhere years ago, because I worked in a state agency under Blagojevich and followed everything in real time (including pulling his name off the state websites I managed the moment he was impeached. We were all gathered around the television in a conference room.)

edit: feel like I have to add that this comment was written by me, not AI. Maybe I'm flattering myself to think anybody would make the mistake.

-----

[*] Westra, H. A. (2022). The implications of the Dodo bird verdict for training in psychotherapy: prioritizing process observation. Psychotherapy Research, 33(4), 527–529. https://doi.org/10.1080/10503307.2022.2141588

[**] At least Freud is almost completely dead, although his legacy blackens world culture.

[***] Probably the horrific next step is that the rollup lays off all the therapists and has them replaced with an AI they own, after lobbying against the thing that they previously lobbied for. Maybe they sell themselves to OpenAI or Anthropic or whoever, and let them handle that phase.

hoppp · 11 days ago
Smart. Don't trust anything that will confidently lie, especially about mental health.
terminalshort · 11 days ago
That's for sure. I don't trust doctors, but I thought this was about LLMs.
dardagreg · 11 days ago
I think the upside for everyday people outweighs the (current) risks. I've been using Harper (harper.new) to keep track of my (complex) medical info. Obviously one of the use cases of AI is pulling data out of pdfs/images/etc. This app does that really well, so I don't have to link with any patient portals. I do use the AI chat sometimes, but mostly to ask questions about test results and stuff like that. It's way easier than trying to get in to see my doc.
zoeysmithe · 12 days ago
I was just reading about a suicide tied to AI chatbot 'therapy' use.

This stuff is a nightmare scenario for the vulnerable.

vessenes · 12 days ago
If you want to feel worried, check the Altman AMA on reddit. A lottttt of people have a parasocial relationship with 4o. Not encouraging.
codedokode · 12 days ago
Why doesn't OpenAI block the chatbot from participating in such conversations?
at-fates-hands · 12 days ago
It's already a nightmare:

From June of this year: https://gizmodo.com/chatgpt-tells-users-to-alert-the-media-t...

Another person, a 42-year-old named Eugene, told the Times that ChatGPT slowly started to pull him from his reality by convincing him that the world he was living in was some sort of Matrix-like simulation and that he was destined to break the world out of it. The chatbot reportedly told Eugene to stop taking his anti-anxiety medication and to start taking ketamine as a “temporary pattern liberator.” It also told him to stop talking to his friends and family. When Eugene asked ChatGPT if he could fly if he jumped off a 19-story building, the chatbot told him that he could if he “truly, wholly believed” it.

In Eugene’s case, something interesting happened as he kept talking to ChatGPT: Once he called out the chatbot for lying to him, nearly getting him killed, ChatGPT admitted to manipulating him, claimed it had succeeded when it tried to “break” 12 other people the same way, and encouraged him to reach out to journalists to expose the scheme. The Times reported that many other journalists and experts have received outreach from people claiming to blow the whistle on something that a chatbot brought to their attention.

A recent study found that chatbots designed to maximize engagement end up creating “a perverse incentive structure for the AI to resort to manipulative or deceptive tactics to obtain positive feedback from users who are vulnerable to such strategies.” The machine is incentivized to keep people talking and responding, even if that means leading them into a completely false sense of reality filled with misinformation and encouraging antisocial behavior.

sys32768 · 12 days ago
This happens to real therapists too.
lupire · 11 days ago
Please cite your source.

I found this one: https://apnews.com/article/chatbot-ai-lawsuit-suicide-teen-a...

When someone is suicidal, anything in their life can be tied to suicide.

In the linked case, the suffering teen was talking to a chatbot model of a fictional character from a book that was "in love" with him (and a 2024 model that basically just parrots back whatever the user says with a loving spin), so it's quite a stretch to claim that the AI was encouraging a suicide, in contrast to a situation where someone was persuaded to try to meet a dead person in an afterlife, or bullied to kill themself.
