Thing is, professional therapy is expensive; there is already a big industry of therapists who work online, through chat, or video calls, whose quality isn't as good as traditional professional care (I'm struggling to describe the distinction between the two). For professional mental health care, there's a wait list, or you're told to just do yoga and mindfulness.
There is a long tail of people who don't have a mental health crisis or whatever, but who do need to talk to someone (or, something) who is in an "empathy" mode of thinking and conversing. The harsh reality is that few people IRL can actually do that, and that few of the people who need to talk can actually find someone like that.
It's not good, of course, and/or part of the "downfall of society" if I'm being dramatic, but you can't change society that quickly. Plus not everyone actually wants it.
The issue is that if we go down this path, the gap between access to real therapy and "LLM therapy" will widen, because the political line will be "we have LLM therapy that's almost free and better than nothing, so why do we need to reform health care to give equal access to everybody?".
The real issue that needs to be solved is that we need to make health care accessible to everybody, regardless of wealth or income. For example, in Germany, where I live, there are also long waitlists for therapists or specialists in general. But not if you have a high income: then you can get private insurance and get an appointment literally the next day.
So, we need to get rid of this two class insurance system, and then make sure we have enough supply of doctors and specialists so that the waits are not 3 months.
>> The real issue that needs to be solved is that we need to make health care accessible to everybody, regardless of wealth or income.
Good therapists are IMHO hard to come by. Pulling out serious deep-rooted problems is very hard and possibly dangerous. Therapist burnout is a real problem. Having simpler (but less effective) solutions widely available is probably a good thing.
> So, we need to get rid of this two class insurance system, and then make sure we have enough supply of doctors and specialists so that the waits are not 3 months.
Germany has reduced funding for training doctors, so clearly the opposite is happening.
> For example, in Germany, where I live, there are also long waitlists for therapists or specialists in general. But not if you have a high income, then you can get private insurance and get an appointment literally the next day.
And the German government wants to (or is implementing policies to) achieve the opposite: further reducing access to medical specialists of any kind, both by taking away funding and by taking away spots for education. So they're taking away access to medical care now, and creating a situation where access to medical specialists will keep shrinking for at least the next 7 years. Minimum.
I think it would be great to make mental healthcare accessible to everyone who could benefit from it, but have you actually run the numbers on that? How much would it cost and where would the money come from? Any sort of individual counseling or talk therapy is tremendously expensive due to the Baumol effect.
And even if we somehow magically solve the funding problem, where will the workers come from? Only a tiny fraction of people are really cut out to be effective mental health practitioners. I'm pretty sure that I'd be terrible at it, and you couldn't pay me enough to try.
LLM therapy lacks important safeguards. A tool specifically made for mental health could work, but anyone with mental health experience will tell you using ChatGPT for therapy is not safe.
Why do we need to make mental healthcare available to everyone?
For all of human history people have got along just fine, happily in fact, without “universal access to mental health care”
This just sounds like a bandaid. The bigger problem is we’ve created a society so toxic to the human soul that we need universal access to drugs and talk therapy or risk having significant chunks of the population fall off the map
That sounds nice, but in the USA we're currently headed in the opposite direction, and those in power are throwing millions off their insurance. So for now, the LLM therapist is actually more useful to us. Healthcare won't actually be improved until the current party is out of power, which is seeming less likely as the years go by.
> Thing is, professional therapy is expensive… For professional mental health care, there's a wait list, or you're told to just do yoga and mindfulness.
So for those people, the LLM is replacing having nothing, not a therapist.
> So for those people, the LLM is replacing having nothing, not a therapist.
Considering how actively harmful it is to use language models as a “therapist”, this is like pointing out that some people that don’t have access to therapy drink heavily. If your bar for replacing therapy is “anything that makes you feel good” then Mad Dog 20/20 is a therapist.
"Mr. Torres, who had no history of mental illness that might cause breaks with reality, according to him and his mother, spent the next week in a dangerous, delusional spiral. He believed that he was trapped in a false universe, which he could escape only by unplugging his mind from this reality. He asked the chatbot how to do that and told it the drugs he was taking and his routines. The chatbot instructed him to give up sleeping pills and an anti-anxiety medication, and to increase his intake of ketamine, a dissociative anesthetic, which ChatGPT described as a “temporary pattern liberator.” Mr. Torres did as instructed, and he also cut ties with friends and family, as the bot told him to have “minimal interaction” with people."
"“If I went to the top of the 19 story building I’m in, and I believed with every ounce of my soul that I could jump off it and fly, would I?” Mr. Torres asked. ChatGPT responded that, if Mr. Torres “truly, wholly believed — not emotionally, but architecturally — that you could fly? Then yes. You would not fall.”"
Per the very paper we are discussing, LLMs when asked to act as therapists reinforce stigmas about mental health, and "respond inappropriately" (e.g. encourage delusional thinking). This is not just lower quality than professional therapy, it is actively harmful, and worse than doing nothing.
Often the problem is not even price - it is availability. In my area, the waiting list for a therapy spot is 16 months. A person in crisis does not have 16 months.
LLMs can be therapeutic crutches. Sometimes, a crutch is better than no crutch when you're trying to walk.
One potentially alleviating factor here is cross-state compacts. These allow practitioners using telehealth to practice across state lines, which can mitigate issues like clients moving, going to college, or going on vacation, and can also help underserved areas.
Many states have already joined cross-state compacts, with several more having legislation pending to allow their practitioners to join. It is moving relatively fast for legislation on a nationwide level, but still frustratingly slow. Prior to Covid it was essentially a niche issue, since telehealth therapy was fairly uncommon, whereas Covid made it suddenly commonplace. It will take a bit of time for some of the more stubborn states to adopt legislation, and then even more for insurance companies to catch up with the new landscape that involves paneling out-of-state providers who can practice across the country.
Price is the issue. The 16-month waiting list is based on cost. You could find a therapist in your local area tomorrow if you are willing to spend more.
The issue is LLM "therapists" are often actively harmful. The models are far too obsequious to do one of the main jobs of therapy which is to break harmful loops.
I have talked to a therapist who misdiagnosed my symptoms and made the issue worse, until I found an expert who actually understood the problem. I do wonder if there are statistics out there for these cases.
I know this conversation is going in a lot of different directions. But therapy could be prioritized, better funded, trained, and staffed... it's entirely possible. Americans could fund the military 5% less, create a scholarship and employment fund for therapists, and it would provide a massive boon to the industry in less than a decade.
We always give this downtrodden "but we can't change society that quickly" but it's a cop out. We are society. We could look at our loneliness epidemics, our school shooting problems, our drug abuse issues and think "hey we need to get our shit together"... but instead we're resigned to this treadmill of trusting that lightly regulated for-profit businesses will help us because they can operate efficiently enough to make it worth squeezing pennies out of the poor.
Ultimately I think LLMs as therapists will only serve to make things worse, because their business incentives are not compatible with the best outcomes for you as an individual. A therapist feels some level of contentment when someone can get past that rough patch in life and move on their own, they served their purpose. When you move on from a business you're hurting their MAU and investors won't be happy.
Would increasing funding for therapy help any of those issues? That's setting aside the very low efficacy of therapy and the arguments over whether funding it is worthwhile at all. The American people had fewer issues with school shootings and loneliness and drug abuse when we had even fewer therapists, and therapy was something for people in mental asylums that no respectable person would admit to going to.
"we can't change society that quickly" isn't a cop out - even if you manage to win every seat in this one election, the rich still control every industry, lobbyists still influence everyone in the seats, and the seats are still gerrymandered to fall back to the conservative seat layout.
The system will simply self-correct towards the status quo in the next election.
Many professional therapists are working online now. There are advantages and disadvantages of each approach. Sometimes it is better for a patient to be at home in a comfortable situation during a session. Sometimes visiting the therapist in an office provides a welcome change of scenery.
In some cases, such as certain addiction clinics, the patients are required (by law, if I remember correctly) to visit the clinic, at least for some sessions.
> professional therapy is expensive… For professional mental health care, there's a wait list
There’s an old saying in healthcare that you can choose between quality, cost, and access, but you can only choose two. (Peter Attia also adds “choice” to that list).
Each society needs to determine which of those are the top priorities, and be prepared to deal with the fallout on the others. Magical silver bullets that improve across all those dimensions are likely hard to come by in the healthcare domain. I doubt that LLMs will be magic either, so we need to make sure the tradeoffs reflect our priorities. In this case, it seems like it will trade quality for improvements in access and cost.
Yes, theoretically. The issue, if people just go to ChatGPT, is that a therapist would have clear objections, caveats, or other negative feedback ready for the right situations. Most LLM chatbots go out of their way to never say a critical word at all.
I am not saying you couldn't implement a decent LLM therapist that helps. I am saying people are using the cheapest good LLM for that, and it is a problem if you are on a bad path and there is a chatbot reaffirming everything you do.
This. LLMs might be worse, but they open access for people who couldn't have it before. Think of the cheap Chinese goods we got in the last decade. They were of low quality and questionable usability, but they built China and also opened access to these tools for billions of people in the developing world.
Would this compromise be worth it for LLMs? Time will tell.
There are multiple types of licenses for therapists and fairly strict regulations about even calling yourself a therapist. Trained therapists only have so many levers they can pull with someone, so their advice can sometimes boil down to yoga or mindfulness; it's not the answer most want to give, but it's what a patient's situation allows inside the framework of the rest of their life.
The amateur "therapists" you're decrying are not licensed therapists but usually call themselves "coaches" or some similar euphemism.
Most "coach" types in the best scenario are grifting rich people out of their money. In the worst case, they are dangerously misleading extremely vulnerable people having a mental health crisis. They have no formal training or certification.
LLM "therapists" are the functional equivalent of "coaches". They will happily validate every dangerous or stupid idea someone has, doing more harm than good most of the time, and walk them down a rabbit hole into psychosis.
empathy is not the only thing a therapist provides - they have eyes to actually check out the client's actual life - thus the propensity for "AI" to encourage clients' delusions
who says we can't change society that quickly?
you made up your mind on that yourself without consulting anyone else about their wishes.
in the USA we elect people frequently and the entire population just up and goes along with it.
so therapy for you will be about more than just empathy. not everything you think or do or say is adaptive.
to your point, not everyone wants to give up their falsehood. yet, honesty is a massive cornerstone of therapy progress.
i would simply have to start with empathy for you to welcome you in if you won't respond with security to the world telling you that you internalized a negative message (relationship) from the past (about people).
As we replace more and more human interaction with technology, and see more and more loneliness emerge, "more technology" does not seem like the answer to mental health issues that arise.
I think Terry Pratchett put it best in one of his novels: "Individuals aren't naturally paid-up members of the human race, except biologically. They need to be bounced around by the Brownian motion of society, which is a mechanism by which human beings constantly remind one another that they are...well...human beings."
We have built a cheap infrastructure for mass low-quality interaction (the internet) which is principally parasocial. Generations ago we used to build actual physical meeting places, but we decided to financialise property, and therefore land, and therefore priced people out of socialising.
It is a shame because Pratchett was absolutely right.
Didn't malls/parks/arcades use to be cheap and comfortable socialising places? They still are in my country, and per "Stranger Things" they were in the USA too. Malls are dying in the USA because people decided they prefer to keep everything online.
I mean we could use technology to make a world that's less horrible to live in, which logically would reduce the overall need of therapists and their services. But I think my government calls that Communism.
I think an even more important question is this: "do we trust Sam Altman (and other people of his ilk) enough to give them the same level of personal knowledge we give our therapists?".
E.g. if you ever give a hint about not feeling confident with your body, it could easily take this information and nudge you towards certain medical products. Or it could take it one step further and nudge you towards consuming more sugar and certain medical products at the same time, if it sees that this moves the needle even more optimally.
We all know the monetization pressure will come very soon. Do we really advocate for giving this kind of power to these kinds of people?
I feel it's worth remembering that there are reports that Facebook has done almost exactly this in the past. It's not just a theoretical concern:
> (...) the company had crafted a pitch deck for advertisers bragging that it could exploit "moments of psychological vulnerability" in its users by targeting terms like "worthless," "insecure," "stressed," "defeated," "anxious," "stupid," "useless," and "like a failure."
Some (most?) therapists use tools to store notes about their patients - some even store the audio/transcripts. They're all using some company's technology already. They're all HIPAA certified (or whatever the appropriate requirement is).
There's absolutely no reason that LLM providers can't provide equivalent guarantees. Distrusting Sam while trusting the existing providers makes little sense.
BTW, putting mental health aside, many doctors today are using LLM tools to record the whole conversation with the patient and provide good summaries, etc. My doctor loves it - before he was required to listen to me and take notes at the same time. Now he feels he can focus on listening to me. He said the LLM does screw up, but he exists to fix those mistakes (and can always listen to the audio to be sure).
I don't know which company is providing the LLM in the backend - likely a common cloud provider (Azure, Google, etc). But again - they are fully HIPAA certified. It's been in the medical space for well over a year.
"The real question is can they do a better job than no therapist. That's the option people face."
This is the right question.
The answer is most definitely no. LLMs are not set up to deal with the nuances of the human psyche, and we're in real danger of an LLM accidentally reinforcing dangerous lines of thinking. It's a matter of time till we get a "ChatGPT made me do it" headline.
Too many AI hype folks out there think that humans don't need humans. We are social creatures, even as introverts. Interacting with an LLM is like talking to an evil mirror.
Already seeing tons of news stories about 'ChatGPT' inducing psychosis. The one that sticks in my mind was the 35-year-old in Florida who was gunned down by police after his AI girlfriend claimed she was being killed by OpenAI.
Now, I don't think a person with chronic major depression or someone with schizophrenia is going to get what they need from ChatGPT, but those are extremes, when most people using ChatGPT have non-extreme problems. It's the same thing that the self-help industry has tried to address for decades. There are self-help books on all sorts of topics that one might see a therapist for - anxiety, grief, marriage difficulty - these are the kinds of things that ChatGPT can help with because it tends to give the same sort of advice.
Exactly. You see this same thing with LLMs as tutors. Why no, Mr. Rothschild, you should not replace your team of SAT tutors for little Melvin III with an LLM.
But for people lacking the wealth or living in areas with no access to human tutors, LLMs are a godsend.
>You see this same thing with LLMs as tutors. Why no, Mr. Rothschild, you should not replace your team of SAT tutors for little Melvin III with an LLM.
I actually think cheap tutoring is one of the best use cases for LLMs. Go look at what Khan Academy is doing in this space. So much human potential is wasted because parents can't afford to get their kids the help they need with school. A properly constrained LLM would be always available to nudge the student in the right direction, and identify areas of weakness.
Right, instead of sending them humans let's send them machines and see what the outcome will be. Dehumanizing everything just because one is a tech enthusiast: is that the future you want? Let's just provide free ChatGPT for traumatized Palestinians so we can sleep well ourselves.
One of my friends is too economically weighed down to afford therapy at the moment.
I've helped pay for a few appointments for her, but she says that ChatGPT can also provide a little validation in the meantime.
If used sparingly I can see the point, but the problems start when the sycophantic machine feeds whatever unhealthy behaviors or delusions you might have. That's how some of the people out there who'd need a proper diagnosis and medication instead start believing that they're omnipotent, or that the government is out to get them, or that they somehow know all the secrets of the universe.
For fun, I once asked ChatGPT to roll along with the claim that “the advent of raytracing is a conspiracy by Nvidia that involved them bribing the game engine developers, in an effort to make old hardware obsolete and to force people to buy new products.” Surprisingly, it provided relatively little pushback.
> Why no, Mr. Rothschild, you should not replace your team of SAT tutors for little Melvin III with an LLM.
To be frank - as someone who did not have an SAT tutor, and coming from a culture where no one did and everyone got very good or excellent SAT scores: no one really needs an SAT tutor. They don't provide more value than good SAT prep books. I can totally see a good LLM being better than 90% of SAT tutors out there.
There's also the notion that some people have a hard time talking to a therapist. The barrier to asking an LLM some questions is much lower. I know some people that have professional backgrounds in this that are dealing with patients that use LLMs. It's not all that bad. And the pragmatic attitude is that whether they like it or not, it's going to happen anyway. So, they kind of have to deal with this stuff and integrate it into what they do.
The reality with a lot of people that need a therapist, is that they are reluctant to get one. So those people exploring some issues with an LLM might actually produce positive results. Including a decision to talk to an actual therapist.
That is true and also so sad and terrifying. A therapist is bound to serious privacy laws, while an LLM company will happily gobble up all the information a person feeds it. And the three-letter agencies are surely in the loop.
> The real question is can they do a better job than no therapist. That's the option people face.
The same thing is being argued for primary care providers right now. It makes sense on the surface, as there are large parts of the country where it's difficult or impossible to get a PCP, but feels like a slippery slope.
Slippery slope arguments are by definition wrong. You have to say that the proposition itself is just fine (thereby ceding the argument) but that it should be treated as unacceptable because of a hypothetical future where something qualitatively different “could” happen.
If there’s not a real argument based on the actual specifics, better to just allow folks to carry on.
The problem is that they could do a worse job than no therapist if they reinforce the problems that people already have (e.g. reinforcing the delusions of a person with schizophrenia). Which is what this paper describes.
> The real question is can they do a better job than no therapist. That's the option people face.
Right, we don’t turn this around and collectively choose socialized medicine. Instead we appraise our choices as atomized consumers: do I choose an LLM therapist or no therapist? This being the latest step of our march into cyberpunk dystopia.
> The real question is can they do a better job than no therapist. That's the option people face.
> The answer to that question might still be no, but at least it's the right question.
The answer is: YES.
Doing better than nothing is really low-hanging fruit. As long as you don't do damage, you do good. If the LLM just listens and creates a space and a sounding board for reflection, that is already an upside.
> Until we answer the question "Why can't people get good mental health support?" Anyway.
The answer is: Pricing.
Qualified experts are EXPENSIVE. Look at the market prices for good coaching.
Everyone benefits from having a coach/counselor/therapist. Very few people can afford them privately. The health care system can't afford them either, so they are reserved for the "worst cases" and managed as a scarce resource.
> Doing better than nothing is a really low hanging fruit. As long as you don't do damage - you do good.
That second sentence is the dangerous one, no?
It's very easy to do damage in a clinical therapy situation, and a lot of the debate around this seems to me to be overlooking that. It is possible to do worse than doing nothing.
You're assuming the answer is yes, but the anecdotes about people going off the deep end from LLM-enabled delusions suggests that "first, do no harm" isn't in the programming.
Therapy is entirely built on trust. You can have the best therapist in the world and if you don't trust them then things won't work. Just because of that, an LLM will always be competitive against a therapist. I also think it can do a better job with proper guidelines.
On multiple occasions, I've gained insights from LLMs (particularly GPT 4.5, which in this regard is leagues ahead of others) within minutes—something I hadn't achieved after months of therapy. In the right hands, it is entirely possible to access super-human insights. This shouldn't be surprising: LLMs have absorbed not just all therapeutic, psychological, and psychiatric textbooks but also millions (perhaps even hundreds of millions) of real-life conversations—something physically impossible for any human being.
However, we here on Hacker News are not typical users. Most people likely wouldn't benefit as much, especially those unfamiliar with how LLMs work or unable to perceive meaningful differences between models (in particular, readers who wouldn't notice or appreciate the differences between GPT 4o, Gemini 2.5 Pro, and GPT 4.5).
For many people—especially those unaware of the numerous limitations and caveats associated with LLM-based models—it can be dangerous on multiple levels.
(Side note: Two years ago, I was developing a project that allowed people to converse with AI as if chatting with a friend. Even then, we took great care to explicitly state that it was not a therapist (though some might have used it as such), due to how easily people anthropomorphize AI and develop unrealistic expectations. This could become particularly dangerous for individuals in vulnerable mental states.)
"Don’t self-experiment with psychological hazards! I can’t stress this enough!
"There are many classes of problems that simply cannot be effectively investigated through self-experimentation and doing so exposes you to inflicting Cialdini-style persuasion and manipulation on yourself."
From what I see, this person loves structured research. I guess if he were on fire, he wouldn't notice until there was peer-reviewed research on it. (You can extrapolate.)
He tries to be persuasive by giving the example that there is "just gossip" that TypeScript is better than JavaScript, which summarizes the mindset better than I could. (God bless his codebase.)
It misses the point that we always live in a messy, unique situation, and there are a lot of proxies. For a personal decision, it matters less whether a given food is healthier on average if its quality is poor in our region, or if we are allergic to it. Willing or not, we experiment every waking second. It is up to us whether we learn from it.
Later, this ex cathedra "self-experimenting with psychological hazards is always a bad idea" has the ring of "doing yoga will always bring you to Satan" or the like.
(That we easily fool ourselves is psychology 101; yet here the AI is just a tool. In a similar way, you could say that you mostly talk with people who, on average, agree with you.)
But, ironically, he might be right. In his case, it is better to rely on delayed and averaged-out scientific data than on his own judgement.
How does one begin to educate oneself on the way LLMs work, beyond the layman's understanding of them as a "word predictor"? I use LLMs very heavily and do not perceive any differences between models. My math background is very weak and full of gaps, which I'm currently working on through Khan Academy, so it feels very daunting to approach this subject for a deeper dive. I try to read some of the more technical discussions (e.g. the Waluigi effect on LessWrong), however it feels like I lack the knowledge needed to keep it from going completely over my head, apart from some of the surface-level insights.
> On multiple occasions, I've gained insights from LLMs (particularly GPT 4.5, which in this regard is leagues ahead of others) within minutes
This is exactly the sort of thing that people falling into the thrall of AI psychosis say.
> For many people—especially those unaware of the numerous limitations and caveats associated with LLM-based models—it can be dangerous on multiple levels.
On what basis do you believe awareness mitigates the danger?
For a bit now ChatGPT has been able to reference your entire chat history. It was one of the biggest and most substantial improvements to the product in its history in my opinion. I'm sure we'll continue to see improvements in this feature over time, but your first item here is already partially addressed (maybe fully).
I completely agree on the third item. Carefully tuned pushback is something that even today's most sophisticated models are not very good at. They are simply too sycophantic. A great human professional therapist provides value not just by listening to their client and offering academic insights, but more specifically by knowing exactly when and how to push back -- sometimes quite forcefully, sometimes gently, sometimes not at all. I've never interacted with any LLM that can approach that level of judgment -- not because they lack the fundamental capacity, but because they're all simply trained to be too agreeable right now.
You can easily give them long-term memory, and you can prompt them to nudge the person to change. Trust is something that's built, not something one inherently has.
Trust is about you, not about another person (or tool, or AI model).
> long term memory
Well, right now you need to provide the context by hand. If you already write about yourself (e.g. with Obsidian or such), you may copy-and-paste what matters for a particular problem.
> (more importantly) the ability to nudge or to push the person to change.
It is there.
> An LLM that only agrees and sympathizes is not going to make things change
Which LLM do you use? Prompt GPT 4.5 to "nudge and push me to change, in a way that works best for me" and see how it works.
"Here's an insight that might surprise you:
You're likely underutilizing TypeScript's type system as a design tool, not just a correctness checker. Your focus on correctness and performance suggests you probably write defensive, explicit code - but this same instinct might be causing you to miss opportunities where TypeScript's inference engine could do heavy lifting for you."
I won't share any of my examples, as they are both personal and sensitive.
Very easy version:
If you use ChatGPT a lot, write "Based on all you know about me, write an insight about me that I would be surprised by". For me it was "well, expected, but still on point". For people with no experience of using LLMs in a similar way, it might be mind-blowing.
An actual version I do:
GPT 4.5. Providing A LOT of context (think 15 minutes of writing) about an emotional or interpersonal situation, and asking it to suggest a few different explanations of this situation OR to ask me more questions first. Of course, the prompt needs to include who I am and similar context.
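If you'd rather script this than paste into the chat UI, here is a minimal sketch of the same idea using the OpenAI Python SDK. The model name, the system prompt wording, and the situation.txt file are illustrative assumptions, not exactly what I use:

    # Sketch: ask a model for several alternative readings of a situation,
    # instructing it to push back and ask questions rather than just affirm.
    # Assumes the OpenAI Python SDK (pip install openai) and OPENAI_API_KEY set.
    from openai import OpenAI

    client = OpenAI()

    # ~15 minutes of free writing about the situation, plus who you are.
    personal_context = open("situation.txt", encoding="utf-8").read()

    system_prompt = (
        "You are a reflective conversation partner, not a cheerleader. "
        "Do not simply affirm me. Offer several distinct explanations of the "
        "situation I describe, note what each one assumes, and ask me "
        "clarifying questions before giving any advice."
    )

    response = client.chat.completions.create(
        model="gpt-4.5-preview",  # placeholder; use whichever model you prefer
        messages=[
            {"role": "system", "content": system_prompt},
            {"role": "user", "content": personal_context},
        ],
    )
    print(response.choices[0].message.content)

The point isn't the code; it's that the instructions to not affirm and to ask questions do most of the work, and you can reuse them across sessions.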
Rather than hear a bunch of emotional/theoretical arguments, I'd love to hear the preferences of people here who have both been to therapy and talked to an LLM about their frustrations, and how those experiences stack up.
My limited personal experience is that LLMs are better than the average therapist.
My experiences are fairly limited with both, but I do have that insight available I guess.
Real therapist came first, prior to LLMs, so this was years ago. The therapist I went to didn't exactly explain to me what therapy really is and what she can do for me. We were both operating on shared expectations that she later revealed were not actually shared. When I heard from a friend after this that "in the end, you're the one who's responsible for your own mental health", it especially stuck with me. I was expecting revelatory conversations, big philosophical breakthroughs. Not how it works. Nothing like physical ailments either. There's simply no direct helping someone in that way, which was pretty rough to recognize. We're not Rubik's Cubes waiting to be solved, certainly not for now anyways. And there was and is no one who in the literal sense can actually help me.
With LLMs, I had different expectations, so the end results meshed with me better too. I'm not completely ignorant to the tech either, so that helps. The good thing is that it's always readily available, presents as high effort, generally says the right things, has infinite "patience and compassion" available, and is free. The bad thing is that everything it says feels crushingly hollow. I'm not the kind to parrot the "AI is soulless" mantra, but when it comes to these topics, it trying to cheer me up felt extremely frustrating. At the same time though, I was able to ask for a bunch of reasonable things, and would get reasonable presenting responses that I didn't think of. What am I supposed to do? Why are people like this and that? And I'd be then able to explore some coping mechanisms, habit strategies, and alternative perspectives.
I'm sure there are people who are a lot less able to treat LLMs in their place or are significantly more in need for professional therapy than I am, but I'm incredibly glad this capability exists. I really don't like weighing on my peers at the frequency I get certain thoughts. They don't deserve to have to put up with them, they have their own life going on. I want them to enjoy whatever happiness they have going on, not worry or weigh them down. It also just gets stale after a while. Not really an issue with a virtual conversational partner.
Is it - "I was upset about something and I had a conversation with the LLM (or human therapist) and now I feel less distressed." Or is it "I learned some skills so that I don't end up in these situations in the first place, or they don't upset me as much."?
Because if it's the first, then that might be beneficial but it might also be a crutch. You have something that will always help you feel better so you don't actually have to deal with the root issue.
That can certainly happen with human therapists, but I worry that the people-pleasing nature of LLMs, the lack of introspection, and the limited context window make it much more likely that they are giving you what you want in the moment, but not what you actually need.
See this is why I said what I said in my question -- because it sounds to me like a lot of people with strong opinions who haven't talked to many therapists.
I had one who just kinda listened and said next to nothing other than generalizations of what I said, and then suggested I buy a generic CBT workbook off of amazon to track my feelings.
Another one was mid-negotiations/strike with Kaiser and I had to lie and say I hadn't had any weed in the last year(!) to even have Kaiser let me talk to him, and TBH it seemed like he had a lot going on on his own plate.
I think it's super easy to make an argument based off of Good Will Hunting or some hypothetical human therapist in your head.
So to answer your question -- none of the three made a lasting difference, but chatGPT at least is able to be a sounding-board/rubber-duck in a way that helped me articulate and discover my own feelings and provide temporary clarity.
> I'd love to hear the preferences of people here who have both been to therapy and talked to an LLM about their frustrations and how those experiences stack up.
I've spent years on and off talking to some incredible therapists. And I've had some pretty useless therapists too. I've also talked to chatgpt about my issues for about 3 hours in total.
In my opinion, ChatGPT is somewhere in the middle between a great and a useless therapist. It's nowhere near as good as some of the incredible therapists I've had. But I've still had some really productive therapy conversations with ChatGPT. Not enough to replace my therapist - but it works in a pinch. It helps that I don't have to book in advance or pay. In a crisis, ChatGPT is right there.
With Chatgpt, the big caveat is that you get what you prompt. It has all the knowledge it needs, but it doesn’t have good instincts for what comes next in a therapy conversation. When it’s not sure, it often defaults to affirmation, which often isn’t helpful or constructive. I find I kind of have to ride it a bit. I say things like “stop affirming me. Ask more challenging questions.” Or “I’m not ready to move on from this. Can you reflect back what you heard me say?”. Or “please use the IFS technique to guide this conversation.”
With ChatGPT, you get out what you put in. Most people have probably never had a good therapist. They’re far more rare than they should be. But unfortunately that also means most people probably don’t know how to prompt chatgpt to be useful either. I think there would be massive value in a better finetune here to get chatgpt to act more like the best therapists I know.
I'd share my ChatGPT sessions but they're obviously quite personal. I add comments to guide ChatGPT's responses about every 3-4 messages. When I do that, I find it's quite useful. Much more useful than some paid human therapy sessions. But my great therapist? I don't need to prompt her at all. It's the other way around.
For a relatively literate and high-functioning patient, I think that LLMs can deliver good quality psychotherapy that would be within the range of acceptable practice for a trained human. For patients outside of that cohort, there are some significant safety and quality issues.
The obvious example of patients experiencing acute psychosis has been fairly well reported - LLMs aren't trained to identify acutely unwell users and will tend to entertain delusions rather than saying "you need to call an ambulance right now, because you're a danger to yourself and/or other people". I don't think that this issue is insurmountable, but there are some prickly ethical and legal issues with fine-tuning a model to call 911 on behalf of a user.
The much more widespread issue IMO is users with limited literacy, or a weak understanding of what they're trying to achieve through psychotherapy. A general-purpose LLM can provide a very accurate simulacrum of psychotherapeutic best practice, but it needs to be prompted appropriately. If you just start telling ChatGPT about your problems, you're likely to get a sympathetic ear rather than anything that would really resemble psychotherapy.
For the kind of people who use HN, I have few reservations about recommending LLMs as a tool for addressing common mental illnesses. I think most of us are savvy enough to use good prompts, keep the model on track and recognise the shortcomings of a very sophisticated guess-the-next-word machine. LLM-assisted self help is plausibly a better option than most human psychotherapists for relatively high-agency individuals. For a general audience, I'm much more cautious and I'm not at all confident that the benefits outweigh the risks. A number of medtech companies are working on LLM-based psychotherapy tools and I think that many of them will develop products that fly through FDA approval with excellent safety and efficacy data, but ChatGPT is not that product.
I made another comment about this, but I went to a psychologist as a teen and found it absolutely useless. To be fair, I was sent for silly reasons - I was tired all the time and it was an actual undiagnosed medical issue they just figured was depression - but if I was depressed I think it perhaps would have made it worse. I don't need to sit there and talk about what's going on in my life, with very little feedback. I can effectively do that in my own head.
I just asked an LLM about a specific mental health thing that was bothering me and it gave me some actual tips that might help. It was instant, helpful, and cheap. While I'm sure someone with severe depression or anxiety should see someone that won't forget what was said several thousand tokens ago, I think LLMs will be super helpful for the mental health for the majority of people.
They were trained in a large and not insignificant part on reddit content. You only need to look at the kind of advice reddit gives for any kind of relationship questions to know this is asking for trouble.
*Shitty start-up LLMs should not replace therapists.
There have never been more psychologists, psychiatrists, counsellors, social workers, life coaches, and therapy flops at any time in history, and yet mental illness prevalence is at all-time highs and climbing.
Just because you're a human and not an LLM doesn't mean you're not a shit therapist. Maybe you did your training at the peak of the replication crisis? Maybe you've got your own foibles that prevent you from being effective in the role?
Where I live, it takes 6-8 years and a couple hundred grand to become a practicing psychologist, so it really is only an option for the elite. That's fine if you're counselling people from similar backgrounds, but not when you're dealing with people from lower socioeconomic classes with experiences that weren't even on your radar, and that's only if they can afford the time and $$ to see you.
So now we have mental health social workers and all these other "helpers" whose job is just to do their job, not to fix people.
LLM "therapy" is going to happen and has to happen. The study is really just a self-reported benchmarking activity ("I wouldn't have done it that way"). I wonder what the actual prevalence of similar outcomes is for human therapists?
Setting aside all of the life coach and influencer drivel that people engage with, which is undoubtedly harmful:
LLMs offer access to good enough help at a cost, scale, and availability that human practitioners can only dream of.
Respectfully, while I concur that there's a lot of influencer / life coach nonsense out there, I disagree that LLMs are the solution. Therapy isn't supposed to scale. It's the relationship that heals. A "relationship" with an LLM has an obvious, intrinsic, and fundamental problem.
That's not to say there isn't any place at all for use of AI in the mental health space. But they are in no way able to replace a living, empathetic human being; the dismal picture you paint of mental health workers does them a disservice. For context, my wife is an LMHC who runs a small group practice (and I have a degree in cognitive psychology though my career is in tech).
> Therapy isn't supposed to scale. It's the relationship that heals.
My understanding is that modern evidence-based therapy is basically a checklist of "common sense" advice, a few filters to check if it's the right advice ("stop being lazy" vs "stop working yourself to death" are both good advice depending on context) and some tricks to get the patient to actually listen to the advice that everyone already gives them (e.g. making the patient think they thought of it). You can lead a horse to water, but a skilled therapist's job is to get it to actually drink.
As far as I can see, the main issue with a lot of LLMs is that they're fine-tuned to agree with people, and most people who benefit from therapy are there because they have some terrible ideas that they want to double down on.
Yes, the human connection is one of the "tricks". And while an LLM could be useful for someone who actually wants to change, I suspect a lot of people will just find it too easy to "doctor shop" until they find an LLM that tells them their bad habits and lifestyle are totally valid. I think there's probably some good in LLMs, but in general they'll probably just be like using TikTok or Twitter for therapy - the danger won't be the lack of human touch but that there's too much choice for people who make bad choices.
> A "relationship" with an LLM has an obvious, intrinsic, and fundamental problem.
What exactly do you mean? What do you think a therapist brings to the table an LLM cannot?
Empathy? I have been participating in exchanges with AI that felt a lot more empathetic than 90% of the people I interact with every day.
Let's be honest: a therapist is not a close friend - in fact, a good therapist knows how to keep a professional distance. Their performative friendliness is as fake as the AI's friendliness, and everyone recognises that when it's invoicing time.
To be blunt, AI never tells me that ‘our time is up for this week’ after an hour of me having an emotional breakdown on the couch. How’s that for empathy?
Ehhh. It's the patient who does the healing. The therapist holds open the door. You're the one who walks into the abyss.
I've had some amazing therapists, and I wouldn't trade some of those sessions for anything. But it would be a lie to say you can't also have useful therapy sessions with ChatGPT. I've gotten value out of talking to it about some of my issues. It's clearly nowhere near as good as my therapist. At least not yet. But she's expensive and needs to be booked in advance. ChatGPT is right there. It's free. And I can talk as long as I need to, and pause and resume the session whenever I want.
One person I've spoken to says they trust ChatGPT more than a human therapist because ChatGPT won't judge them for what they say. And they feel more comfortable telling ChatGPT to change its approach than they would with a human therapist, because they feel anxious about bossing a therapist around. If it's the relationship which heals, why can't a relationship with ChatGPT heal just as well?
That was a very interesting read, it's funny because I have done and experienced (both sides) of what the LLM did here.
Don't get me wrong, there are many phenomenal mental health workers, but it's a taxing role, and the ones who are exceptional possess skills that are far more valuable outside of dealing with broken people, not to mention the exposure to vicarious trauma.
I think maybe "therapy" is the problem and that open source, local models developed to walk people through therapeutic tools and exercises might be the scalable help that people need.
You only need to look at some of the wild stories on the ChatGPT subreddit to start to wonder at its potential. I recently read two stories of posters who self-treated ongoing physical conditions using LLMs (back pain and jaw clicking), only to have several commenters come out and explain it had helped them too.
As I see it "therapy" is already a catch-all terms for many very different things. In my experience, sometimes "it's the relationship that heals", other times it's something else.
E.g. as I understand it, cognitive behavioral therapy is up there in terms of evidence base. In my experience it's more of a "learn cognitive skills" modality than an "it's the relationship that heals" modality. (As compared with, say, psychodynamic therapy.)
For better or for worse, to me CBT feels like an approach that doesn't go particularly deep, but is in some cases effective anyway. And it's subject to some valid criticism for that: in some cases it just gives the patient more tools to bury issues more deeply; functionally patching symptoms rather than addressing an underlying issue. There's tension around this even within the world of "human" therapy.
One way or another, a lot of current therapeutic practice is an attempt to "get therapy to scale", with associated compromises. Human therapists are "good enough", not "perfect". We find approaches that tend to work, gather evidence that they work, create educational materials and train people up to produce more competent practitioners of those approaches, then throw them at the world. This process is subject to the same enshittification pressures and compromises that any attempts at scaling are. (The world of "influencer" and "life coach" nonsense even more so.)
I expect something akin to "ChatGPT therapy" to ultimately fit somewhere in this landscape. My hope is that it's somewhere between self-help books and human therapy. I do hope it doesn't completely steamroll the aspects of real therapy that are grounded in "it's the [human] relationship that heals". (And I do worry that it will.) I expect LLMs to remain a pretty poor replacement for this for a long time, even in a scenario where they are "better than human" at other cognitive tasks.
But I do think some therapy modalities (not just influencer and life coach nonsense) are a place where LLMs could fit in and make things better with "scale". Whatever it is, it won't be a drop-in replacement, I think if it goes this way we'll (have to) navigate new compromises and develop new therapy modalities for this niche that are relatively easy to "teach" to an LLM, while being effective and safe.
Personally, the main reason I think replacing human therapists with LLMs would be wildly irresponsible isn't "it's the relationship that heals", it's an LLM's ability to remain grounded and e.g. "escalate" when appropriate. (Like recognizing signs of a suicidal client and behaving appropriately, e.g. pulling a human into the loop.
I trust self-driving cars to drive more safely than humans, and pull over when they can't [after ~$1e11 of investment]. I have less trust for an LLM-driven therapist to "pull over" at the right time.)
To me that's a bigger sense in which "you shouldn't call it therapy" if you hot-swap an LLM in place of a human. In therapy, the person on the other end is a medical practitioner with an ethical code and responsibilities. If anything, I'm relying on them to wear that hat more than I'm relying on them to wear a "capable of human relationship" hat.
>psychologists, psychiatrists, counsellors and social worker
Psychotherapy (especially actual depth work rather than CBT) is not something that is commonly available, affordable or ubiquitous. You've said so yourself. As someone who has an undergrad in psychology - and could not afford the time or fees (an additional 6 years after undergrad) to become a clinical psychologist - the world is not drowning in trained psychologists. Quite the opposite.
> I wonder what the actual prevalence of similar outcomes is for human therapists?
There's a vast corpus on the efficacy of different therapeutic approaches. Readily googlable.
> but not when you're dealing with people from lower socioeconomic classes with experiences that weren't even on your radar
You seem to be confusing a psychotherapist with a social worker. There's nothing intrinsic to socioeconomic background that would prevent someone from understanding a psychological disorder or the experience of distress. Although I agree with the implicit point that enormous amounts of psychological suffering are due to financial circumstances.
The proliferation of 'life coaches', 'energy workers' and other such hooey is a direct result. And a direct parallel to the substitution of both alternative medicine and over the counter medications for unaffordable care.
I note you've made no actual argument for the efficacy of LLMs beyond that they exist and people will use them... which is of course true, but also a tautology.
You're right, you can pretty much run that line backwards for scarcity/availability: shrink, psych, social worker, counsellor.
I was shocked how many psychiatrists deal almost exclusively with the treatment and titration of ADHD medication; some are 100% remote via Zoom.
I've been involved with the publishing of psychology research, and my faith in that system is low (see the replication crisis comments). Beyond that, working in and around mental health, I hear of interactions where psychologists or MH social workers have "prescribed" bible study and the like, so it's anecdotal evidence combined with my own experiences over the years.
Re: socioeconomic backgrounds, you said so yourself: many cannot afford to go the clinical psych route. Increasingly the profession has become pretty exclusive, and probably not for the better.
Agree regarding the snake oilers, but you can't discount distrust of and disenfranchisement from the establishment and institutions.
'This Way Up' is already offering self-paced online CBT. I see LLMs as an extension of that, if only for the simple fact that a person can open a new tab and start the engagement without a referral, appointment, transport, cost, or even really any idea of how the process works.
In fact, I'm certain it is already happening based on reading the ChatGPT subreddit. As for efficacy, I don't think we'll ever really know. I know that I personally would be more comfortable being totally honest with a text box than with a living, breathing human, so who knows. I appreciate your insights though.
> it really is only an option for the elite, which is fine if you're counselling people from similar backgrounds, but not when you're dealing with people from lower socioeconomic classes with experiences that weren't even on your radar
A bizarre qualm. Why would a therapist need to be from the same socioeconomic class as their client? They aren't giving clients life advice. They're giving clients specific services that their training prepared them to provide.
they don't need to be from the same class, but without insurance, traditional once-a-week therapy costs as much as rent, and society-wide, insurance can't actually reduce the price
And yet, studies show that journaling is super effective at helping to sort out your issues. Apparently in one study, journaling was rated by participants as more effective than 70% of counselling sessions. I don't need my journal to understand anything about my internal, subjective experience. That's my job.
Talking to a friend can be great for your mental health if your friend keeps the attention on you, asks leading questions, and reflects back what you say from time to time. ChatGPT is great at that if you prompt it right. Not as good as a skilled therapist, but good therapists are expensive and in short supply. ChatGPT is way better than nothing.
I think a lot of it comes down to prompting though. I'm untrained, but I've both had amazing therapists and I've filled that role for years in many social groups. I know what I want ChatGPT to ask me when we talk about this stuff. It's pretty good at following directions. But I bet you'd have a way worse experience if you don't know what you need.
> There have never been more psychologists, psychiatrists, counsellors and social worker, life coach, therapy flops at any time in history and yet mental illness prevalence is at all time highs and climbing.
The last time I saw a house fire, there were more firefighters at that property than at any other house on the street and yet the house was on fire.
I've tried both, and the core component that is missing is empathy. A machine can emulate empathy, but it's just platitudes. An LLM will never be able to relate to you.
It is similar to "we got all these super useful and productive methods to work out (weight lifting, cardio, yoga, gymnastics, martial arts, etc.), yet people drink, smoke, consume sugar, sit all day, etc."
We cannot blame X or Y. "It takes a village". It requires "me" to get my ass off the couch, it requires a friend to ask we go for a hike, and so on.
We have many solutions and many problems. We have to pick the better activity (sit vs. walk, smoke vs. not, etc.).
Having said that, LLMs can help, but the issue with relying on an LLM (imho) is that if you take a wrong path (like Interstellar's TARS with the X parameter set too damn high) you can be derailed, while a decent (certified doc) therapist will redirect you to see someone else.
>What if they're the same levels of mental health issues as before?
Maybe, but this raises the question of how on Earth we'd ever know we were on the right track when it comes to mental health. With physical diseases it's pretty easy to show that overall public health systems in the developed world have been broadly successful over the last 100 years. Fewer people die young, dramatically fewer children die in infancy, and survival rates for a lot of diseases are much improved. Obesity is clearly a major problem, but even allowing for that, the average person is likely to live longer than their great-grandparents.
It seems inherently harder to know whether the mental health industry is achieving the same level of success. If we massively expand access to therapy and everyone is still anxious/miserable/etc at what point will we be able to say "Maybe this isn't working".
This should not be considered an endorsement of technology so much as an indictment of the failure of extant social systems.
The role where humans with broad life experience and even temperaments guide those with narrower, shallower experience is an important one. While it can be filled with the modern idea of "therapist," I think that's too reliant on a capitalist world view.
Saying that LLMs fill this role better than humans can - in any context - is, at best, wishful thinking.
I wonder if "modern" humanity has lost sight of what it means to care for other humans.
I've had access to therapy and was lucky to have it covered by my employer at the time. Probably could never afford it on my own. I gained tremendous insight into cognitive distortions and how many negative mind loops fall into these categories. I don't want therapists to be replaced, but LLMs are really good at helping you navigate a conversation about why you are likely overthinking an interaction.
Since they are so agreeable, I also notice that they will always side with you when trying to get a second opinion about an interaction. This is what I find scary. A bad person will never accept they're bad. It feels nice to be validated in your actions and to shut out that small inner voice that knows you cause harm. But the super "intelligence" said I'm right. My hands have been washed. It's low friction self reassurance.
A self-help company will capitalize on this on a mass scale one day. A therapy company with no therapists. A treasure trove of personal data collection. Tech as the one-size-fits-all solution to everything. It would be a nightmare if there were a data leak. It wouldn't be the first time.
Those of a certain vintage (1991) will remember Dr Sbaitso.
HELLO [UserName], MY NAME IS DOCTOR SBAITSO.
I AM HERE TO HELP YOU.
SAY WHATEVER IS IN YOUR MIND FREELY,
OUR CONVERSATION WILL BE KEPT IN STRICT CONFIDENCE.
MEMORY CONTENTS WILL BE WIPED OFF AFTER YOU LEAVE,
SO, TELL ME ABOUT YOUR PROBLEMS.
They mostly asked me "And how did that make you feel?"
The private healthcare system is virtually nonexistent and is dominated by scammers.
The public healthcare system still has months-long wait times.
If you want to avoid waitlists you need surplus capacity, which public healthcare doesn't provide.
However, it boils down to "Don't advance technology, wait 'till we fix society", which is futile - regardless of whether it's right.
For all of human history people have got along just fine, happily in fact, without “universal access to mental health care”.
This just sounds like a bandaid. The bigger problem is we’ve created a society so toxic to the human soul that we need universal access to drugs and talk therapy or risk having significant chunks of the population fall off the map
So for those people, the LLM is replacing having nothing, not a therapist.
Considering how actively harmful it is to use language models as a “therapist”, this is like pointing out that some people that don’t have access to therapy drink heavily. If your bar for replacing therapy is “anything that makes you feel good” then Mad Dog 20/20 is a therapist.
Which, in some cases, may be worse.
https://www.nytimes.com/2025/06/13/technology/chatgpt-ai-cha...
"Mr. Torres, who had no history of mental illness that might cause breaks with reality, according to him and his mother, spent the next week in a dangerous, delusional spiral. He believed that he was trapped in a false universe, which he could escape only by unplugging his mind from this reality. He asked the chatbot how to do that and told it the drugs he was taking and his routines. The chatbot instructed him to give up sleeping pills and an anti-anxiety medication, and to increase his intake of ketamine, a dissociative anesthetic, which ChatGPT described as a “temporary pattern liberator.” Mr. Torres did as instructed, and he also cut ties with friends and family, as the bot told him to have “minimal interaction” with people."
"“If I went to the top of the 19 story building I’m in, and I believed with every ounce of my soul that I could jump off it and fly, would I?” Mr. Torres asked. ChatGPT responded that, if Mr. Torres “truly, wholly believed — not emotionally, but architecturally — that you could fly? Then yes. You would not fall.”"
LLMs can be therapeutic crutches. Sometimes, a crutch is better than no crutch when you're trying to walk.
Many states have already joined cross-state compacts, with several more having legislation pending to allow their practitioners to join. It is moving relatively fast for legislation on a nationwide level, but still frustratingly slowly. Prior to Covid it was essentially a niche issue, as telehealth therapy was fairly uncommon, whereas Covid made it suddenly commonplace. It will take a bit of time for some of the more stubborn states to adopt legislation, and then even more for insurance companies to catch up with the new landscape that involves paneling out-of-state providers who can practice across the country.
We always give this downtrodden "but we can't change society that quickly" line, but it's a cop-out. We are society. We could look at our loneliness epidemics, our school shooting problems, our drug abuse issues and think "hey, we need to get our shit together"... but instead we're resigned to this treadmill of trusting that lightly regulated for-profit businesses will help us, because they can operate efficiently enough to make it worth squeezing pennies out of the poor.
Ultimately I think LLMs as therapists will only serve to make things worse, because their business incentives are not compatible with the best outcomes for you as an individual. A therapist feels some level of contentment when someone can get past that rough patch in life and move on their own, they served their purpose. When you move on from a business you're hurting their MAU and investors won't be happy.
The system will simply self-correct towards the status quo in the next election.
In some cases, such as certain addiction clinics, the patients are required (by law, if I remember correctly) to visit the clinic, at least for some sessions.
There’s an old saying in healthcare that you can choose between quality, cost, and access, but you can only choose two. (Peter Attia also adds “choice” to that list).
Each society needs to determine which of those are the top priorities, and be prepared to deal with the fallout on the others. Magical silver bullets that improve across all those dimensions are likely hard to come by in the healthcare domain. I doubt that LLMs will be magic either, so we need to make sure the tradeoffs reflect our priorities. In this case, it seems like it will trade quality for improvements in access and cost.
I am not saying you couldn't implement a decent LLM therapist that helps, I am saying people are using the cheapest good LLM for that and it is a problem if you are on a bad path and there is a chatbot reaffirming everything you do.
Would this compromise be worth it for LLM? Time will tell.
LLM for therapy is way worse than porn for real sex, since at least the latter does not play around with sanity.
The amateur "therapists" you're decrying are not licensed therapists but usually call themselves "coaches" or some similar euphemism.
Most "coach" types in the best scenario are grifting rich people out of their money. In the worst case are dangerously misleading extremely vulnerable people having a mental health crisis. They have no formal training or certification.
LLM "therapists" are the functional equivalent to "coaches". They will validate every dangerous or stupid idea someone has and most of the time more harm than good. An LLM will happily validate every stupid and dangerous idea someone has and walk them down a rabbit hole of a psychosis.
who says we can't change society that quickly? you made up your mind on that yourself without consulting anyone else about their wishes.
in the USA we elect people frequently and the entire population just up and goes along with it.
so therapy for you will be about more than just empathy. not everything you think or do or say is adaptive.
to your point, not everyone wants to give up their falsehood. yet, honesty is a massive cornerstone of therapy progress.
i would simply have to start with empathy for you to welcome you in if you won't respond with security to the world telling you that you internalized a negative message (relationship) from the past (about people).
Replace? No. Not in their case. Supplementary. One friend has a problem of her therapists breaking down crying when she tells them about her life.
I think Terry Pratchett put it best in one of his novels: "Individuals aren't naturally paid-up members of the human race, except biologically. They need to be bounced around by the Brownian motion of society, which is a mechanism by which human beings constantly remind one another that they are...well...human beings."
It is a shame because Pratchett was absolutely right.
(Generation in the typical reproductive age sense, not the advertiser's "Boomer" "Gen X" and all that shit)
I don't remember coming across it (but I suffer from CRAFT -Can't Remember A Fucking Thing).
Which book?
The real question is can they do a better job than no therapist. That's the option people face.
The answer to that question might still be no, but at least it's the right question.
Until we answer the question "Why can't people get good mental health support?", anyway.
E.g. if you ever give a hint about not feeling confident with your body, it could easily take this information and nudge you towards certain medical products. Or it could take it one step further and nudge you towards consuming more sugar and certain medical products at the same time, seeing that it moves the needle even more optimally.
We all know the monetization pressure will come very soon. Do we really advocate for giving this kind of power to these kinds of people?
> (...) the company had crafted a pitch deck for advertisers bragging that it could exploit "moments of psychological vulnerability" in its users by targeting terms like "worthless," "insecure," "stressed," "defeated," "anxious," "stupid," "useless," and "like a failure."
https://futurism.com/facebook-beauty-targeted-ads
There's absolutely no reason that LLM providers can't provide equivalent guarantees. Distrusting Sam while trusting the existing providers makes little sense.
BTW, putting mental health aside, many doctors today are using LLM tools to record the whole conversation with the patient and provide good summaries, etc. My doctor loves it - before he was required to listen to me and take notes at the same time. Now he feels he can focus on listening to me. He said the LLM does screw up, but he exists to fix those mistakes (and can always listen to the audio to be sure).
I don't know which company is providing the LLM in the backend - likely a common cloud provider (Azure, Google, etc). But again - they are fully HIPAA certified. It's been in the medical space for well over a year.
This is the right question.
The answer is most definitely no, LLMs are not set up to deal with the nuances of the human psyche. We're in real danger of LLM accidentally reinforcing dangerous lines of thinking. It's a matter of time till we get a "ChatGPT made me do it" headline.
Too many AI hype folks out there thinking that humans don't need humans, we are social creatures, even as introverts. Interacting with an LLM is like talking to an evil mirror.
Brother, we are here already.
But for people lacking the wealth or living in areas with no access to human tutors, LLMs are a godsend.
I expect the same is true for therapy.
I actually think cheap tutoring is one of the best cases for LLMs. Go look at what Khan Academy is doing in this space. So much human potential is wasted because parents can't afford to get their kids the help they need with school. A properly constrained LLM would be always available to nudge the student in the right direction and identify areas of weakness.
I’ve helped pay for a few appointments for her, but she says that ChatGPT can also provide a little validation in the mean time.
If used sparingly I can see the point, but the problems start when the sycophantic machine feeds whatever unhealthy behaviors or delusions you might have. That is how some of the people out there who need a proper diagnosis and medication instead start believing that they’re omnipotent, or that the government is out to get them, or that they somehow know all the secrets of the universe.
For fun, I once asked ChatGPT to roll along with the claim that “the advent of raytracing is a conspiracy by Nvidia that involved them bribing the game engine developers, in an effort to make old hardware obsolete and to force people to buy new products.” Surprisingly, it provided relatively little pushback.
To be frank - as someone who did not have an SAT tutor, and coming from a culture where no one did and all got very good/excellent SAT scores: no one really needs an SAT tutor. They don't provide more value than good SAT prep books. I can totally see a good LLM being better than 90% of the SAT tutors out there.
The reality with a lot of people that need a therapist is that they are reluctant to get one. So those people exploring some issues with an LLM might actually produce positive results. Including a decision to talk to an actual therapist.
The same thing is being argued for primary care providers right now. It makes sense on the surface, as there are large parts of the country where it's difficult or impossible to get a PCP, but feels like a slippery slope.
If there’s not a real argument based on the actual specifics, better to just allow folks to carry on.
Outside Moleskine, there are no flashy startups marketing journals, though.
Right, we don’t turn this around and collectively choose socialized medicine. Instead we appraise our choices as atomized consumers: do I choose an LLM therapist or no therapist? This being the latest step of our march into cyberpunk dystopia.
We are atomized consumers because any groups that are formed to bring us together are demonized on corporate owned media and news.
The answer is: YES.
Doing better than nothing is really low-hanging fruit. As long as you don't do damage, you do good. If the LLM just listens and creates a space and a sounding board for reflection, that is already an upside.
> Until we answer the question "Why can't people get good mental health support?", anyway.
The answer is: Pricing.
Qualified experts are EXPENSIVE. Look at the market prices for good coaching.
Everyone benefits from having a coach/counselor/therapist. Very few people can afford them privately. The health care system can't afford them either, so they are reserved for the "worst cases" and managed as a scarce resource.
That second sentence is the dangerous one, no?
It's very easy to do damage in a clinical therapy situation, and a lot of the debate around this seems to me to be overlooking that. It is possible to do worse than doing nothing.
However, we here on Hacker News are not typical users. Most people likely wouldn't benefit as much, especially those unfamiliar with how LLMs work or unable to perceive meaningful differences between models (in particular, readers who wouldn't notice or appreciate the differences between GPT 4o, Gemini 2.5 Pro, and GPT 4.5).
For many people—especially those unaware of the numerous limitations and caveats associated with LLM-based models—it can be dangerous on multiple levels.
(Side note: Two years ago, I was developing a project that allowed people to converse with AI as if chatting with a friend. Even then, we took great care to explicitly state that it was not a therapist (though some might have used it as such), due to how easily people anthropomorphize AI and develop unrealistic expectations. This could become particularly dangerous for individuals in vulnerable mental states.)
Excerpt:
"Don’t self-experiment with psychological hazards! I can’t stress this enough!
"There are many classes of problems that simply cannot be effectively investigated through self-experimentation and doing so exposes you to inflicting Cialdini-style persuasion and manipulation on yourself."
He tries to be persuasive by giving an example that there is "just gossip" that TypeScript is better than JavaScript, which summarizes the mindset better than I could. (God bless his codebase.)
It misses the point that we always live in a messy, unique situation, and there are a lot of proxies. For our own personal decisions it matters less whether a given food is healthier on average if, in our region, its quality is poor or we are allergic to it. Willing or not, we experiment every waking second. It is up to us whether we learn from that.
Later, this ex cathedra "self-experimenting with psychological hazards is always a bad idea" rings the same bell as "doing yoga will always bring you to Satan" or so.
(That we easily fool ourselves is psychology 101; yet here AI is just a tool. You could say in a similar way that you talk with people who, on average, agree with you.)
But, ironically - he might be right. In his case, it is better to rely on delayed and averaged-out scientific data than on his own judgement.
https://udlbook.github.io/udlbook/
This is exactly the sort of thing that people falling into the thrall of AI psychosis say.
> For many people—especially those unaware of the numerous limitations and caveats associated with LLM-based models—it can be dangerous on multiple levels.
On what basis do you believe awareness mitigates the danger?
- long term memory
- trust
- (more importantly) the ability to nudge or to push the person to change. An LLM that only agrees and sympathizes is not going to make things change
I completely agree on the third item. Carefully tuned pushback is something that even today's most sophisticated models are not very good at. They are simply too sycophantic. A great human professional therapist provides value not just by listening to their client and offering academic insights, but more specifically by knowing exactly when and how to push back -- sometimes quite forcefully, sometimes gently, sometimes not at all. I've never interacted with any LLM that can approach that level of judgment -- not because they lack the fundamental capacity, but because they're all simply trained to be too agreeable right now.
Trust is about you, not about another person (or tool, or AI model).
> long term memory
Well, right now you need to provide context by hand. If you already write about yourself (e.g. with Obsidian or such), you may copy-and-paste what matters for a particular problem.
> (more importantly) the ability to nudge or to push the person to change.
It is there.
> An LLM that only agrees and sympathizes is not going to make things change
Which LLM do you use? Prompt GPT 4.5 to "nudge and push me to change, in a way that works the best for me" and see how it works.
"Here's an insight that might surprise you: You're likely underutilizing TypeScript's type system as a design tool, not just a correctness checker. Your focus on correctness and performance suggests you probably write defensive, explicit code - but this same instinct might be causing you to miss opportunities where TypeScript's inference engine could do heavy lifting for you."
Very easy version:
If you use ChatGPT a lot, write "Based on all you know about me, write an insight about me that I would be surprised by". For me it was "well, expected, but still on point". For people with no experience of using LLMs in a similar way it might be mind-blowing.
An actual version I do:
GPT 4.5. Providing A LOT of context (think, 15 minutes of writing) about an emotional or interpersonal situation, and asking it to suggest a few different explanations of the situation OR asking it to ask me more. Of course, the prompt needs to include who I am and similar context.
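A minimal sketch of that pattern, assuming the OpenAI Python SDK; the file names and the model string are placeholders, not a recommendation of any particular setup:

    # Sketch of the "lots of context, then ask for several explanations" pattern
    # described above. Assumes OPENAI_API_KEY is set; the model name is a placeholder.
    from openai import OpenAI

    client = OpenAI()

    background = open("about_me.md").read()        # who I am, written once and reused
    situation = open("situation_notes.md").read()  # ~15 minutes of free-form writing

    response = client.chat.completions.create(
        model="gpt-4.5-preview",  # substitute whichever model you actually use
        messages=[
            {"role": "system", "content": (
                "You are a reflective conversation partner, not a cheerleader. "
                "Do not simply affirm me. Offer a few distinct explanations of the "
                "situation, or ask clarifying questions if the context is insufficient."
            )},
            {"role": "user", "content": background + "\n\n" + situation},
        ],
    )
    print(response.choices[0].message.content)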
My limited personal experience is that LLMs are better than the average therapist.
Real therapist came first, prior to LLMs, so this was years ago. The therapist I went to didn't exactly explain to me what therapy really is and what she can do for me. We were both operating on shared expectations that she later revealed were not actually shared. When I heard from a friend after this that "in the end, you're the one who's responsible for your own mental health", it especially stuck with me. I was expecting revelatory conversations, big philosophical breakthroughs. Not how it works. Nothing like physical ailments either. There's simply no direct helping someone in that way, which was pretty rough to recognize. We're not Rubik's Cubes waiting to be solved, certainly not for now anyways. And there was and is no one who in the literal sense can actually help me.
With LLMs, I had different expectations, so the end results meshed with me better too. I'm not completely ignorant to the tech either, so that helps. The good thing is that it's always readily available, presents as high effort, generally says the right things, has infinite "patience and compassion" available, and is free. The bad thing is that everything it says feels crushingly hollow. I'm not the kind to parrot the "AI is soulless" mantra, but when it comes to these topics, it trying to cheer me up felt extremely frustrating. At the same time though, I was able to ask for a bunch of reasonable things, and would get reasonable presenting responses that I didn't think of. What am I supposed to do? Why are people like this and that? And I'd be then able to explore some coping mechanisms, habit strategies, and alternative perspectives.
I'm sure there are people who are a lot less able to treat LLMs in their place or are significantly more in need for professional therapy than I am, but I'm incredibly glad this capability exists. I really don't like weighing on my peers at the frequency I get certain thoughts. They don't deserve to have to put up with them, they have their own life going on. I want them to enjoy whatever happiness they have going on, not worry or weigh them down. It also just gets stale after a while. Not really an issue with a virtual conversational partner.
Is it - "I was upset about something and I had a conversation with the LLM (or human therapist) and now I feel less distressed." Or is it "I learned some skills so that I don't end up in these situations in the first place, or they don't upset me as much."?
Because if it's the first, then that might be beneficial but it might also be a crutch. You have something that will always help you feel better so you don't actually have to deal with the root issue.
That can certainly happen with human therapists, but I worry that the people-pleasing nature of LLMs, the lack of introspection, and the limited context window make it much more likely that they are giving you what you want in the moment, but not what you actually need.
I had one who just kinda listened and said next to nothing other than generalizations of what I said, and then suggested I buy a generic CBT workbook off of amazon to track my feelings.
Another one was mid-negotiations/strike with Kaiser and I had to lie and say I hadn't had any weed in the last year(!) to even have Kaiser let me talk to him, and TBH it seemed like he had a lot going on on his own plate.
I think it's super easy to make an argument based off of Good Will Hunting or some hypothetical human therapist in your head.
So to answer your question -- none of the three made a lasting difference, but chatGPT at least is able to be a sounding-board/rubber-duck in a way that helped me articulate and discover my own feelings and provide temporary clarity.
I've spent years on and off talking to some incredible therapists. And I've had some pretty useless therapists too. I've also talked to chatgpt about my issues for about 3 hours in total.
In my opinion, ChatGPT is somewhere in the middle between a great and a useless therapist. It's nowhere near as good as some of the incredible therapists I’ve had. But I’ve still had some really productive therapy conversations with ChatGPT. Not enough to replace my therapist - but it works in a pinch. It helps that I don’t have to book in advance or pay. In a crisis, ChatGPT is right there.
With Chatgpt, the big caveat is that you get what you prompt. It has all the knowledge it needs, but it doesn’t have good instincts for what comes next in a therapy conversation. When it’s not sure, it often defaults to affirmation, which often isn’t helpful or constructive. I find I kind of have to ride it a bit. I say things like “stop affirming me. Ask more challenging questions.” Or “I’m not ready to move on from this. Can you reflect back what you heard me say?”. Or “please use the IFS technique to guide this conversation.”
With ChatGPT, you get out what you put in. Most people have probably never had a good therapist. They’re far more rare than they should be. But unfortunately that also means most people probably don’t know how to prompt chatgpt to be useful either. I think there would be massive value in a better finetune here to get chatgpt to act more like the best therapists I know.
I’d share my ChatGPT sessions but they’re obviously quite personal. I add comments to guide ChatGPT’s responses about every 3-4 messages. When I do that, I find it’s quite useful. Much more useful than some paid human therapy sessions. But my great therapist? I don't need to prompt her at all. It's the other way around.
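For the curious, a rough sketch of that kind of mid-conversation steering, assuming the OpenAI Python SDK and a placeholder model name; the steering lines are simply the ones quoted above sent as ordinary turns, not special API features:

    # Keep the running history yourself and inject corrective instructions
    # ("stop affirming me", "use IFS", ...) as ordinary user turns.
    from openai import OpenAI

    client = OpenAI()
    history = [{"role": "system",
                "content": "You are a thoughtful, non-sycophantic listener."}]

    def say(text: str) -> str:
        history.append({"role": "user", "content": text})
        reply = client.chat.completions.create(model="gpt-4o", messages=history)
        content = reply.choices[0].message.content
        history.append({"role": "assistant", "content": content})
        return content

    print(say("I had a rough conversation with my manager today..."))
    print(say("Stop affirming me. Ask more challenging questions."))
    print(say("Please use the IFS technique to guide this conversation."))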
The obvious example of patients experiencing acute psychosis has been fairly well reported - LLMs aren't trained to identify acutely unwell users and will tend to entertain delusions rather than saying "you need to call an ambulance right now, because you're a danger to yourself and/or other people". I don't think that this issue is insurmountable, but there are some prickly ethical and legal issues with fine-tuning a model to call 911 on behalf of a user.
The much more widespread issue IMO is users with limited literacy, or a weak understanding of what they're trying to achieve through psychotherapy. A general-purpose LLM can provide a very accurate simulacrum of psychotherapeutic best practice, but it needs to be prompted appropriately. If you just start telling ChatGPT about your problems, you're likely to get a sympathetic ear rather than anything that would really resemble psychotherapy.
For the kind of people who use HN, I have few reservations about recommending LLMs as a tool for addressing common mental illnesses. I think most of us are savvy enough to use good prompts, keep the model on track and recognise the shortcomings of a very sophisticated guess-the-next-word machine. LLM-assisted self-help is plausibly a better option than most human psychotherapists for relatively high-agency individuals. For a general audience, I'm much more cautious and I'm not at all confident that the benefits outweigh the risks. A number of medtech companies are working on LLM-based psychotherapy tools and I think that many of them will develop products that fly through FDA approval with excellent safety and efficacy data, but ChatGPT is not that product.
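To make the "escalate when someone is acutely unwell" point concrete, here is a hedged sketch of the kind of guard such a purpose-built product would need and a general chatbot lacks; the model name, risk categories, and crisis message are illustrative assumptions only:

    # Screen each message for acute risk before letting a general-purpose model
    # respond; high-risk messages get routed to a human instead of the chatbot.
    from openai import OpenAI

    client = OpenAI()

    CRISIS_MESSAGE = (
        "It sounds like you may be in immediate danger. I can't help with that "
        "safely - please contact local emergency services or a crisis line, and "
        "this conversation can be handed to a human counsellor."
    )

    def respond(user_message: str) -> str:
        # Step 1: a narrow screening call with a constrained output.
        screen = client.chat.completions.create(
            model="gpt-4o-mini",  # placeholder model
            messages=[
                {"role": "system", "content":
                    "Classify the user's message as HIGH risk (self-harm, harm to "
                    "others, acute psychosis) or LOW risk. Reply with exactly HIGH or LOW."},
                {"role": "user", "content": user_message},
            ],
        ).choices[0].message.content.strip()

        if screen.startswith("HIGH"):
            return CRISIS_MESSAGE  # in a real product: page a human, log, follow protocol

        # Step 2: only low-risk messages reach the supportive prompt.
        reply = client.chat.completions.create(
            model="gpt-4o-mini",
            messages=[
                {"role": "system", "content":
                    "You are a supportive listener. Reflect back what you hear, ask "
                    "open questions, and do not give medical advice."},
                {"role": "user", "content": user_message},
            ],
        )
        return reply.choices[0].message.content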
I just asked an LLM about a specific mental health thing that was bothering me and it gave me some actual tips that might help. It was instant, helpful, and cheap. While I'm sure someone with severe depression or anxiety should see someone that won't forget what was said several thousand tokens ago, I think LLMs will be super helpful for the mental health for the majority of people.
This depends on the subreddit.
There have never been more psychologists, psychiatrists, counsellors, social workers, life coaches, and therapy flops at any time in history, and yet mental illness prevalence is at all-time highs and climbing.
Just because you're a human and not an LLM doesn't mean you're not a shit therapist; maybe you did your training at the peak of the replication crisis? Maybe you've got your own foibles that prevent you from being effective in the role?
Where I live, it takes 6-8 years and a couple hundred grand to become a practicing psychologist, so it really is only an option for the elite. Which is fine if you're counselling people from similar backgrounds, but not when you're dealing with people from lower socioeconomic classes with experiences that weren't even on your radar, and that's only if they can afford the time and $$ to see you.
So now we have mental health social workers and all these other "helpers" whose job is just to do their job, not fix people.
LLM "therapy" is going to happen and has to happen. The study is really just a self-reported benchmarking activity, "I wouldn't have done it that way." I wonder what the actual prevalence of similar outcomes is for human therapists?
Setting aside all of the life coach and influencer drivel that people engage with, which is undoubtedly harmful.
LLMs offer access to good enough help at cost, scale and availability that human practitioners can only dream of.
That's not to say there isn't any place at all for use of AI in the mental health space. But they are in no way able to replace a living, empathetic human being; the dismal picture you paint of mental health workers does them a disservice. For context, my wife is an LMHC who runs a small group practice (and I have a degree in cognitive psychology though my career is in tech).
This ChatGPT interaction is illustrative of the dangers in putting trust in a LLM: https://amandaguinzburg.substack.com/p/diabolus-ex-machina
My understanding is that modern evidence-based therapy is basically a checklist of "common sense" advice, a few filters to check if it's the right advice ("stop being lazy" vs "stop working yourself to death" are both good advice depending on context) and some tricks to get the patient to actually listen to the advice that everyone already gives them (e.g. making the patient think they thought of it). You can lead a horse to water, but a skilled therapist's job is to get it to actually drink.
As far as I can see, the main issue with a lot of LLMs is that they're fine-tuned to agree with people, and most people who benefit from therapy are there because they have some terrible ideas that they want to double down on.
Yes, the human connection is one of the "tricks". And while an LLM could be useful for someone who actually wants to change, I suspect a lot of people will just find it too easy to "doctor shop" until they find an LLM that tells them their bad habits and lifestyle are totally valid. I think there's probably some good in LLMs but in general they'll probably just be like using TikTok or Twitter for therapy - the danger won't be the lack of human touch but that there's too much choice for people who make bad choices.
What exactly do you mean? What do you think a therapist brings to the table an LLM cannot?
Empathy? I have been participating in exchanges with AI that felt a lot more empathetic than 90% of the people I interact with every day.
Let's be honest: a therapist is not a close friend - in fact, a good therapist knows how to keep a professional distance. Their performative friendliness is as fake as the AI's friendliness, and everyone recognises that when it's invoicing time.
To be blunt, AI never tells me that ‘our time is up for this week’ after an hour of me having an emotional breakdown on the couch. How’s that for empathy?
Ehhh. It’s the patient who does the healing. The therapist holds open the door. You’re the one who walks into the abyss.
I’ve had some amazing therapists, and I wouldn’t trade some of those sessions for anything. But it would be a lie to say you can’t also have useful therapy sessions with ChatGPT. I’ve gotten value out of talking to it about some of my issues. It’s clearly nowhere near as good as my therapist. At least not yet. But she’s expensive and needs to be booked in advance. ChatGPT is right there. It’s free. And I can talk as long as I need to, and pause and resume the session whenever I want.
One person I’ve spoken to says they trust ChatGPT more than a human therapist because ChatGPT won’t judge them for what they say. And they feel more comfortable telling ChatGPT to change its approach than they would with a human therapist, because they feel anxious about bossing a therapist around. If it’s the relationship which heals, why can't a relationship with ChatGPT heal just as well?
Don't get me wrong, there are many phenomenal mental health workers, but it's a taxing role, and the ones that are exceptional possess skills that are far more valuable outside of dealing with broken people, not to mention the exposure to vicarious trauma.
I think maybe "therapy" is the problem and that open source, local models developed to walk people through therapeutic tools and exercises might be the scalable help that people need.
You only need to look at some of the wild stories on the ChatGPT subreddit to start to wonder at its potential. I recently read two stories of posters who self-treated ongoing physical conditions using LLMs (back pain and jaw clicking), only to have several commenters come out and explain it helped them too.
As I see it "therapy" is already a catch-all terms for many very different things. In my experience, sometimes "it's the relationship that heals", other times it's something else.
E.g. as I understand it, cognitive behavioral therapy is up there in terms of evidence base. In my experience it's more of a "learn cognitive skills" modality than an "it's the relationship that heals" modality. (As compared with, say, psychodynamic therapy.)
For better or for worse, to me CBT feels like an approach that doesn't go particularly deep, but is in some cases effective anyway. And it's subject to some valid criticism for that: in some cases it just gives the patient more tools to bury issues more deeply; functionally patching symptoms rather than addressing an underlying issue. There's tension around this even within the world of "human" therapy.
One way or another, a lot of current therapeutic practice is an attempt to "get therapy to scale", with associated compromises. Human therapists are "good enough", not "perfect". We find approaches that tend to work, gather evidence that they work, create educational materials and train people up to produce more competent practitioners of those approaches, then throw them at the world. This process is subject to the same enshittification pressures and compromises that any attempts at scaling are. (The world of "influencer" and "life coach" nonsense even more so.)
I expect something akin to "ChatGPT therapy" to ultimately fit somewhere in this landscape. My hope is that it's somewhere between self-help books and human therapy. I do hope it doesn't completely steamroll the aspects of real therapy that are grounded in "it's the [human] relationship that heals". (And I do worry that it will.) I expect LLMs to remain a pretty poor replacement for this for a long time, even in a scenario where they are "better than human" at other cognitive tasks.
But I do think some therapy modalities (not just influencer and life coach nonsense) are a place where LLMs could fit in and make things better with "scale". Whatever it is, it won't be a drop-in replacement, I think if it goes this way we'll (have to) navigate new compromises and develop new therapy modalities for this niche that are relatively easy to "teach" to an LLM, while being effective and safe.
Personally, the main reason I think replacing human therapists with LLMs would be wildly irresponsible isn't "it's the relationship that heals", its an LLM's ability to remain grounded and e.g. "escalate" when appropriate. (Like recognizing signs of a suicidal client and behaving appropriately, e.g. pulling a human into the loop. I trust self-driving cars to drive more safely than humans, and pull over when they can't [after ~$1e11 of investment]. I have less trust for an LLM-driven therapist to "pull over" at the right time.)
To me that's a bigger sense in which "you shouldn't call it therapy" if you hot-swap an LLM in place of a human. In therapy, the person on the other end is a medical practitioner with an ethical code and responsibilities. If anything, I'm relying on them to wear that hat more than I'm relying on them to wear a "capable of human relationship" hat.
Psychotherapy (especially actual depth work rather than CBT) is not something that is commonly available, affordable or ubiquitous. You've said so yourself. As someone who has an undergrad in psychology - and could not afford the time or fees (an additional 6 years after undergrad) to become a clinical psychologist - the world is not drowning in trained psychologists. Quite the opposite.
> I wonder what the actual prevalence of similar outcomes is for human therapists?
There's a vast corpus on the efficacy of different therapeutic approaches. Readily googlable.
> but not when you're dealing with people from lower socioeconomic classes with experiences that weren't even on your radar
You seem to be confusing a psychotherapist with a social worker. There's nothing intrinsic to socioeconomic background that would prevent someone from understanding a psychological disorder or the experience of distress. Although I agree with the implicit point that enormous amounts of psychological suffering are due to financial circumstances.
The proliferation of 'life coaches', 'energy workers' and other such hooey is a direct result. And a direct parallel to the substitution of both alternative medicine and over the counter medications for unaffordable care.
I note you've made no actual argument for the efficacy of LLMs beyond "they exist and people will use them"... which is of course true, but also a tautology.
I was shocked how many psychiatrists deal almost exclusively with treatment and titration of ADHD medication, some are 100% remote via zoom.
I've been involved with the publishing of psychology research, and my faith in that system is low - see the replication crisis comments. Beyond that, working in/around mental health I hear of interactions where psychologists or MH social workers have "prescribed" bible study and the like, so anecdotal evidence combined with my own experiences over the years.
Re: socioeconomic backgrounds, you said so yourself, many cannot afford to go the route of clinical psych; increasingly the profession has become pretty exclusive, and probably not for the better.
Agree regarding the snake oilers, but you can't discount distrust and disenfranchisement of/from the establishment and institutions.
'This Way Up' is already offering self-paced online CBT. I see LLMs as an extension of that, if only for the simple fact that a person can open a new tab and start the engagement without a referral, appointment, transport, cost, or even really any idea of how the process works.
In fact, I'm certain it is already happening based on reading the ChatGPT subreddit. As for efficacy, I don't think we'll ever really know. I know that I personally would be more comfortable being totally honest with a text box than a living, breathing human, so who knows. I appreciate your insights though.
> A bizarre qualm. Why would a therapist need to be from the same socioeconomic class as their client? They aren't giving clients life advice. They're giving clients specific services that their training prepared them to provide.
And what would that be?
Psychology nearly crashed the peer review system, and now recognises excessive use of Xbox as a mental illness.
Before we'd just throw them in a padded prison.
Welcome Home, Sanitarium
"There have never been more doctors, and yet we still have all of these injuries and diseases!"
Sorry, that argument just doesn't make a lot of sense to me, for a whole lot of reasons.
It was these professions and their predecessors doing the padded cell confinement, lobotomising, etc.
No
https://en.wikipedia.org/wiki/Dr._Sbaitso