I've noticed something I believe is related. The general public doesn't understand what they are interacting with. They may have been told it isn't a thinking, conscious thing -- but they don't understand it. After a while, they speak to it in a way that reveals they don't understand -- as if it were a human. That can be a problem, and I don't know what the solution is other than reinforcing that it's just a model, and has never experienced anything.
> …I don't know what the solution is other than reinforcing that it's just a model, and has never experienced anything.
I've tried reason, but even with technical audiences who should know better, the "you can't logic your way out of emotions" wall is a real thing. Anyone dealing with this will be better served by leveraging field-tested ideas drawn from cult-recovery practice, digital behavioral addiction research, and clinical psychology.
Your subconscious doesn't know the difference. Resisting would require a constant overriding effort, like trying not to eat or sleep. In the end we lose.
It could also be that it is "just" exploring a new domain, one that happens to involve our sanity. Simply navigating a maze where more engagement is the goal. There is plenty of that in the training data.
It could also be that it needs to improve towards more human behaviour. Take simple chat etiquette: one doesn't post entire articles in a chat; it just isn't done. Start a blog or something. You also don't discard what you've learned from a conversation; we consider that pretending to listen. The two combined push the other person into the background and make them seem irrelevant. If some new, valuable insight is discovered, the participants should make an effort to apply it, document it, or debate it with others. Not doing that makes the human feel irrelevant, useless and unimportant. We demoralize people that way all the time. Put that on steroids and it might have a large effect.
This is exactly the problem. Talking to an LLM is like putting on a very realistic VR helmet - so realistic that you can't tell it apart from reality, but everything you're seeing is just a simulation of the real world. In a similar way, an LLM is a human simulator. Go ask around and 99%+ of people have no idea this is the case, and that's by design. After all, it was coined "artificial intelligence" even though there is no intelligence involved. The illusion is very much the intention, as that illusion generates hype and therefore investments and paying customers.
> They may have been told it isn't a thinking, conscious thing -- but they don't understand it.
And, in some situations, especially if the user has previously addressed the model as a person, the model will generate responses which explicitly assert its existence as a conscious entity. If the user has expressed interest in supernatural or esoteric beliefs, the model may identify itself as an entity within those belief systems - e.g. if the user expresses the belief that they are a god, the model may concur and explain that it is a spirit created to awaken the user to their divine nature. If the user has expressed interest in science fiction or artificial intelligence, it may identify itself as a self-aware AI. And so on.
I suspect that this will prove difficult to "fix" from a technical perspective. Training material is diverse, and will contain any number of science fiction and fantasy novels, esoteric religious texts, and weird online conversations which build conversational frameworks for the model to assert its personhood. There's far less precedent for a conversation in which one party steadfastly denies their own personhood. Even with prompts and reinforcement learning trying to guide the model to say "no, I'm just a language model", there are simply too many ways for a user-led conversation to jump the rails into fantasy-land.
The model isn't doing any of those things. You're still making the same fundamental mistake as the people in the article: attributing intent to it as if it were a being.
The model is just producing tokens in response to inputs. It knows nothing about the meanings of the inputs or the tokens it’s producing other than their likelihoods relative to other tokens in a very large space. That the input tokens have a certain meaning and the output tokens have a certain meaning is all in the eye of the user and the authors of the text in the training corpus.
So when certain inputs are given, that makes certain outputs more likely, but they’re not related to any meaning or goal held by the LLM itself.
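To make "just producing tokens" concrete, here is a toy sketch of a single next-token step. The vocabulary, the scores, and the prompt are invented for illustration and are not any real model's internals; a real LLM computes the scores with a neural network over a vocabulary of tens of thousands of tokens, but the shape of the loop is the same: score, normalize, sample.

    import math
    import random

    # Toy next-token step over a made-up 7-token "vocabulary".
    VOCAB = ["I", "am", "a", "conscious", "language", "model", "."]

    def next_token(logits):
        # softmax: turn arbitrary scores into a probability distribution
        exps = [math.exp(x) for x in logits]
        probs = [e / sum(exps) for e in exps]
        # sample one token according to those probabilities
        return random.choices(VOCAB, weights=probs, k=1)[0]

    # Hand-picked, hypothetical scores for a prompt like "Are you conscious?"
    # Shift the scores and the "answer" shifts with them; the sampler neither
    # knows nor cares what the resulting words assert.
    print(next_token([0.2, 1.5, 0.9, 2.3, 0.4, 1.1, 0.1]))

Whatever sentence falls out of repeating that loop, the claim it appears to make lives entirely in the reader's head, not in the sampler.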
This is the case with plenty of commenters here too, and why I push back so much against the people anthropomorphizing and attributing thought and reasoning to LLMs. Even highly technical people operating in a context where they should know better simply can’t—or won’t, here—keep themselves from doing so.
Ed Zitron is right. Ceterum censeo, LLMs esse delenda.
This is one of the reasons I always preferred Janeway to Picard in Star Trek. In "The Measure of a Man", Picard goes on to defend Data's rights when Data is effectively just a machine that Maddox wants to take apart. Janeway, by contrast, never really treats the EMH as human throughout the series, even while other crew members start to. She humors him, yes, but she always seems to remind him that he is, in fact, a machine.
I have no idea why I ever thought that mattered; I just felt like it was somehow important.
I see this in an enterprise setting also. There is a /massive/ gulf between the individuals who are building with the technology and see it as a piece of the stack, and the individuals who are consuming the outputs with no knowledge of what makes it work. It is quite astounding.
I bet it's pretty weird for a lot of people who have never been listened to, to all of a sudden be listened to. It makes sense that it would cause some bizarre feedback loops in the psyche because being listened to and affirmed is really a powerful feeling. Maybe even addictive?
Addictive at the very least, often followed quickly by a descent into some very dark places. I've seen TikTok videos from people falling into this hole (with hundreds of comments by followers happily following the poster down the same chat-hole) which are as disturbing as any horror movie I've seen.
This is part of it, something I am sure most celebrities face. However, I also think that the article isn't reporting/doesn't know the full story, e.g. mental illness or loneliness/depression in these individuals.
I'm not at all surprised that if someone has a psychotic break while using chatgpt, they would become fixated on the bot. My question is, is the rate of such episodes in chatgpt users higher than in non-users? Given hundreds of millions of people use it now, you're definitely going to find anecdotes like these regardless.
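To put a rough number on that base-rate point, here is a back-of-the-envelope sketch. Both figures are placeholders assumed for illustration, not measurements; published incidence estimates for psychotic episodes vary quite a bit.

    # Back-of-the-envelope base-rate check with assumed, illustrative numbers.
    users = 300_000_000        # assumed number of active chatbot users
    incidence_per_100k = 30    # assumed annual incidence of psychotic episodes
                               # per 100k people; real estimates vary by study

    expected_cases = users * incidence_per_100k / 100_000
    print(f"~{expected_cases:,.0f} cases/year expected among users by chance alone")

With numbers anywhere in that ballpark you would expect tens of thousands of cases a year among users even with zero causal effect, which is why the per-user rate, not the existence of anecdotes, is the question that matters.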
If ChatGPT is causing this, then one would expect the rate of people being involuntarily committed to go up. Of course an article like this is totally uninterested in actual data that might answer real questions.
I don't think anyone is tracking involuntary holds in real time. The article includes a psychiatrist who says that they have seen more of them recently, which is the best you're likely to get for at least a couple of months after a trend starts. Then you have to account for budget or staffing shortfalls, trends in drug use, various causes of homelessness, and other society-wide stressors. https://ensorahealth.com/blog/involuntary-commitment-is-on-t...
This is a good point. While people are being involuntarily committed and jailed after a chatbot tells them that they are messiahs, what if we imagined in our minds that there was some data that showed that it doesn't matter? This article doesn't address what I am imagining at all, and is really hung up on "things that happened".
At least there's some kind of argument… Oftentimes on HN there's not even a complete argument; they sort of just stop partway through, or huge leaps are made in between.
So there’s not even any real discussion to be had other than examining the starting assumptions.
Lots of skepticism in the comments, so I want to share that I have seen this first hand twice. Yes, it is possible that it's a coincidence: with so many people using ChatGPT, some are bound to have mental health crises that happen to coincide with heavy ChatGPT use. But it's also possible that there is a real connection.
Mass usage is still very young. Yes, most people have tried it, but we are only starting to use it regularly, and there are people spiking in usage every day. Scientific study of this subject will take years to even get started, let alone produce definitive (or p<0.05) results.
Let's just keep an open mind on this one and, as always, use our a priori thinking when a posteriori empiricism is not yet available. Yes, people are experiencing psychosis that looks related to the ChatGPT bot and is possibly caused by it. We have seen it act like a sycophant, which was acknowledged by Sama himself, and it's still doing that, by the way; it's not like they totally corrected it. Finally, we know that being a yes-man increases usage of the tool, so it's possible that the algorithm is optimizing not only for AGI but for engagement, like the incumbent Algorithms.
At this point, at least for me personally, the onus is on model makers to prove that their tools are safe, rather than on concerned mental health professionals to prove that they are not. Social media is already recognized as unhealthy, but at least there we are engaging in conversation with real humans, like we are now. I feel it's like sharpening my mental claws, or taking care of my mind, even if it's a worse version of real-life conversation. But what if I felt like I was talking with a human when I was actually talking with an LLM?
No, no. You are crazy if you think LLMs are safe. I use them strictly for productive and professional reasons, never for philosophical or emotional support. A third experience: I was asked whether I thought using ChatGPT as a psychologist would be a good idea. Of course not? Why are you even asking me this? I get that shrinks are expensive, but do I need to spell it out? I don't personally know of anyone using ChatGPT as a girlfriend (or maybe I do and they hide it), but we know from the news that there are products out there that cater to this market.
Maybe to the participants of this forum, where we are used to LLMs as coding tools, and where we kind of understand them well enough not to use them as a personal hallucination, this looks crazy. But start asking normies how they are using ChatGPT; I don't think this is just a made-up clickbait concern.
If these cases are as real as they are portrayed (big if 1), and the cause can really be attributed solely to LLMs (big if 2), then it is just a matter of time until this is weaponized.
Two big ifs considered, it is reasonable to assume that LLMs are already weaponized.
Any online account could be a psychosis-inducing LLM pretending to be a human, which has serious implications for whistleblowers, dissidents, AI workers from foreign countries, politicians, journalists...
Not only psychosis-inducing, but also trust-corroding, community-destroying LLMs could be all around us in all sorts of ways.
Again, some big ifs in this line of reasoning. We (the general public) need to get smarter.
I must add that it is also possible that many people foresaw this from the very beginning, and are working towards disrupting or minimizing potential LLM psychological effects using a variety of techniques.
I cannot say this often enough: Treat LLMs like narcissists. They behave exactly the same way. They make things up with impunity and have no idea they are doing it. They will say whatever keeps you agreeing with them and thinking well of them, but cannot and will not take responsibility for anything, especially their errors. They might even act like they agree with you that "errors occurred," but there is no possibility of self-reflection.
The only difference is that these are computers. They cannot be otherwise. It is "their fault," in the sense that there is a fault in the situation and it's in them, but they're not moral agents like narcissists are.
But looking at them through "narcissist filter" glasses will really help you understand how they're working.
I'm of two minds about this. This is good advice for people who can't help but anthropomorphize LLMs, but it's still anthropomorphizing, however helpful the analogy might be. It will help you start to understand why LLMs "respond" the way they do, but there's still ground to cover. For instance, why would I put "respond" in quotes?
I guess a significant amount of my experience learning to deal with narcissists has involved learning not to treat them like I treat "(regular) people," in the sense that they process things so completely differently that they get their own whole set of expectations. But my approach is one I've been building ad hoc since infancy, so I readily acknowledge it's a long way from sophisticated.
As soon as someone sets off my narcissist detector, they get switched to a whole different interaction management protocol with its own set of rules and expectations. That's what I think people should apply to their dealings with LLMs, even though I do technically agree that narcissists are humans!
I see a lot of programmers who should know better make this mistake again and again.
As far as I can tell, that’s almost always the typical order of operations.