In humans, a hallucination is formally defined as a sensory experience that occurs without an external stimulus. LLMs have no sensors to experience the world with (other than their text input) and (probably) don't even have subjective experience in the way humans do.
A more suitable term would be confabulation, which is what humans do when, due to a memory error (e.g. Korsakoff syndrome), we produce distorted memories of ourselves. These can sometimes sound very believable to outsiders; the comparison with LLMs making stuff up is rather apt!
So please call it confabulating instead of hallucinating when LLMs make stuff up.
At the moment, most people still think LLMs are basically like the computers from Star Trek: rational, sentient, and always correct. Like those lawyers who used ChatGPT to generate legal arguments - it didn't even occur to them that the AI could fabricate data; they assumed it was just a search engine you could talk to like a person.
This is why we still need metaphors to spread cultural knowledge. To that end I think it's less important to be technically accurate than to impart an idea clearly. "Hallucinate" and "confabulate" get the same point across, but the former is more widely understood.
Even "confabulate" isn't great, since it carries the connotation of either deception or senility/mental illness. But the "confabulation" of LLMs is inherent to their design. They aren't intended to discern truth, or accuracy, but statistical similarity to a stream of language tokens.
And humans don't really do that, so we don't really have language fit to describe how LLMs operate without resorting to anthropomorphism and metaphors from human behavior.
Hard disagree. You're browsing the Web, not exploring the Internet.
I think that humans actually confabulate in minor ways a lot more than we realize; just ask your parents to compare stories with their siblings about something that happened to them growing up -- in my experience you often get completely different versions of events. Given that, it might actually be a good opportunity (as the sibling comment suggests) to introduce the term into the language.
...towards less complexity and redundancy. Scientifically accurate terms are called that precisely because they're usually not part of everyday language. Hallucination is a much more common word and thus more self-explanatory to most people. For a random person, "AI confabulation" sounds about as intuitive as "flux capacitor".
The fact that I had to look it up to make sure, and that it isn't the primary definition, makes this a poor alternative to a well-understood word that has already become established.
If you do want to use another term for laypeople, I think "bullshitting" or "BSing" would have connotations that are more relevant than "hallucinating".
That's not to say they're not useful. The ability to BS is well regarded among humans, as long as you, as a consumer, have a decent BS detector. And like a good BS artist, they can be really useful if you stay within their area of expertise. It's when you ask them something they should know, or almost know, that they start to be full of s**.
The fact that BS is also used for deliberately lying to mislead is a strike against that word. Using "bullshit" and "hallucinate" in conjunction somewhat paints a picture of the quality of the answer you get and the "motivation" used to get there.