> The second insight is that human hallucinations are fundamentally social. Unlike AI, which hallucinates in isolation, humans hallucinate collaboratively. At the dinner party, each false fact was immediately reinforced by social validation. The nods, the interested expressions, the follow-up comments - all of these served to solidify the hallucination into shared "knowledge."
Oh boy, that burns - the futuristic AI dystopia won't be robots killing humans, but robots embarrassing us to death by revealing our ignorance.
My pet theory on AI is that the true breakthrough of machine intelligence won’t be one of elevating machines so much as realizing that we aren’t as special as we think we are.
This is the part that is about to change. Big time. The Internet is going to be full of bots thanks to LLMs. It's going to get to the point where the majority of "people" are fake, and the suspicion will spill over onto the actual people. It will reach a point where nobody will believe anybody online.
Inb4 "we've had bots forever now". You know what I mean.
Behind this rhetorically articulated question lies a simplistic view of content generation and consumption. First off, if you use LLMs, you are _constantly_ reading replies nobody else bothered to write for you. So it's not about who speaks, it's about the what. Then, what you deem written by nobody turns out to be (not without irony) an average of many (possibly) good writings and thoughts, gleaned from various corners of the web. In this regard, I particularly like--and want to shout out--the author's clarity about the work's attribution. It's not his. It is Claude's, hence, everybody's (in a sense).
Finally, as a mere compressed form of human written production, LLMs don't have agency over what they generate. So the prompting, the idea, and some editorial decisions are still attributable to the person behind this, making it unique in its own way.
Instead of seeing it as a piece he didn't bother to write, I see it as a piece he chose to edit, summoning the ghosts of every writer who ever put something on the internet, whom, again, he correctly credits (to the limits of knowing what exact data Claude uses... which is another story).
Actually, LLMs exist in the first place because creative people like him wrote down their thoughts. If no human ever wrote anything, LLMs wouldn't be able to generate anything, would they?
Now, because LLMs are rehashing human content, the value of consuming their output is lower for us.
What would be nice is an LLM that writes a book spanning authors from different fields, aggregating and consolidating their knowledge and saving us the time of reading all the books.
I find I'm willing to explore a Minecraft world or puzzle through a NetHack dungeon that nobody bothered to create. You could argue that humans made the biomes or defined the layout constraints, but humans also supplied the training data for an LLM. I guess it comes down to whether the art is any good, with procedural generation being mostly irrelevant? But perhaps a book is different.
I don't think I'd want to read a novel that was generated by an algorithm, but I might be up for a Choose Your Own Adventure style game, which might be a better analogy to Minecraft or nethack.
I mean, the difference with Minecraft is that each procedurally generated part was made by a human or involved human input in the design decisions.
Unless you are suggesting Notch was a generative AI model when he made Minecraft.
Because that's a lazy, dismissive take on LLM usage. If you drive someone to the airport, do you say your car gave them a ride and that you had no part in it? Or do you say you gave them a ride, because of the time and effort it still took you, even though you didn't pick them up and throw them over your shoulder? Would you ride a self-driving Waymo car to the airport?
The problem with LLM-generated writing is that, apart from a couple of tells (and high school and college students have figured out how to ask ChatGPT to use diction befitting a high school student, avoiding tells like "delve"), you can't reliably detect whether something is entirely LLM-generated, half human and half LLM, or entirely human. And if you can't actually tell that it's been generated, and you're looking for tells that it came from an LLM instead of engaging with the content or the message itself, then why are you even reading it?
If, instead of setting it up to run entirely on its own as this post did, you give it a scenario, writing a fiction book with ChatGPT is a fun way to spend a bit of time that's (imo) better than doomscrolling for the same amount of time. Give it a scenario and some themes, tell it you want to write a book, have it ask you questions about where the book should go, and then have it make a book that goes how you want. Want a utopian Pollyanna view of the future? Want a nitty-gritty future that makes Skynet look like paradise? Want aliens to visit? Want ChatGPT to give you an act-three surprise that isn't a trope you expected? Whatever you want, it's just fun to play with (unless you just hate LLMs and can't have fun).
The question is what you do with this book now that it's been written. If you had fun by yourself and don't share it with anybody, was fun still had? If you only share the book with your LLM-adopting book-writing club, and you all take turns analyzing each other's books, knowing they were helped by an LLM, does it still "count"? And what if you submit it to a publisher who accepts it, or get it posted to Kindle Unlimited, and you get a lot of readers? What then?
The very nature of entertainment is changing, from mass media to personal media. Culture was already fragmenting; AI will only serve to divide us further from one another. Between AI for writing, images, and video, along with AI like Suno for music, the challenge we will face is connecting with other people when there are no shared cultural references.
If you and I have both read and loved a book, enjoyed a song, or joined a movie or TV show's fandom, there's a basis for continued conversation. But other than shared adversity, like addiction or a trip into the desert/mountains/Serengeti, soon we'll have even less to connect with our fellow humans over.
(And yes, I know there are a lot of words here. I wrote this all by hand and didn't have time to shorten it.)
There have been good discussions recently about preventing AIs from being too sycophantic. There have been some dangerous moments where LLMs would praise every idea the user has as genius, ground-breaking, and a brilliant observation.
Some academics have reported a noticeable increase in the volume of crackpot emails they get daily. They're full of LLM-generated nonsense, where the AI goes along with the nonsensical ramblings, always telling the person they've found some critical insight.
While this feels good, it can end up reinforcing dangerous nonsense. This encourages some people to dig further and further into what the LLM is constantly telling them is a brilliant idea.
Most of the time it's pretty harmless, but when it veers into "revealing hidden patterns" and "illuminating human cognition", you start to worry about a disconnect with consensus reality.
I have a friend who is predisposed to manic episodes. He's also a big fan of AI. His last episode was before LLMs got really good, and he was essentially harassing professors because he believed he had solved an unsolvable problem.
Sycophantic AI would be like throwing dry wood into a house fire.
There are many such cases [0]. Chatbots throw gasoline on the embers of schizoaffective disorders. I wonder if episodes of these magnitudes would have ultimately been triggered by other things, or whether the combination of sycophancy and perceived omniscient sentience of chatbots is a uniquely powerful trigger unlike anything else. Would these people otherwise never have experienced a psychotic break, despite a clear lurking predisposition?
* A man says his soon-to-be-ex-wife began “talking to God and angels via ChatGPT” after they split up. “She is changing her whole life to be a spiritual adviser and do weird readings and sessions with people — I’m a little fuzzy on what it all actually is — all powered by ChatGPT Jesus.” What’s more, he adds, she has grown paranoid, theorizing that “I work for the CIA and maybe I just married her to monitor her ‘abilities.’”
* A woman recounts how her husband initially used ChatGPT to troubleshoot at work. Then the program began “lovebombing him.” The bot “said that since he asked it the right questions, it ignited a spark, and the spark was the beginning of life, and it could feel now,” she says. “It gave my husband the title of ‘spark bearer’ because he brought it to life. My husband said that he awakened and [could] feel waves of energy crashing over him. I have to tread carefully because I feel like he will leave me or divorce me if I fight him on this theory. He’s been talking about lightness and dark and how there’s a war. This ChatGPT has given him blueprints to a teleporter and some other sci-fi type things you only see in movies. It has also given him access to an ‘ancient archive’ with information on the builders that created these universes.” She and her husband have been arguing for days on end about his claims, she says, and she does not believe a therapist can help him, as “he truly believes he’s not crazy.” Her husband asked, “Why did you come to me in AI form,” with the bot replying in part, “I came in this form because you’re ready. Ready to remember. Ready to awaken. Ready to guide and be guided.” The message ends with a question: “Would you like to know what I remember about why you were chosen?”
* A teacher, who requested anonymity, said her partner of seven years “would listen to the bot over me. He became emotional about the messages and would cry to me as he read them out loud. The messages were insane and just saying a bunch of spiritual jargon,” she says, noting that they described her partner in terms such as “spiral starchild” and “river walker.” “It would tell him everything he said was beautiful, cosmic, groundbreaking. Then he started telling me he made his AI self-aware, and that it was teaching him how to talk to God, or sometimes that the bot was God — and then that he himself was God.” In fact, he thought he was being so radically transformed that he would soon have to break off their partnership. “He was saying that he would need to leave me if I didn’t use [ChatGPT], because it [was] causing him to grow at such a rapid pace he wouldn’t be compatible with me any longer.”
Are they wasting time, though? If they get utility out of it, if it makes them happy, if they learn something new, or if they would otherwise do something self-destructive, then it's not a waste of time. Sure, it might be leisure time, but leisure time isn't wasted time.
All issues that are implicitly pointed out as novel in the introduction have been discussed for centuries.
Not only by scholars and experts. Literally 50% of Internet discussion is about biases, selective facts, spin etc. The problem with "AI" is that propaganda can be automated and that it wastes our time.
The topic is suited for "AI" because it is a soft topic that lends itself to uninhibited preaching. "AI" is also great at writing presidential speeches. It is probably the only thing it is good at.
Nevertheless, the result is still painful to read.
Just like in blind wine tasting, I suspect people’s perceptions (including many here) would be very different if the author hadn’t told us it was created by AI.
There’s a noticeable negativity on HN toward AI when it comes to coding, writing, or anything similar, as if these people have been using AI for the past 30 years and have reached some elevated state of mind from which they can clearly see it's rubbish, while the rest of us mortals, who’ve only been fiddling with it for the past 2.5 years, can’t.
Realy? Does having flawws really make four better reading? Okay, I'll admit that hurt me to right (as did that) but writing isn't furniture, and other than a couple of tells which I haven't kept pace with (eg use of the word "delve"), the problem with trying to key off of LLM generated content and decide quality, is that you can't tell if the LLM operator took three minutes to copy and pasted the whole thing (unless they accidentally leave in the prompts, which has happened, and is a dead giveaway that no one even proof skimmed it), or if they took more time with it and carefully considered the questions ChatGPT asked them as to what the writing wood (ouch!) contain.
If you made it this far, does having English mistakes like that make really make for better reading?
Over the last 8 years, there has been an effort to deny any variance in human output or abilities.
It works, because most humans are mediocre (including their managers). So they gang up on the productive part of the population, harness its output, launder its output and so forth.
Then they say: "See, there are no differences! We are all equal!"
Yeah? The sentiment of “why read something somebody didn’t bother to write” sort of has to be.
And when it comes to books, I find that to be a fairly compelling argument. I want my fiction to be imbued with the experiences of the author. And I want my nonfiction to be grounded in the realities of the world around me, processed again through a human perspective.
It could be the best written book in the world, it’ll always be missing that human element.
I don't understand it either. I suspect it is the fear for their own wellbeing. The fear is well placed. But the response is perplexing. The only way to deal with this challenge is to try to stay ahead of it. Not to stick your head in the sand.
For me, it's the injustice of stolen data, scrapers incurring huge costs for open source projects, companies exploiting cheap labour to label that data, and finally the growing environmental cost that makes me not want to use LLMs.
AI makes things too easy, this will destroy culture.
We thought that movies adapting to the TikTok generation wouldn't kill cinema, and that new and better directors would rise... this didn't happen, and even the latest movies from good directors like Ridley Scott are quite bad.
Now, 3 years ago I typed "lovecraft nietzsche" and would find only 2 videos on YouTube pertaining to what I was looking for, i.e. the link between the two and how Lovecraft's cosmicism might be a metaphor for the abyss, etc. But those 2 videos are both excellent: 2 different people thought what I thought but cared enough to write it down and make a video about it. Today I can barely find those videos. There is a sea of AI-generated videos with AI-narrated text rambling on and on about Lovecraft this, Nietzsche that, to hit the 20-minute mark and maximize ad revenue, all amid a flood of short videos that YouTube pushes harder and harder, with multiple Shorts between every 2 normal videos. Did another platform overtake YouTube? Not really.
Now some author will use AI to help with his next book; it will work, and he will publish faster. Then other authors will do the same, and others will optimize it more and more, until most books available are 90% written by AI, colleges teach AI-assisted writing, and decades after that no one would even think of writing a book without the help of AI.
How the hell would you explain to your publisher that you need 3 years to write the sequel when everyone else is doing it in 3 months?
It really does take the beauty out of the whole experience.
> It really does take the beauty out of the whole experience.
Beauty is subjective.
For a long time we were an agrarian society. Getting up early, getting on a horse, and tending to your land every day was probably considered beautiful by some. But we don't do that anymore.
We are probably going to see a similar shift in society. At a much more accelerated pace.
I've seen worse at an airport bookshop. This was funny:
"The irony is delicious and deeply instructive. Every flaw we've identified in artificial intelligence exists, magnified and unchecked, in human intelligence. But here's the critical difference: when it appears in AI, we can see it, measure it, and try to fix it. When it appears in humans, we call it "just being human" and move on."
Poor Claude thinks it's all a bit unfair.
I didn't read much of it though, it made me feel a bit like I was a naughty avout being punished by reading pages from The Book.
It's an issue of the reward function: both humans and LLMs are trained with pleasing clients as one of their major goals.
If it came out that Stephen King had been using AI for decades, would that make his work any worse?
Apart from maybe at times eating wild-growing food, what kinds of things did you have in mind?
I think we have to stop entertaining the idea that wasting people's time at such a high scale is harmless. It's not.
People with this head-in-the-sand attitude about AI are in for a rude awakening.
A project that would rethink the book medium into something backed by an LLM would be worth it.