This is a good statement of what I suspect many of us have found when rejecting the rewriting advice of AIs. The "pointiness" of prose gets worn away, until it doesn't say much. Everything is softened. The distinctiveness of the human voice is converted into blandness. The AI even says its preferred rephrasing is "polished" - a term which specifically means the jaggedness has been removed.
But it's the jagged edges, the unorthodox and surprising prickly bits, that tear open a hole in the inattention of your reader, that actually gets your ideas into their heads.
I think that mostly depends on how good a writer you are. A lot of people aren't, and the AI legitimately writes better. As in, the prose is easier to understand, free of obvious errors or ambiguities.
But then, the writing is also never great. I've tried a couple of times to get it to write in the style of a famous author, sometimes pasting in some example text to model the output on, but it never sounds right.
> I think that mostly depends on how good a writer you are. A lot of people aren't, and the AI legitimately writes better.
Even poor writers write with character. My dad misspells every 4th word when he texts me, but it’s unmistakably his voice. Endearingly so.
I would push back with passion that AI writes “legitimately” better, as it has no character except the smoothed mean of all internet voices. The millennial gray of prose.
> A lot of people aren't, and the AI legitimately writes better.
It may write “objectively better”, but the very distinct feel of all AI generated prose makes it immediately recognizable as artificial and unbearable as a result.
It depends on how you define "good writing", which is too often associated with "proper language", and by extension with proper breeding. It is a class marker.
People have a distinct voice when they write, including (perhaps even especially) those without formal training in writing. That this voice is grating to the eyes of a well educated reader is a feature that says as much about the reader as it does about the writer.
Funnily enough, professional writers have long recognised this, as is shown by the never-ending list of authors who tried to capture certain linguistic styles in their work, particularly in American literature.
There are situations where you may want this class marker to be erased, because being associated with a certain social class can have a negative impact on your social prospects. But it remains that something is being lost in the process, and that something is the personality and identity of the writer.
I am really conflicted about this because yes, I think that an LLM can be an OK writing aid in utilitarian settings. It's probably not going to teach you to write better, but if the goal is just to communicate an idea, an LLM can usually help the average person express it more clearly.
But the critical point is that you need to stay in control. And a lot of people just delegate the entire process to an LLM: "here's a thought I had, write a blog post about it", "write a design doc for a system that does X", "write a book about how AI changed my life". And then they ship it and then outsource the process of making sense of the output and catching errors to others.
It also results in the creation of content that, frankly, shouldn't exist because it has no reason to exist. The amount of online content that doesn't say anything at all has absolutely exploded in the past 2-3 years. Including a lot of LLM-generated think pieces about LLMs that grace the hallways of HN.
I think it’s essential to realize that AI is a tool for mainstream tasks like composing a standard email and not for the edges.
The edges are where interesting stuff happens. The boring part can be made more efficient. I don’t need to type boring emails, people who can’t articulate well will be elevated.
It’s the efficient popularization of the boring stuff. Not much else.
> The edges are where interesting stuff happens. The boring part can be made more efficient. I don’t need to type boring emails, people who can’t articulate well will be elevated.
I think that boring emails should not be written at all. What kind of boring email do you NEED to send, but not WANT to write? Those are exactly the kind of emails that SHOULD NOT be passed through an LLM.
If you need to say yes/no, you don't want to take the whole email conversation and let an LLM generate a story about why you said yes/no.
If you want to apply for leave, just keep it minimal: "Hi <X>, I want to take leave from Y to Z. Thanks". You don't want to create two pages of justification for why you want to take this leave to see your family and friends.
In fact, for every LLM output, I want to see the input instead. What did they have in mind? If I have the input, I can ask an LLM to generate a million outputs if I really want to read an elaboration. The input is what matters.
If I have the input, I can always generate an output. If I have the output, I don't know what the input (i.e. the original intention) was.
It contributes to making “standard” emails boring. I rather enjoy reading emails in each sender’s original voice. People who can’t articulate well aren’t elevated, instead they are perceived to be sending bland slop if they use LLMs to conceal that they can’t express themselves well.
Every group wants to label some outgroup as naively benefiting from AI. For programmers, apparently it's the pointy-haired bosses. For normies, it's the programmers.
Be careful of this kind of thinking, it's very satisfying but doesn't help you understand the world.
> But it's the jagged edges, the unorthodox and surprising prickly bits, that tear open a hole in the inattention of your reader, that actually gets your ideas into their heads.
This brings to mind what I think is a great description of the process LLMs exert on prose: sanding.
It's an algorithmic trend towards the median, thus they are sanding down your words until they're a smooth average of their approximate neighbors.
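The sanding metaphor can be made concrete with a toy model (purely illustrative, not how LLMs actually work): repeatedly replacing each value in a signal with the average of its neighbors wears down every spike until the whole thing converges toward its mean. The `sand` function and the sample `voice` signal below are made up for the demonstration.

```python
import statistics

# Toy "sanding" pass: replace each value with the average of itself
# and its immediate neighbors. The pointy outliers -- the "jagged
# edges" -- get pulled toward the global mean on every pass.
def sand(values, passes):
    for _ in range(passes):
        values = [
            statistics.mean(values[max(i - 1, 0): i + 2])
            for i in range(len(values))
        ]
    return values

voice = [1.0, 9.0, 2.0, 8.0, 1.0, 9.0]  # a distinctive, "pointy" signal
smoothed = sand(voice, 10)

# Variance collapses: the signal becomes a smooth average of its
# approximate neighbors, with little of the original shape left.
print(round(statistics.pvariance(voice), 2))
print(round(statistics.pvariance(smoothed), 2))
```

A few passes are enough to erase most of the contrast, which is the point of the metaphor: each averaging step is individually mild, but iterated, it leaves nothing but the median.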
No, but it's bad writing. It repeats information, it adds superfluous stuff, and it doesn't produce more specific ways of saying things. You are making it sound like it's "too perfect" when it's bland, because it's artificial dumbness, not artificial intelligence.
Well said. In music, it's very similar. The jarring, often out of key tones are the ones that are the most memorable, the signatures that give a musical piece its uniqueness and sometimes even its emotional points. I don't think it's possible for AI to ever figure this out, because there's something about being human that is necessary to experiencing or even describing it. You cannot "algorithmize" the unspoken.
I see it on recent blog posts, on news articles, obituaries, YT channels. Sometimes mixed with voice impersonation of famous physicists like Feynman or Susskind.
I find it genuinely soul-crushing and even depressing, but I may be oversensitive to it, as most readers don't seem to notice.
Maybe. Another potential, more positive, timeline is that semantically ablated content filling everyone’s feeds turns people off, and slowly kills the social feed paradigm.
I find it extremely difficult to focus on any piece of writing the moment I see the patterns. Can’t tell if it’s an attitude problem I need to get over or if it’s just that all AI writing really is that bad.
Same. It's showing how many people are not trying to participate, just to appear to. I want to read from and write for my peers, but it seems we are just awash with fakers.
It's almost disgusting to me, tbh. For the first time I find it actually easy to unplug and go do offline things; whatever I want to explore online is hidden behind a forest of synth slop I can't even be bothered to look at anymore.
I personally think "generative AI" is a misnomer. The more I understand the mathematics behind machine learning, the more I am convinced that it should not be used to generate text, images, or anything that is meant for people to consume, even the blandest of emails. Sometimes you might get lucky, but most of the time you only get what the most boring person at the most boring cocktail party would say if forced to be creative with a gun pointed at his head. It can help in a multitude of other ways, assisting the human in the creative process itself, but generating anything even mildly creative by itself… I'll pass.
Precisely. If companies would just focus on what it could be good at - deductive search, coding boilerplate with assistance, etc. - then it would be a great tool. Instead you have Dario, Altman, and co. trying to pump stock and give us more spaghetti agents.
He ended a critical commentary by suggesting that the author he was responding to should think more critically about the topic rather than repeating falsehoods because "they set off the tuning fork in the loins of your own dogmatism."
> "they set off the tuning fork in the loins of your own dogmatism."
Eh... I don't know. To me, that sounds very AI-ish.
Claude is very good -- at times -- at coming up with flowery metaphoric language... if you tell it to. That one is so over-the-top that I'd edit it out.
Put something like this in your prompt and have it revise something:
"Make this read like Jim Thompson crossed with Thomas Harris, filtered through a paperback rack at a truck stop circa 1967. Make it gritty, efficient, and darkly comedic. Don't shy away from suggesting more elegant words or syntax. (For instance, Robert Howard -- Conan -- and H.P. Lovecraft were definitely pulp, but they had a sophisticated vocabulary.) I really want some purple prose and overwrought metaphors."
Occasionally you'll get some gems. Claude is much better than ChatGPT at this kinda stuff. The BEST ones are the ever-growing NSFW models populating huggingface.
In short, do the posts on OpenClawForum all sound alike? Of course.
Just like all the webpages circa 2000 looked alike. The uniformity wasn't because of HTML... rather it was because few people were using HTML to its full potential.
I'm learning to like 'em more, along with every other human idiosyncrasy. Besides, it makes a kind of sense, the idea of some resonance occurring in one's gusset. Timber timbre. Flangent thrumming.
I thought it was more creative than sloppy. Don't forget that many ordinary phrases were once jarring mixed imagery. To "wear your heart on your sleeve" was coined by Shakespeare; we still use it because it "stuck" due to its unorthodox phrasing.
If you like your prose to be anodyne, then maybe you like what AI produces.
Yes, I noticed this as well. I was recently writing up a landing page for our new studio. Emotion-filled. Telling a story. I ran it through Grok to improve it. It removed all of the character, no matter what prompt I gave. I'm not a great writer, but I think those rough edges are necessary to convey the soul of the concept. I think AI writing is better used for ideation and "what have I missed?", and then you write out the changes yourself.
I've found LLMs to be terrible with ideation. I've been using GPT 5.x to come up with ideas and plot lines for a Dungeon World campaign I've been running.
I'm no fantasy author, and my prose leaves much to be desired. The stuff the LLM comes up with is so mind numbingly bland. I've given up on having it write descriptions of any characters or locations. I just use it for very general ideas and plot lines, and then come up with the rest of the details on the fly myself. The plot lines and ideas it comes up with are very generic and bland. I mainly do it just to save time, but I throw away 50% of the "ideas" because they make no sense or are really lame.
What I have found LLMs to be helpful with is writing up fun post-session recaps I share with the adventurers.
I recap in my own words what happened during the session, then have the LLM structure it into a "fun to read" narrative style. ChatGPT seems to prefer a Sanderson jokey tone, but I could probably tailor this.
Then I go through it and tweak some of the boring/bland bits. The end result is really fun to read, and took 1/20th the time it would have taken me to write it all out myself. The LLM would never have been able to come up with the unique and fun story lines, but it is good at giving an existing story some narrative flair in a short amount of time.
That's also my experience. I use AI to help me generate the overall structure of a narrative. Apart from the hallucinations (e.g. June is not in spring), it's OK at spotting inconsistencies and somewhat acceptable for brainstorming ideas if you're new to a certain genre, but the prose it generates (talking about Opus 4.6) feels like an interpolation of all existing texts.
YES, this hits the nail on something I've been trying to express for some time now. Semantic ablation: love it, and I'm going to use it a lot from now on when arguing why someone's ChatGPT-washed email sucks.
Semantic ablation is also why I'm doubtful of everyone proclaiming that Opus 4 would be AGI if we just gave it the right agent harness and let all the agents run free on the web. In reality they would distill it to a meaningless homogeneous stew.
> We are witnessing a civilizational "race to the middle," where the complexity of human thought is sacrificed on the altar of algorithmic smoothness.
This has long been the case in the area of "business English", which has become highly simplified to fulfill several concurrent, yet conflicting requirements:
- Generally understandable to a wide audience due to its lingua franca status
- "Media-trained" to not let internal details slip or admit fault to the public
- "Executive Summary"-fied to provide the coveted "30k ft view" to detail-allergic senior leadership
Considering how heavily language models' training data is weighted towards corporate press releases, general-audience news media, and SEO-optimized blogspam, AI English is quickly going to become an even more blurry photocopy of business English.
Which is the real issue, we’re flooding channels not designed for such low effort submissions. AI slop is just SPAM in a different context.
He lacks (or lost through disuse) technical expertise on the subject, so he uses more and more fuzzy words, leaky analogies, and buzzwords.
This may be why AI-generated content has so much success among leaders and politicians.
Maybe I'm going crazy but I can smell it in the OP as well.
And the worst part is no one will ever make a new internet, because of the founder effect. We are basically in the worst timeline.
I would rather read the prompt than the generative output, even if it’s just disjointed words and sentence fragments.
don't be mean, it's median AI à la mode
https://youtu.be/605MhQdS7NE?si=IKMNuSU1c1uaVCDB&t=730
Yeah, AI could not come up with that phrase.
Sounds like word salad. Of course if you write like GPT-2 it would not sound like current models.
It wanted to replace all the little bits of me that were in there.