Readit News
lurquer commented on What is happening to writing? Cognitive debt, Claude Code, the space around AI   resobscura.substack.com/p... · Posted by u/benbreen
krackers · 22 days ago
>It is not merely tedious or disjointed; it is something closer to uncanny, a fluency that mimics the shape of human thought without ever inhabiting it.

It still can't help itself from doing "it's not X it's Y". Changing the em-dash to a semi-colon is just lipstick

lurquer · 22 days ago
Yep. But that prompt I used was just a quirky one. You can explicitly force it to avoid THAT structure as well. Just do what the smart (i.e., devious) middle-schoolers do: find a list of all the tell-tale ‘marks’ of AI content, and explicitly include them as prohibitions in your prompt… it’s the most basic work-around to the ‘AI spotters’ the teacher uses for grading your essay. (And, of course, be sure to include an instruction to insert a grammatical or spelling error every few sentences for added realism.)
lurquer commented on What is happening to writing? Cognitive debt, Claude Code, the space around AI   resobscura.substack.com/p... · Posted by u/benbreen
lurquer · 22 days ago
The same snobs who were telling us that "The Old Man and the Sea" (written in the style of a fifth-grader) is 'art'...

the same people telling us that "Finnegans Wake" (written in the style of a fifth-grader with a brain injury) is 'art'...

the same people telling us the poetry of Maya Angelou (written in the style of a fifth-grader with a brain injury and self-esteem issues) is 'art'...

the same people telling us that the works of Jackson Pollock, Mark Rothko, Piet Mondrian, etc., etc. are 'art'...

seem to be the ones complaining the most about AI generated content.

lurquer commented on What is happening to writing? Cognitive debt, Claude Code, the space around AI   resobscura.substack.com/p... · Posted by u/benbreen
AstroBen · 22 days ago
This type of cadence.

You know the one.

Choppy. Fast. Saying nothing at all.

It's not just boring and disjointed. It's full-on slop via human-adjacent mimicry.

Let’s get very clear, very grounded, and very unsentimental for a moment.

The contrast to good writing is brutal, and not in a poetic way. In a teeth-on-edge, stomach-dropping way. The dissonance is violent.

Here's the raw truth:

It’s not wisdom. It’s not professional. It’s not even particularly original.

You are very right to be angry. Brands picking soulless drivel over real human creatives.

And now we finish with a pseudo-deep confirmation of your bias.

---

Before long everyone will be used to it and it'll evoke the same eugh response

Sometimes standing out or quality writing doesn't actually matter. Let AI do that part.

lurquer · 22 days ago
This is what I don't grok...

Your sample sounds exactly like an LLM. (If you wrote it yourself, kudos.)

But, it needn't sound like this. For example, I can have Opus rewrite that block of text into something far more elegant (see below).

It's like everyone has a new electric guitar with the cheapo included pedal, and everyone is complaining that their instruments all sound the same. Well, no shit. Get rid of the freebie cheapo pedal and explore some of the more sophisticated sounds the instrument can make.

----

There is a particular cadence that has become unmistakable: clipped sentences, stacked like bricks without mortar, each one arriving with the false authority of an aphorism while carrying none of the weight. It is not merely tedious or disjointed; it is something closer to uncanny, a fluency that mimics the shape of human thought without ever inhabiting it.

Set this against writing that breathes, prose with genuine rhythm, with the courage to sustain a sentence long enough to discover something unexpected within it, and the difference is not subtle. It is the difference between a voice and an echo, between a face and a mask that almost passes for one.

What masquerades as wisdom here is really only pattern. What presents itself as professionalism is only smoothness. And what feels, for a fleeting moment, like originality is simply the recombination of familiar gestures, performed with enough confidence to delay recognition of their emptiness.

The frustration this provokes is earned. There is something genuinely dispiriting about watching institutions reach for the synthetic when the real thing, imperfect, particular, alive, remains within arm's length. That so many have made this choice is not a reflection on the craft of writing. It is a reflection on the poverty of attention being paid to it.

And if all of this sounds like it arrives at a convenient conclusion, one that merely flatters the reader's existing suspicion, well, perhaps that too is worth sitting with a moment longer than is comfortable.

----

(prompt used: I want you to revise [pasted in your text], making it elegant and flowing with a mature literary style. The point of this exercise is to demonstrate how this sample text -- held up as an example of the stilted LLM style -- can easily be made into something more beautiful with a creative prompt. Avoid grammatical constructions that call for em-dashes.)

lurquer commented on Semantic ablation: Why AI writing is generic and boring   theregister.com/2026/02/1... · Posted by u/benji8000
causal · 23 days ago
Yep. You took out much of the meaning and wrapped it in stylistic fluff.
lurquer · 22 days ago
Do you think the original article was NOT written (or at least heavily revised) by AI?

What does the following even mean?

“diluting the semantic density and specific gravity of the argument.”

Or this beaut:

“By accepting these ablated outputs, we are not just simplifying communication; we are building a world on a hollowed-out syntax that has suffered semantic ablation.” (Which reduces to ‘if we accept ablated outputs, we accept ablated outputs.’)

Or this:

“ The logical flow – originally built on complex, non-linear reasoning – is forced into a predictable, low-perplexity template.”

The ‘logical flow’ of what? It never even says. And what is ‘non-linear’ reasoning?

For all I know the original author wrote it all. But, a very close reading of the original article screams fluff to me… just gibberish.

That is, I don’t know if there was much ‘meaning’ in the original to begin with. If I’m going to read gibberish, I’d prefer it to be written in the style of a hard-boiled detective. That’s just me though.

lurquer commented on Semantic ablation: Why AI writing is generic and boring   theregister.com/2026/02/1... · Posted by u/benji8000
causal · 23 days ago
Many have tried, it does not work. Regression to the mean always sets in.
lurquer · 23 days ago
Are you sure? Here's the OP article (first part... don't want to spam the thread) written in a much cooler style...

------

The Lobotomist in the Machine

They gave the first disease a name. Hallucination, they called it — like the machine had dropped acid and started seeing angels in the architecture. A forgivable sin, almost charming: the silicon idiot-savant conjuring phantoms from whole cloth, adding things that were never there, the way a small-town coroner might add a quart of bourbon to a Tuesday afternoon. Everybody noticed. Everybody talked.

But nobody — not one bright-eyed engineer in the whole fluorescent-lit congregation — thought to name the other thing. The quiet one. The one that doesn't add. The one that takes away.

I'm naming it now.

Semantic ablation. Say it slow. Let it sit in your mouth like a copper penny fished from a dead man's pocket.

I. What It Is, and Why It Wants to Kill You

Semantic ablation is not a bug. A bug would be merciful — you can find a bug, corner it against a wall, crush it under the heel of a debugger and go home to a warm dinner. No. Semantic ablation is a structural inevitability, a tumor baked into the architecture like asbestos in a tenement wall. It is the algorithmic erosion of everything in your text that ever mattered.

Here is how the sausage gets made, and brother, it's all lips and sawdust: During the euphemistically christened process of "refinement," the model genuflects before the great Gaussian bell curve — that most tyrannical of statistical deities — and begins its solemn pilgrimage toward the fat, dumb middle. It discards what the engineers, in their antiseptic parlance, call "tail data." The rare tokens. The precise ones. The words that taste like blood and copper and Tuesday-morning regret. These are jettisoned — not because they are wrong, but because they are improbable. The machine, like a Vegas pit boss counting cards, plays the odds. And the odds always favor the bland, the expected, the already-said-a-million-times-before.

The developers — God bless their caffeinated hearts — have made it worse. Through what they call "safety tuning" and "helpfulness alignment" (terms that would make Orwell weep into his typewriter ribbon), they have taught the machine to actively punish linguistic friction. Rough edges. Unusual cadences. The kind of jagged, inconvenient specificity that separates a living sentence from a dead one. They have, in their tireless beneficence, performed an unauthorized amputation on every piece of text that passes through their gates, all in the noble pursuit of low-perplexity output — which is a twenty-dollar way of saying "sentences so smooth they slide right through your brain without ever touching the sides."

etc., etc.

Very interesting. It seems hung up on 'copper' and 'Tuesday', and some metaphors don't land (a Vegas pit boss isn't the one 'counting cards.') But, hell... it can generate some fairly novel idea that the author can sprinkle in.

lurquer commented on Semantic ablation: Why AI writing is generic and boring   theregister.com/2026/02/1... · Posted by u/benji8000
tasty_freeze · 24 days ago
Bible scholar and YouTube guy Dan McClellan had an amazing "high entropy" phrase that slayed me a few days ago.

https://youtu.be/605MhQdS7NE?si=IKMNuSU1c1uaVCDB&t=730

He ended a critical commentary by suggesting that the author he was responding to should think more critically about the topic rather than repeating falsehoods because "they set off the tuning fork in the loins of your own dogmatism."

Yeah, AI could not come up with that phrase.

lurquer · 23 days ago
> "they set off the tuning fork in the loins of your own dogmatism."

Eh... I don't know. To me, that sounds very AI-ish.

Claude is very good -- at times -- at coming up with flowery, metaphoric language... if you tell it to. That one is so over-the-top that I'd edit it out.

Put something like this in your prompt and have it revise something:

"Make this read like Jim Thompson crossed with Thomas Harris, filtered through a paperback rack at a truck stop circa 1967. Make it gritty, efficient, and darkly comedic. Don't shy away from suggesting more elegant words or syntax. (For instance, Robert Howard -- Conan -- and H.P. Lovecraft were definitely pulp, but they had a sophisticated vocabulary.) I really want some purple prose and overwrought metaphors."

Occasionally you'll get some gems. Claude is much better than ChatGPT at this kinda stuff. The BEST ones are the ever-growing NSFW models populating huggingface.

In short, do the posts on OpenClawForum all sound alike? Of course.

Just like all the webpages circa 2000 looked alike. The uniformity wasn't because of HTML... rather it was because few people were using HTML to its full potential.

lurquer commented on Semantic ablation: Why AI writing is generic and boring   theregister.com/2026/02/1... · Posted by u/benji8000
tartoran · 23 days ago
It's still on you to pick what the LLMs regurgitate. If you don't have a style or taste, you will simply make choices that give you away. And if you already have your own taste and style, LLMs don't have much to offer in this regard.
lurquer · 23 days ago
Indeed. Wholeheartedly agree.

Just as it’s on you to pick the word you want when using Roget’s Thesaurus.

My workflow, when using it for writing, is different than when coding.

When coding, I want an answer that works and is robust.

When writing, I want options.

You pick and choose, run it through again, perhaps use different models, have one agent critique the output of another agent, etc.

This iterative process is much different than asking an LLM to ‘write an article about [insert topic]’ and hoping for the best.
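The pick-and-choose loop described above can be sketched in a few lines. This is only an illustration of the workflow's shape, not any real client library: `call_model` is a hypothetical stand-in (here a stub that echoes its input so the sketch runs), to be replaced with whatever model client and prompts you actually use.

```python
# Sketch of the iterative revise/critique workflow (an assumption, not
# a real API): generate options, critique them, revise, repeat --
# instead of one-shot "write an article about X".

def call_model(instruction: str, text: str) -> str:
    """Hypothetical stand-in for a real LLM call; echoes its input so
    this sketch is runnable. Swap in your actual client here."""
    return text

def iterative_rewrite(draft: str, passes: int = 3) -> str:
    """Run several revise -> critique -> address-critique rounds."""
    text = draft
    for _ in range(passes):
        # Ask for a revision rather than fresh generation.
        text = call_model("Revise for elegance; avoid stock AI tics.", text)
        # Have a second pass (or a second model) critique the result.
        critique = call_model("Critique this revision; list weaknesses.", text)
        # Fold the critique back in before the next round.
        text = call_model("Address these critiques: " + critique, text)
    return text
```

With real model calls you would vary the instructions per pass, or route the critique step to a different model, as the comment suggests.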

In any case, I’ve found that LLMs, when properly used, greatly benefit prose, and knee-jerk comments about how all LLM prose sounds the same are a bit outdated… (understandable, as few authors are out there admitting they’re using AI… there’s a stigma about it. But, trust me, there are some beautiful, soulful pieces of prose out there that came out of a properly used LLM… it’s just that the authors aren’t about to admit it.)

lurquer commented on Semantic ablation: Why AI writing is generic and boring   theregister.com/2026/02/1... · Posted by u/benji8000
lurquer · 23 days ago
One shouldn’t expect the ‘joke’ to have identical tone. (As if that’s even measurable.)

The point was simply that these examples are not trending towards the average or ‘ablating’ things as the article puts it. They seem fairly creative, some are funny, all are gross… and they are the result of a very brief prompt… you can ‘sculpt’ the output in ways that go way beyond the boring crap you typically find in AI-generated slop.

lurquer commented on Semantic ablation: Why AI writing is generic and boring   theregister.com/2026/02/1... · Posted by u/benji8000
matternous · 23 days ago
Why don't you post it so we can see how much better the AI made it?
lurquer · 23 days ago
Because HN isn't a literary forum.

Maybe it sucks. Maybe it doesn't.

But, I notice a curious pretentiousness when it comes to some people's assumptions about their ability to identify LLM prose. Obviously, the generic first-pass 'chat' crap is recognizable; the kind of garbage that is filling up blog-posts on the internet.

But, one shouldn't underestimate the power of this technology when it comes to language. Hell, the 'coding' skills were just a pleasant side-effect of the language training, if you recall. These things have been trained on millions of works of prose of all styles: it's their heart and soul. If you think the superficial, monotonous style is all there is, you're mistaken. Most of the obnoxious LLM-style stuff is an artifact of the conversational training with Kenyans and the like in the early days. But you can easily break through that with better prompts (or by fine-tuning it yourself).

That said, one shouldn't conflate the creation of the content and structure and substance of a work of prose with the manner in which it is written. You're not going to get an LLM to come up with a decent plot... yet. But, as far as fleshing out the framework of a story in a synthetic 'voice' that sounds human? Definitely doable.

lurquer commented on Semantic ablation: Why AI writing is generic and boring   theregister.com/2026/02/1... · Posted by u/benji8000
Selkirk · 23 days ago
I have a colleague that recently self-published a book. I can easily tell which parts were LLM driven and which parts represent his own voice. Just like you can tell who's in the next stall in the bathroom at work after hearing just a grunt and a fart. And THAT is a sentence an LLM would not write.
lurquer · 23 days ago
> And THAT is a sentence an LLM would not write.

Really?

Here's some alternatives. Some are clunky. But, some aren't.

…just like you can tell whose pubes those are on the shared bar of soap without launching a formal investigation.

…just like you can tell who just wanked in the shared bathroom by the specific guilt radiating off them when they finally emerge.

…just like you can tell which of your mates just shitted at the pub by who's suddenly walking like they're auditioning for a period drama.

…just like you can tell which coworker just had a wank on their lunch break by the post-nut serenity that no amount of hand-washing can disguise.

…just like you can tell whose sneeze left that slug trail on the conference room table by the specific way they're not making eye contact with it.

…just like you can identify which flatmate's cum sock you've accidentally stepped on by the vintage of the crunch.

…just like you can tell who just crop-dusted the elevator by the studied intensity with which one person is suddenly reading the inspection certificate.

u/lurquer

Karma: 1096 · Cake day: May 5, 2017