It feels like some kind of negative appeal to authority: if the words were touched by an AI, they are less credible, and therefore it pays to detect AI as part of a heuristic to determine quality.
But… what if the writer just isn’t a native speaker of your language? Or is a math genius but weak with language? Or…
IMO human content is so variable in quality that it is incumbent on readers to evaluate based on content, not provenance. Using an author’s tools, or ethnicity, or sociowhatever as a proxy for quality doesn’t seem healthy or productive at all.
> To be clear, I fault no one for augmenting their writing with LLMs. I do it. A lot now. It’s a great breaker of writers block. But I really do judge those who copy/paste directly from an LLM into a human-space text arena.
When writing in my second language, I lean very heavily on AI to generate plausible writing based on an outline, after which I extensively tweak things (often by adversarial discussion with ChatGPT). It scares me, though, that someone will see it as AI slop, especially if the original premise of my writing was flimsy...
I’m actually having a really hard time thinking of an AI feature, outside of coding, that I actually enjoy. Copilot/Aider/Claude Code are awesome, but I’m struggling to think of another tool I use where LLMs have improved it. Autocompleting the next word of a sentence in Gmail/iMessage is one example, but that existed before LLMs.
I have not once used the features in Gmail to rewrite my email to sound more professional or anything like that. If I need help writing an email, I’m going to do that using Claude or ChatGPT directly before I even open Gmail.