Readit News
leni536 · 7 months ago
My gripes with AI slop:

* Insincerity. I would prefer a disclaimer that you posted AI-generated content over having it presented as your own words.

* Imbalance of effort. If you didn't put in the effort to write to me in your own words, then you are wasting the effort I spend reading what your AI assistant wrote.

* We have access to the same AI assistants. Don't try to sell me your AI assistant's "insights" as your own insights. I can interact with the same AI assistant to gather the same insights.

Notice that the quality of the AI output is mostly irrelevant to these points. If you have good-quality AI output then you are still welcome to share it with me, provided you are upfront that it is AI generated.

Cpoll · 7 months ago
> We have access to the same AI assistants. Don't try to sell me your AI assistants "insights" as your own insights. I can interact with the same AI assistant to gather the same insights.

With AI, the insights (or "insights") depend on what questions you ask, and what you know to push back on. We all have access to the same IDEs, but we don't write the same code.

With that said, I agree with your points.

joe_the_user · 7 months ago
I don't think AI output on some factual topic is comparable to distinct things written with IDEs.

On a given topic, I have always found that AI converges on the average talking points of that topic, and you really can't cleverly get more out of it, because that's all it "knows" (i.e., pushback gets either variations on a theme or hallucinations). And this is logical, given that the method is essentially "produce the average expected reply".

therein · 7 months ago
> With AI, the insights (or "insights") depend on what questions you ask

Which is an interesting place to put the human. You can be fooled into thinking that your question was unique and special just because it led some black box to generate slop that looks like it has insights.

This explains why we have people proudly coming in and posting the output they got their favorite black box to generate.

majormajor · 7 months ago
> With AI, the insights (or "insights") depend on what questions you ask, and what you know to push back on. We all have access to the same IDEs, but we don't write the same code.

This is also true if you don't have "AI" but are simply reading sources yourself.

Is AI going to help you realize you need to push back on something you wouldn't have pushed back on without it?

lupusreal · 7 months ago
Not knowing exactly how/what the other person asked their AI is one of the reasons I downvote all AI slop, even posts disclosed as AI generated. Asking in different ways can often generate radically different answers. Even if the prompt is disclosed, how do I know that was the real prompt? I would have to go interrogate the AI myself to see if I get something similar, as well as formulate my own prompts from different angles to see how much the answers change. And if I have to put in all that effort myself, then what is the value of the original slop post?
yapyap · 7 months ago
> With AI, the insights (or "insights") depend on what questions you ask, and what you know to push back on. We all have access to the same IDEs, but we don't write the same code

Yeah no, an AI is not gonna give you a brilliant answer because you wrote such a brilliant prompt; you just wrote a different question and got a different answer. Like if I type something into Google I don't get the same result as when you type something into Google. Why? Because we're not asking the same damn questions.

brookst · 7 months ago
I empathize with the obsession (we all have some obsessive behaviors we’re not thrilled with) but I question the utility.

It feels like some kind of negative appeal to authority: if the words were touched by an AI, they are less credible, and therefore it pays to detect AI as part of a heuristic to determine quality.

But… what if the writer just isn’t a native speaker of your language? Or is a math genius but weak with language? Or…

IMO human content is so variable in quality that it is incumbent on readers to evaluate based on content, not provenance. Using an author’s tools, or ethnicity, or sociowhatever as a proxy for quality doesn’t seem healthy or productive at all.

tarkin2 · 7 months ago
I would rather see the errors a non-native speaker would make than wade through grammatically correct but generic, meaningless generated business speak in an attempt to extract meaning. When you sound like everyone else, you sound like you have nothing new to say: a linguistic Soviet Union, bland, dull, depressing.

I think there's a bigger point about coming across as linguistically lazy, copying and pasting text without critiquing it, akin to copying and pasting a Stack Overflow answer, which gives rise to possibly unfair intellectual assumptions.

ewoodrich · 7 months ago
Your comment reminded me of an account I saw in a niche Reddit sub for an e-reader brand that posted gigantic 8-paragraph "reviews" or "feedback for the manufacturer" with bullet points and a summary paragraph recapping all the previous feedback at the end.

They always had a few useful observations, but it required wading through an entire monitor's worth of filler garbage, which completely devalued the time/benefit of reading through something with such low information density.

It was sad because they clearly were very knowledgeable but their insight was ruined by prompting ChatGPT with something like "Write a detailed, well formatted formal letter addressed to Manufacturer X" that was completely unnecessary in a public forum.

mihaic · 7 months ago
I feel the need to paraphrase the Ikea scene in Fight Club: "sentences with tiny errors and imperfections, proof that they were made by the honest, simple, hardworking people of... wherever"
Kiro · 7 months ago
Non-native speakers may not want to make errors. I want to post grammatically correct comments. This is even more true for texts that carry my real name. It's not just about the receiver.
bostik · 7 months ago
Not quality. Accountability.

I work in (okay, adjacent to) finance. Any communications that are sent / made available to people outside your own organisation are subject to being interpreted as legally binding to various degrees. Provenance of any piece of text/diagram is vitally important.

Let's pair this with a real life example: Google's Gemini sales team haven't understood the above. Their splashy sales pitch for using Gemini as part of someone's workflow is that it can autogenerate document sections and slide decks. The idea of annotating sections based on whether they were written by a human or an unaccountable tool appeared entirely foreign to them.

(The irony is that Google would be particularly well placed to have such annotations. Considering the underlying data structures are CRDTs, and they already show who made any given edit, including an annotation of whether a piece of content came from a human or a bot should be relatively easy.)
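To make that concrete, here is a minimal sketch of what a provenance-tagged edit record could look like, in TypeScript. This is purely illustrative; the field names are hypothetical and not Google's actual schema:

    // Hypothetical per-edit record in a CRDT-style edit log.
    // Collaborative editors already track authorship per edit;
    // flagging the origin is a small extension of the same idea.
    type Origin = "human" | "ai";

    interface Edit {
      author: string;     // account that applied the edit
      origin: Origin;     // typed by a person, or generated by a tool?
      inserted: string;   // text contributed by this edit
      timestamp: number;  // when the edit was applied
    }

    // Report which pieces of the final document came from an AI tool
    // rather than a human, for annotation or audit purposes.
    function aiAuthored(log: Edit[]): Edit[] {
      return log.filter((e) => e.origin === "ai");
    }

With per-edit origin flags like this, rendering a "written by a bot" annotation is just a filter over the edit log the editor already keeps.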

dale_glass · 7 months ago
I don't understand this argument. There is accountability: the user or their management can always be blamed.

Say one of my tasks is writing a document; I use an LLM and it tells people to eat rat poison.

But I'm accountable to my boss. My boss doesn't care a LLM did it, my boss cares I submitted something that horrible as completed work.

And if my boss lets that through then my boss is accountable to their boss.

And if my company posts that on the website, then my company is accountable to the world.

Annotations would be useful, sure. But I don't think for one minute they'd release you from any liability. Maybe they don't make it into the final PDF. Or maybe not everyone understands what they're supposed to take away from them. If you post it, you'll be held responsible.

yuliyp · 7 months ago
One issue is that AI skews the costs paid by the parties to the communication. If someone wrote something and then I read it, the effort I took to read and comprehend it is probably lower than the effort the author had to exert to create it.

On the other hand, with AI slop, the cost to read and evaluate is greater than the cost to create, meaning that my attention can be easily DoSed by bad actors.

memhole · 7 months ago
This is partly what led me to leave a job. Coworkers would send me their AI slop expecting me to review it. Management didn't care, as it checked the box. The deluge of information and the ease of creating it is what's made me far more sympathetic to regulation.
Sharlin · 7 months ago
Which is exactly the same problem as with spam.
XorNot · 7 months ago
Oddly enough, the thing is that LLM-generated text is far less likely to sound like a non-native speaker's writing. Once you sort of understand the differences in grammar rules, or just from experience, certain types of non-native English always have a feel to them which reflects the mismatch between two languages - i.e. Chinese-English rough translations tend to retain the Chinese grammar structure and also mix up formalisms of words.

LLM text just plain doesn't do this: LLMs are very good at writing perfectly formed English, but it just winds up saying nothing (and models like ChatGPT have been optimized so they end up having a particular voice they speak in as well).

Swizec · 7 months ago
> certain types of non-native english always have a feel to them which reflects the mismatch between two languages

This. My partner always speaks Frenglish (French English) after talking to her parents. You have to know a little French to understand her sentences. They're all English words, but the phraseology is all French.

I do the same with Slovenian. The words are all English, but the shape is Slovenian. It adds a lot of soul to your words.

It can also be topic dependent. When I describe memories from home in English, the language sounds more Slovenian. Likewise when I talk about American stuff to my parents, my Slovenian sounds more English.

ChatGPT would lose all that color.

Read The Man in the High Castle to see this for yourself. The whole book is English, but you can tell the different nationalities of the characters because the shape of their English changes. Philip K. Dick used this masterfully.

Tainnor · 7 months ago
An LLM certainly does write perfectly grammatical and idiomatic English (I haven't tried enough other languages to know whether this is true for, say, Japanese too). But regular people all have their own idiosyncratic styles: words and turns of phrase they like using more than others, preferred sentence structures and lengths, different levels of politeness, deference and assertiveness, etc.

LLM output to me usually sounds very sanitised style-wise (not just content-wise), some sort of lowest-common-denominator language, which is probably why it sounds so corporate-y. I guess you can influence the style by clever prompt engineering, but I doubt you'd get a very unique style this way.

thaumasiotes · 7 months ago
> Chinese-English rough translations tend to retain the Chinese grammar structure

Those would be _really_ rough translations. Yes, I've seen "It's an achieve my dream's place" written, but that was in an essay written for high school.

dale_glass · 7 months ago
LLMs do whatever you ask them to. They have a default, but they can be directed to use a different response style.

And of course you could build a corpus of text written by Chinese English speakers for more authenticity.

tdeck · 7 months ago
> But… what if the writer just isn’t a native speaker of your language? Or is a math genius but weak with language? Or…

All of these could apply to those YouTube videos that have synthesized speech, but I'll bet most of us click away immediately when we find the video we opened is one of those.

vunderba · 7 months ago
Agreed. Same reason I don't envision TTS podcasts taking off any time soon - the lack of authenticity is a real turn-off.
Kiro · 7 months ago
No, we clearly don't. They remain very popular.
JTyQZSnP3cQGa8B · 7 months ago
> what if the writer just isn’t a native speaker of your language [...] evaluate based on content

Evaluate as in "monetize" everything; that's how we ended up with this commercialized internet. The old web was about diversity and meeting new people all over the world. I don't care about grammar mistakes; they make us human.

codetrotter · 7 months ago
I find grammatical mistakes in non-native speakers endearing. Either when they speak English and are non-native speakers of English (I am too), or when they speak my native language and they are not native speakers of mine.

Especially when it’s apparent that it comes from how you would phrase something in the original language of the person speaking/writing.

Or as one might say: Especially when it is visible that it comes of how one would say something on mother’s language to the person that speaks or writes.

knightscoop · 7 months ago
I think the author does cover their bases there:

> To be clear, I fault no one for augmenting their writing with LLMs. I do it. A lot now. It’s a great breaker of writers block. But I really do judge those who copy/paste directly from an LLM into a human-space text arena.

When writing in my second language, I am leaning very heavily on AI to generate plausible writing based on an outline, after which I extensively tweak things (often by adversarial discussion with ChatGPT). It scares me that someone will see it as AI slop though, especially if the original premise of my writing was flimsy...

adeon · 7 months ago
I hope the article didn't make you feel bad and discourage you from writing. IMO what you are doing is not slop, and the author saying "I really do judge those who copy/paste directly from an LLM into a human-space text arena" is a pretty shallow judgement if taken at face value, so I'm hoping it was just some clumsy wording on their part.

---

When the AI hype started and companies started shoving it down everyone's throats, I also developed this intense reflexive negative reaction to seeing LLM text, much like what the author describes in the first paragraph. So many crappy start-ups and grifters, which I think I saw a lot of because I frequented the /r/localllama subreddit and generally followed LLM-related news, so I got exposed to the crap.

Even today I still get that negative reaction from seeing obvious LLM text, but it's a much weaker reaction now than it used to be, and I'm hoping it'll go away entirely soon.

The reason I want to change: my attitude shifted when I heard a lot more use cases like the one you describe, from people who really could use the help of an LLM. Maybe you aren't good with the language. Maybe you are insecure about your own ability to write. Maybe you aren't creative or articulate and you want to communicate your message better. Maybe you have 8 children and your life is chaos, but you need to write something regularly and ChatGPT cuts that time down a lot. Maybe your fingers physically hurt, or you have a disability and can't type well. Maybe you have a mental or neurological problem and can't focus or remember things, or you have dyslexia, or whatever. Maybe you are used to Google searching, now think Google results are kinda shit these days, and find a modern LLM is usually correct enough that it's just more practical to use. There are probably way more examples I can't think of.

None of these uses are "slop" to me, but they can result in text that looks like slop to people, because it might have an easily recognizable ChatGPT-like tone. If you get judged for using AI as a helping tool (and you are not scamming/grifting/etc.), then judge them back for judging you ;)

Also, I'm not sure "slop" has an exactly agreed-upon definition. I think of it as low-effort AI garbage, basically a use of LLMs as misdirection. Basically the same as "spam", but maybe with the nuance that now it's LLM-powered. Makes you waste time. Or tries to scam or trick you. I don't have a coherent definition myself. The author has a definition near the top of the page that seems reasonable, but the rest of the article didn't feel like it actually followed the spirit of said definition (like the judging-copy/paste part).

To give the author the benefit of good faith: I think they maybe wrote with an audience in mind of proficient English-speaking writers with no impediments to writing, assuming everyone knows how to "fix" LLM text with their own personal touch or whatever. Not sure. I can't read their mind.

I have a hope that genuine slop continues to be recognizable: even if I got a 10000x smarter LLM right now, ChatGPT-9000, could it really do much if I, as its user, continued to ask it to make crappy SEO pages or misleading Amazon product pages? The tone of the language might get more convincing, but savvy humans should still be able to read reviews, realize an SEO page has no substance, etc., regardless of how immaculate the writing itself is.

Tl;dr: keep writing, and keep making use of AI. I hope reading that sentence didn't actually affect you.

alkonaut · 7 months ago
False positives aren’t a big problem. There’s more content than I have time to read and my tolerance for reading anything generated is zero. So it’s better to label too much human content as generated and risk ignoring something insightful and human generated.
BlueTemplar · 7 months ago
Depending on the subfield it might not be true. It's also quite disheartening to find yourself in a social space where you realize that you are almost the only human left (it has happened to me twice already).
krisoft · 7 months ago
> False positives aren’t a big problem.

You will think that until something you wrote with your own mind and hands is falsely accused of being AI generated.

“Sorry alkonaut, your account has been suspended due to suspicious activity.”

“We have chatgpt too alkonaut! No need to copy paste it for us”

“It is my sad duty to inform you that we have reason to believe you have committed academic misconduct. As such we have suspended your maintenance grant, and you will be removed from the university register.”

p0w3n3d · 7 months ago
Content written by a non-native English speaker will (usually) have some errors. Content generated by ChatGPT-4 will have no errors, but will give the feeling that the person writing it was compelled to puke out more and more words.

tomjen3 · 7 months ago
I wrote (dictated, mostly, but still in my own words) a comment that I will eventually post on Hacker News[0], then ran it through ChatGPT with a prompt not much more complicated than "rewrite this in the style of a great hacker news comment".

The result hurt. Not because it was bad, but because it was better than I could do myself or even hope to do myself eventually.

I am sure the comment would be upvoted more after it had been run through the AI than before.

[0]: It addresses a common misconception that shows up often, but each time I see it I don't have the time to write the proper reply. I am not trying to astroturf HN.

mattigames · 7 months ago
The assumption that AI is gonna perfectly fill the gaps in the language abilities of anyone with a good idea but poor communication tools to explain it feels naive; among other issues, the more original and groundbreaking an idea is, the harder it will be for the machine to follow it, as it may deviate too much from its training dataset.
StefanBatory · 7 months ago
I'm not a native speaker.

I've been accused of being AI often because of that. :(

trod1234 · 7 months ago
You are right. There is very little utility.

These people are not domain experts, and they often latch onto structure or happenstance that is quite common (in the overall picture), and consider anything out of the ordinary to be AI slop. It's a false-justification loop, which breaks their perception.

The first half of the last century (1900s-1940s) was a time when hyper-rationalism played an important role, including in winning WW2. Language use in published works and in academia at that time favored words with distinct meanings; they were sometimes uncommon words, but that allowed a rigorous approach to communication.

Today we have words which can have contradictory meanings in different contexts, rooted in ambiguity, where the same word means two things simultaneously without further information. AI often can't figure out the context in these cases and hallucinates, whereas the context can in some cases be clear to a discerning human reader.

I have seen it more than a few times: people misidentifying these clear-cut cases of human consistency as AI-generated slop. There is a lot of bias in perception that makes this a common issue.

In my opinion, the exercise the article's author suggests is simply fallacy following a deluded spiral into madness.

Communication is the sharing of a consistent meaning. Consistency plays a big role in that.

People can talk about word counts, frequency, word choices, etc., and in most cases it's fallacy, especially when there is consistency in the meaning. They delude themselves, fueling the rather trite delusion that anything that looks different is in fact AI and not a real person.

It is sad that people can be so easily fooled, and false justification is one of the worst forms of self-violation since it warps your perception at a fairly low level.

computerthings · 7 months ago
> Using an author’s tools, or ethnicity, or sociowhatever as a proxy for quality

For me the rejection of it doesn't even depend on there being any author involved with it; it could just be running free, so to speak.

And language is very close to the ability to think and to even see the world around us. To just poison that well willy-nilly because "it's hard" is not a great argument. It's hard because it matters, and that's why learning a language and improving one's use of it is rewarding.

Personally I view machine translation that happens as part of a process of communication, an ongoing exchange between people or within a group (mathematicians, say) that involves feedback and clarification, as very different from using an LLM to create static "content". We were using DeepL and Google Translate long before any of this hype, and it was fine.

You asked, what if the writer isn't a native speaker of my language? But how would they even know my language? They only do in personal communication, in which case see above; otherwise, I don't want to read it. That is, people should write in languages they know, because those are the only ones they can proofread. That's the only way they can make sure the text is actually what they think it is. And others who are good at translating (be it software or a person) can translate it when needed. There is no need to destroy the original words and keep just the translation; at least, I have no need for that.

> If people cannot write well, they cannot think well, and if they cannot think well, others will do their thinking for them.

-- George Orwell

Yes, this is correlated with privilege. Life is still not fair. Which we fix or at least improve by making a fairer world where everybody has access to education and medicine, not by pretending you can just fake the process by having something that statistically could have been an outcome of the process, had it taken place.

The poorest and most vulnerable people will suffer the most in a world where money and bandwidth alone can buy you what people think, what they see, what drowns out any human voice trying to reach other humans. This is what billionaires clamor for, not the average person, at all.

afpx · 7 months ago
I have some anonymous accounts, and use AI to avoid being identified.
albert_e · 7 months ago
> Content that is mostly-or-completely AI-generated that is passed off as being written by a human, regardless of quality.

I think something does not necessarily need to be "passed off as being written by a human" -- whether covertly, implicitly or explicitly -- to qualify as AI slop.

There are ample examples of content sites and news articles etc that shamelessly post AI generated content -- without actively trying to claim it as human generated. Some sites may even have a disclaimer that they might sometimes use AI tools.

Still, slop is slop, because we are subjected to it and have to be wary of it, filter through it to separate out the low-quality, low-effort content, and expend mental energy on all this while feeling powerless, etc.

pockmarked19 · 7 months ago
Why care whether something is AI slop or human slop? It’s not worth reading in either case.

The arguments presented here look suspiciously like the arguments scribe classes of old used against the masses learning to read and write.

Seems like we’ve gotten to the point where sloppy writers are worse than LLMs and assume that all “meticulous” writers are LLMs. The only convincing “tell” I have ever heard tell of: characters such as smart quotes, but even those can just be a result of writing in a non-standard or “fancy” editor first. I’ve even seen people say that em dashes are indicative, I guess those people neither care about good writing nor know that em dashes are as easy as option + shift + hyphen on a Mac.

vunderba · 7 months ago
Because AI slop can be generated in massive quantities that dwarf anything prior in history. Sifting through these bales of hay is honestly exhausting.
nikau · 7 months ago
> Why care whether something is AI slop or human slop? It’s not worth reading in either case.

The problem is human slop is typically easy to detect by grammar and other clues.

AI slop is often confidently incorrect.

xmprt · 7 months ago
> human slop is typically easy to detect by grammar and other clues

I'm not sure this is true. There have been a lot of times where I see a very well-made video or article about some interesting topic, but when I go to the comments I end up finding corrections or realizing that the entire premise of the content was poorly researched yet well put together.

brookst · 7 months ago
Nit: em dashes also appear when you type two dashes in a row in Word or most any other MS product. That's a terrible heuristic.
BrouteMinou · 7 months ago
I am proof of that.

I write in Word to correct my text when I use my PC. Additionally, it's a better editor than the one supplied for commenting...

BlueTemplar · 7 months ago
The new AZERTY layout has a bunch of typographically correct punctuation too:

https://norme-azerty.fr/en/

Nullabillity · 7 months ago
> I’ve even seen people say that em dashes are indicative, I guess those people neither care about good writing nor know that em dashes are as easy as option + shift + hyphen on a Mac.

They are virtually indistinguishable from regular dashes (unless you're specifically looking for them), and contribute nothing of significant value to the text itself. They were only ever a marker of "this is either professionally edited, or written by a pedant".

voidhorse · 7 months ago
> Why care whether something is AI slop or human slop? It’s not worth reading in either case.

That's not always true, and this is one of the fundamental points of human communication that all the people pushing for AI as a comms tool miss.

The act of human communication is highly dependent on the social relationships between humans. My neighbor might be incapable of producing any writing that isn't slop, but it's still worth reading and interpreting because it might convey some important beliefs that alter my relationship with my neighbor.

The problem is, if my neighbor doesn't write anything other than a one-sentence prompt and doesn't critically examine the output before giving it to me, it violates one of the basic purposes of human-to-human communication—it is effectively disingenuous communication. It flies in the face of the key assumptions of rational conversation outlined by Habermas.

oneeyedpigeon · 7 months ago
I'm pretty sure that anyone saying "em dashes are a tell" doesn't mean only the literal character, but also the double hyphen or even single hyphen people often use in its place.
tokioyoyo · 7 months ago
Human slop is still written by humans, implying there was effort and labour. Not sure how to put it in words properly, just "the vibes" are different.
persnickety · 7 months ago
<Compose>--. on Linux.
aragilar · 7 months ago
That's an en-dash, not an em-dash. – vs —
pona-a · 7 months ago
Hyper + Shift + -
JohnMakin · 7 months ago
Respectfully, I disagree with some of the conclusions, but agree with the observations.

It seems obvious to me that the slop started ingesting itself, regurgitating and degrading in certain spaces. LinkedIn in particular has been very funny to watch; that part rang very true. However, the gold mine that companies hosting spaces like this are realizing they're sitting on isn't invasive user-data manipulation (which they'll do anyway) but high-quality tokens to feed back into the monster that devoured the entire internet. There's such a clear, obvious difference in the quality of training data scraped online depending on how bad the bot problem is.

So, all this to say: if you're writing well, don't give it out for free. I'm trying to create a space where people can gather, like the RSS feeds mentioned, but where they own their own writing and can profit off it if they opt in to letting it be trained on. It sounds a lot easier than it is; the problem is a little weird.

The weirdest thing to me lately is that bad writing with lots of typos tends to get promoted more, I think because of the naive assumption that it's more likely to be a real "human", kind of like a reverse reverse Turing test. Utterly bizarre.

teraflop · 7 months ago
> I’m trying to create a space where people can gather like the RSS feed mentioned, but where they own their own writing, and can profit off of it if they want to opt in to letting it be trained. It sounds a lot easier than it is, the problem is a little weird.

I mean, maybe I'm just defeatist, but it sounds near-impossible to me. The companies that train AI models have already shown that they don't give a damn about creator rights or preferences. They will happily train on your content regardless of whether you've opted in.

So the only way to build a "space" that prevents this is by making it a walled garden that keeps unauthorized crawlers out entirely. But how do you do that while still allowing humans in? The whole problem is that bots have gotten good enough at (coarsely) impersonating humans that it's extremely difficult to filter them out at scale. And as soon as even one crawler manages to scrape your site, the cat's out of the bag.

You can certainly tell people that they own their content on a given platform, but how can you hope to enforce that?

Retric · 7 months ago
The counter is poisoning the well of training data, not trying to hide it.

Crawling the web is cheap. Finding hidden land mines in oceans of data can be next to impossible because a person can tell if something isn’t being crawled but they can’t inspect even a tiny fraction of what’s being ingested.

bsnnkv · 7 months ago
> Undoubtedly, the sloppification of the internet will likely get worse over the next few years. And as such, the returns to curating quality sources of content will only increase. My advice? Use an RSS feed reader, read Twitter lists instead of feeds, and find spaces where real discussion still happens (e.g. LessWrong and Lobsters still both seem slop-free).

I had not heard of LessWrong before - thanks for the recommendation!

Whenever I see a potentially interesting link (based on the title and synopsis if one is available) I feed it into my comment aggregator[1] and have a quick scan through (mostly) human commentary before committing to reading the full content, especially if it is a longer piece.

The reasons behind this are twofold: one, comments from forums tend to call out AI slop pretty quickly; and two, even if the content body itself is slop, the interesting hook (title or summary) is often enough to spark some actually meaningful discussion on the topic that is worth reading.

[1]: https://kulli.sh
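For anyone who wants a rough DIY version of that triage step, here is a minimal sketch using only Hacker News's public Algolia search API (just one of the forums a proper aggregator would cover, and only an approximation of the flow described above):

    // Sketch: before reading an article, look up existing HN discussion
    // of its URL via the public Algolia search API, so human commentary
    // can be skimmed first.
    interface HNHit {
      title: string;
      objectID: string;    // HN item id
      num_comments: number;
    }

    async function discussionsFor(url: string): Promise<HNHit[]> {
      const api =
        "https://hn.algolia.com/api/v1/search?tags=story" +
        "&restrictSearchableAttributes=url&query=" +
        encodeURIComponent(url);
      const res = await fetch(api);
      const data = (await res.json()) as { hits: HNHit[] };
      return data.hits.filter((h) => h.num_comments > 0);
    }

    // Usage: list the threads worth scanning before committing to
    // reading the full piece.
    discussionsFor("https://example.com/article").then((hits) => {
      for (const h of hits) {
        console.log(
          `${h.title} (${h.num_comments} comments): ` +
            `https://news.ycombinator.com/item?id=${h.objectID}`
        );
      }
    });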

IshKebab · 7 months ago
Fair warning, the LessWrong people have a lot of very strange ideas. Even more than HN.
hatefulmoron · 7 months ago
I used to read a lot of LessWrong. These days I would recommend that people avoid it. The content is thought-provoking, written by well-meaning, intelligent people.

On the other hand, it's like watching people nervously count their fingers to make sure they're all still there. Or rather, it's not enough to count them; we have to find a way to make sure we can be confident in the number we get. Whatever benefit you get from turning off the news, it's 10x as beneficial to stop reading LessWrong.

bsnnkv · 7 months ago
This and the child comment are a great example of why I always read the comments first :)
rednafi · 7 months ago
I use LLMs as a moderately competent editor, but AI can’t be a substitute for thought. Sure, it can sometimes generate ideas that feel novel, but I find the disinfectant-laced, sanitary style of writing quite repulsive.

That said, we give too much credit to human writing as well. Have we forgotten about the sludge humans create in the name of SEO?

inglor_cz · 7 months ago
I went to ChatGPT 4 and asked it about my writing activity. It hallucinated three books that I have never written (though the topics are mostly really what interests me), and never mentioned any of the nine I actually did.

Whoa. Worse than I would have thought.

"Slop" is too nice a word for it.
