Readit News
karaterobot · a month ago
I don't think the results of the last two decades of mass information dissemination have been all that great. People didn't trust anything on the internet before GPT, and the internet was already a cacophony of screeching voices. Smart people were already decoupling from social media and moving meaningful interaction to enclaves well before ChatGPT became a factor. ChatGPT did not ruin the internet; it was unleashed on an internet that was already broken.

If, as this article predicts, the result of GPT is that we don't trust information from the internet, and everybody moves away from it, that's great. Traditional journalism was better, as it turns out. Talking mainly to your friends rather than millions of people was better, as it turns out. I'm ready to go back to that, should it come to it.

But it won't. This essay is making a catastrophic prediction that won't come to pass. Whatever the future is, it's going to be something nobody is predicting yet. It'll be better than the doomsayers predict, and worse than what the cheerleaders say. It will be nothing like a simple magnification of the present concern over epistemology.

dns_snek · a month ago
You're confusing social media with the internet at large. "Traditional journalism" isn't a replacement for a niche blog writing about highly specific, highly detailed technical topics. What the blog post is describing isn't some bold prediction, it's already happening.

Today it's far more difficult (and personally quite frustrating) to find information written by actual people with actual experience on any given topic than it was 5 years ago, because for every one of those articles there are now 20 more written by LLMs, often outranking them. This frustration is only going to grow as the LLM proliferation continues.

Henchman21 · a month ago
Do you think this is also true of bookstores and libraries today?
andy99 · a month ago
I think it is an information virus, but differently - it's homogenized everything, and made people dumber and lazier. It's poisoned public and professional discourse by reducing writing and thinking from the richness of humanity to one narrow style with a tiny latent space, and simultaneously convinced people that this is what good writing looks like. And it's erased thought from broad classes of endeavor. This virus is much worse than the relatively benign symptoms described in the article.
A4ET8a8uTh0_v2 · a month ago
Like most progress, it made some things easier ( and some things worse as a result ). What I do find particularly fascinating is that it is doing that even in professions that should know better ( lawyers, doctors ). That my boss uses it is no surprise to me, though. I always suspected he never really read my emails.
kldg · a month ago
I've definitely been surprised by how it's being used; it's replacing people in places I don't think (even as a closet AI/LLM enthusiast) AI should ever be used: elder care, customer support (even on phone lines), for homework grading. -But I shouldn't have been so surprised, because some were already using robots for these tasks (or maybe not robots explicitly, but making CSRs/similar stick to scripts); my daughter was taking college placement tests recently -- even the essay questions were graded by software, and she's watched by software as she writes it. These things still seem to me like jobs which fundamentally require a human touch -- it's been especially amazing to me teachers are using AI to detect AI; you can't determine whether or not a robot wrote it, but you can assign a grade to it? Huh??

I have a very vocally anti-AI friend, but there is one thing he always goes on about that confuses me to no end: hates AI, strongly wants an AI sexbot, is constantly linking things trying to figure out how to get one, and asking me and the other nerds in our group about how the tech would work. No compromises anywhere except for one of the most human experiences possible. :shrug:

melagonster · a month ago
Their approach is similar to programmers'. A proficient user can tell at first glance whether the output is correct.
doctorpangloss · a month ago
People want AI lawyers and they really invented AI judges.
majormajor · a month ago
https://www.media.mit.edu/publications/your-brain-on-chatgpt... This seems relevant here in a "the results agree" way.
kordlessagain · a month ago
People have always tended toward taking shortcuts. It's human nature. So saying "this technology makes people dumber or lazier" is tricky, because you first need a baseline: exactly how dumb or lazy were people before?

To quantify it, you'd need measurable changes. For example, if you showed that after widespread LLM adoption, standardized test scores dropped, people's vocabulary shrank significantly, or critical thinking abilities (measured through controlled tests) degraded, you'd have concrete evidence of increased "dumbness."

But here's the thing: even the simplest derivative work, like a college research paper, has value depending on context. A student rewriting existing knowledge into clearer language does something useful, because they improve comprehension or provide easier access. It's still useful work.

Yes, by default, many LLM outputs sound similar because they're trained to optimize broad consensus of human writing. But it's trivially easy to give an LLM a distinct personality or style. You can have it write like Hemingway or Hunter S. Thompson. You can make it sound academic, folksy, sarcastic, or anything else you like. These traits demonstrably alter output style, information handling, and even the kind of logic or emotional nuance applied.
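A minimal sketch of what "prompting for style" amounts to in practice. The helper name and the style strings are hypothetical, and the role/content dictionaries just follow the common chat-API message convention rather than any particular vendor's SDK:

```python
# Hypothetical style-steering prompts; the same user request wrapped in
# different system prompts yields very different-sounding output.
STYLES = {
    "hemingway": "Write in short, declarative sentences. No adverbs.",
    "gonzo": "Write frantic first-person prose in the style of Hunter S. Thompson.",
    "academic": "Write formally, with hedged claims and careful qualifications.",
}

def styled_messages(style: str, request: str) -> list[dict]:
    """Build a chat-style message list that steers output style."""
    return [
        {"role": "system", "content": STYLES[style]},
        {"role": "user", "content": request},
    ]

msgs = styled_messages("gonzo", "Explain what a webring is.")
```

The point is that the "generic" voice is just the behavior you get when the system prompt is empty; the model happily follows whichever register it is handed.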

Thus, the argument that all LLM writing is homogeneous doesn't hold up. Rather, what's happening is people tend to use default or generic prompts, and therefore receive default or generic results. That's user choice, not a technological constraint.

In short: people were never uniformly smart or hardworking, so blaming LLMs entirely for declining intellectual rigor is oversimplified. The style complaint? Also overstated: LLMs can easily provide rich diversity if prompted correctly. It's all about how they're used, just like any other powerful tool in history, and just like my comment here.

majormajor · a month ago
We could wait for further studies, but some already exist: https://www.media.mit.edu/publications/your-brain-on-chatgpt...

You say it's human nature to take shortcuts, so the danger of things that provide easy homogenizing shortcuts should be obvious. It reduces the chance of future innovation by making it easier for more people to have their perspectives silently narrowed.

Personally I don't need to see more anecdotal examples matching that study to have a pretty strong "this is becoming a problem" leaning. If you learn and expand your mind by doing the work, and now you aren't doing the work, what happens? It's not just "the AI told me this, it can't be wrong" for the uneducated, it's the equivalent of "google maps told me to drive into the pond" for the white-collar crowd that always had those lazy impulses but overcame them through their desire to make a comfortable living.

momento · a month ago
"The style complaint? Also overstated: L[...]"

This is how I know this comment was written by an AI.

joegibbs · a month ago
It’s the latest in a series of homogenising inventions - the printing press, radio, television, the internet - that will probably result in people of the future speaking and thinking more similarly than today. First went minor languages, then dialects, now regional differences within languages. Next will probably be the difference between different English accents - I think by 2100 English speakers will all be speaking with a generically American accent no matter where they are on earth. Then next will probably be other national languages - 90% of Swedes and Dutch people already speak English.
monkaiju · a month ago
The printing press (and the others listed) aren't homogenizing; if anything, they're tools of diversification. They allowed far more novel ideas to be presented and distributed than before. AI, on the other hand, "distils" and "reduces" large amounts of heterogeneous information into much more homogenous slop.
3willows · a month ago
Perhaps that is the real danger. Everyone except a small elite who (rightly) feel they understand how LLMs work would simply give up serious thinking and accept whatever "majority" opinion is in their little social media bubble. We wouldn't have the patience to really engage with genuinely different viewpoints any more.

I recall some Chinese language discussion about the experience of studying abroad in the Anglophone world in the early 20th century and the early 21st century. Paradoxically, even if you are a university student, it may now be harder to break out of the bubble and make friends with non-Chinese/East Asians than before. In the early 20th century, you'd probably be one of the few non-White students and had to break out of your comfort zone. Now if you are Chinese, there'd be people from a similar background virtually anywhere you study in the West, and it is almost unnatural to make a deliberate effort to break out of that.

3willows · a month ago
The point being: when you find someone who is tailoring all his/her/its attention to you and you alone, why bother talking to anyone else.
computerex · a month ago
To be honest I think you and others overplay it. ChatGPT and LLMs in general sound pretty corporatey. A lot of written text online, at least in English, is pretty homogenous in style.
cal_dent · a month ago
I think there is some truth to this. But there is also another plausible scenario, where styles now change far quicker than we expect. We as a society get bored after a certain amount of time, and that time has potentially shortened as the pace of new output has increased so much. What would typically have taken, let's say, a decade and a half for people to get bored of (think about how all coffee shops started to copy Friends, and then the Instagram minimal-cafe aesthetic became a thing) is probably shorter now, because LLMs mean it'll be oversaturated very quickly.

The current style & cadence of LLM output is already getting tiring for many, so I'd expect a different style to take hold soon enough. And given LLMs can mimic any style, that is easy enough to do at scale and quickly. Then the cycle commences again, until someone comes up with a novel style of writing that people like and that the LLMs don't know yet....

Edit:

I also vaguely remember an article about the cultural impact of one of the early image-generation AIs, maybe DALL-E if memory serves me well. I remember very little of the article now except a comment an artist made, along the lines that in a few years image generation would be so good and realistic that a counterculture would inevitably emerge around nostalgia for the weird hallucinatory creations it used to make at the start, simply because they'd at least be more interesting. In a similar way you get the nostalgia for things like vinyl and handcrafted toys. I think about that aspect of it broadly a lot.

echo7394 · a month ago
The movie Idiocracy comes to mind almost every day for me as of late.
sho_hn · a month ago
What's weird is that so many people shrug this off with "eh, it's what they said about the calculator".

Which to me is roughly as bad a take as "LLMs are just fancy auto-complete" was.

I feel it's worth reminding ourselves that evolution on this planet has rarely opted for human-level intelligence, and that we possess it might just be a quirk we shouldn't take for granted. It may well be that we accidentally habituate and eventually breed ourselves dumber and subsist fine (perhaps in different numbers), never realizing what we willingly gave up.

Nevermark · a month ago
Our thumbs, ..., our intellect, and especially language, gave us an ecological/economic niche.

We became a technological species.

We observed, standardized and mechanized our environments to work for us. That is our niche.

But then things snowballed in the last couple of centuries. A threshold was crossed. Our technology became our environment, and we began adapting the environment for our technology's direct benefit, for our own indirect benefit.

Simple roads for us at first, then paved for mechanized contraptions. Wires for talking at first, then optimized for computers. We are now almost completely building out a technological world for the convenience and efficiency of the technology.

And once our technology frees us from dependence on others, a second threshold will be crossed. Then neither others nor the technology will need us.

I don't see a species of devolving humans, no longer needed by their creations, in a world now convenient for those creations, finding a happy niche.

If there is a happy landing, it will need to take a different route than that.

saghm · a month ago
It seems like a stretch to argue that we have any clue what the evolutionary consequences would be for something that's been around only a couple of years. Human-level intelligence took millions of years to evolve even when the lifespans of our ancestors were shorter than they are now, so trying to predict how something so new will affect the biology of future generations seems pretty much impossible to reason about. Even trying to predict how technology will affect society in a single generation is hard enough, and that's hardly long enough for any evolutionary changes to our intelligence as a species to become noticeable.
audinobs · a month ago
I don't know or really care what other people are doing with LLMs.

I have learned so much the past 2.5 years it is almost hard to believe.

To say I am getting dumber is just completely preposterous.

Maybe this would be leading me astray if I had the intelligence of Paul Dirac and I wasn't fully applying my intelligence. The problem is I don't have anything like the intelligence of Paul Dirac.

nunez · a month ago
People who make that retort forget that the calculator was immensely helpful but _also_ antiquated the need for mental math, which in my opinion is a bad thing. (Everyone should be able to calculate 5% and 10% of numbers, given how easy it is to do)
makk · a month ago
It hasn't homogenized everything. It's further exposed humans for who they are. Humans are the virus.
mensetmanusman · a month ago
(1999)
jvm___ · a month ago
Agent Smith had it right when he was interviewing Morpheus in the Matrix.
crimsoneer · a month ago
This is how the church felt about the printing press.
nerevarthelame · a month ago
While the church feared people interpreting information on their own, with LLMs it's the opposite: we fear that most interpretation of information will be done through a singular bland AI extruder. Tech companies running LLMs become the pre-press churches, with individuals depending on them to analyze and interpret information on their behalf.
majormajor · a month ago
The church would've LOVED everyone asking the same one-to-four sources for everything. ChatGPT is literally a controllable oracle. Quite the opposite of the printing press.

"Running your own models on your own hardware" is an irrelevant rounding error here compared to the big-company models.

toofy · a month ago
this would be the opposite. the llm situation may be heading back towards something similar to the church age.

the church did all of the reading and understanding for us. the owners of the church gobbled up as much information as they could (encouraging confessions) and then decided when, how, where, and which of that information flowed to us.

XorNot · a month ago
Who is "the Church" in this analogy?
mensetmanusman · a month ago
This analogy is going places.
ayaros · a month ago
We're going to have to go in the opposite direction and rely on directories or lists of verified human-made/accurate content. It will be like the old days of yahoo and web-indexes all over again.
DaveZale · a month ago
A few years ago, some talk briefly circulated about local internet efforts, possibly run by public libraries.

Local news coverage has really suffered these past several years. Wouldn't it be great to see relevant local news emerge again, written by humans for humans?

That approach might be a good start. Use a cloud service that forbids AI bot scraping to protect copyright?

ceejayoz · a month ago
> Wouldn't it be great to see relevant local news emerge again, written by humans for humans?

That sounds a lot like Nextdoor. With all the horrors that come with it.

ayaros · a month ago
This doesn't seem to be structured differently than a standard-fare social media app. All the same issues with human verification on those apps would apply to this too.

Unless you mean a platform only for vetted local journalists...

Footprint0521 · a month ago
I feel like SEO trash has made this a must have for me for the past few years already. If it’s not stack overflow, Reddit, or stack exchange, I’m wasting my time
ayaros · a month ago
Or MDN, which is yet another site that seems to be constantly ripped off by parasitic AI-generated SEO sites...
MPSimmons · a month ago
I had the thought the other day that one of the most valuable things a human-driven website could offer would be a webring linking to other human-driven websites
JKCalhoun · a month ago
I'm a fan of bringing back Web Rings.

Perhaps a site could kick off where people proposed sites for Web Rings, edited them. The sites in question could somehow adopt them — perhaps by directly pulling from the Web Ring site.

And while we're at it, no reason for the Web "Ring" not to occasionally branch, bifurcate, and even rejoin threads from time to time. It need not be a simple linked list whose tail points back to its head.

Happy to mock something up if someone smarter than me can fill in the details.

Pick a topic: Risograph printers? 6502 Assembly? What are some sites that would be in the loop? Would a 6502 Assembly ring have "orthogonal branches" to the KIM-1 computer (ring)? How about a "roulette" button that jumps you to somewhere at random in the ring? (So not linear.) Is it a tree or a ring? If a tree, can you traverse in reverse?
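The branching-ring idea above can be sketched as a small adjacency-list graph, where a plain ring is just the special case of one onward link per site. All site names below are made up for illustration:

```python
import random

class WebRing:
    """A webring generalized to a graph: each site can link onward to
    several sites, so rings can branch into 'orthogonal' rings and
    rejoin later. A classic ring is the case of exactly one link each."""

    def __init__(self):
        self.links = {}  # site -> list of onward sites

    def add_link(self, site, next_site):
        self.links.setdefault(site, []).append(next_site)
        self.links.setdefault(next_site, [])  # make destination known

    def next(self, site, branch=0):
        """Follow an onward link; branch > 0 takes an orthogonal ring."""
        return self.links[site][branch]

    def roulette(self):
        """The 'roulette' button: jump to a random site in the ring."""
        return random.choice(sorted(self.links))

ring = WebRing()
ring.add_link("6502-assembly.example", "kim-1.example")       # branch to the KIM-1 ring
ring.add_link("6502-assembly.example", "asm-tricks.example")  # main ring continues
ring.add_link("kim-1.example", "6502-assembly.example")       # KIM-1 ring rejoins
```

Traversing in reverse would just mean keeping a second adjacency list of incoming links; whether that makes it a tree or a ring depends entirely on which links the curators add.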

ayaros · a month ago
That's something I really enjoy about web 1.0: links pages. We need to bring back the days when every site had a giant list of links to other sites. I don't care if half of them end up as dead links. This is part of what made the web fun. You'd come across a site, see what it had to offer, and then you'd check the links page and find five, ten, or 20 other sites offering similar things. No need for algorithms tracking your every move to recommend things to you... the content itself would do that.
ayaros · a month ago
(To clarify, I'm not suggesting this is necessarily a bad thing)
johnnienaked · a month ago
It brings me back to the Simpsons episode where the Itchy and Scratchy writers go on strike. What follows is a beautiful scene of children rubbing their eyes in unfamiliar sunlight as they're forced to go outside, making up games, playing on playgrounds, all while Beethoven's Pastorale hums in the background.

I'm all for it. Let big tech destroy their cash cow, then maybe we can rebuild it in OUR interest.

jcalx · a month ago
Alternatively stated:

> The only explanation is that something has coded nonsense in a way that poses as a useful message; only after wasting time and effort does the deception become apparent. The signal functions to consume the resources of a recipient for zero payoff and reduced fitness. The signal is a virus.

> Viruses do not arise from kin, symbionts, or other allies.

> The signal is an attack.

―Blindsight, by Peter Watts

TZubiri · a month ago
Is that the sci fi novel about an alien race that is annoyed and feels attacked by noise?
lelandbatey · a month ago
Nope it's the book about the concept of a true turing-machine like alien being which has no "consciousness", it's mechanistic but insanely complex, like the Borg but not even assimilating; it's like a very very sophisticated "grey goo" nanobot scenario
anthk · a month ago
There are public sources of information such as curated Wikipedia dumps, open content from Kiwix, Gutenberg math books, and OpenStreetMap for maps. Better, you can download offline, curated versions of these, so anyone can have a working snapshot anytime. That's good for avoiding future AI tampering. As long as these are AI-free, we are potentially heading in the right direction.
boredatoms · a month ago
We can only trust a snapshot from pre AI years, eventually everything will be contaminated
brookst · a month ago
s/AI/internet
aspenmayer · a month ago
Curious Yellow, anyone?

https://en.wikipedia.org/wiki/Glasshouse_(novel)

> "Curious Yellow is a design study for a really scary worm: one that uses algorithms developed for peer-to-peer file sharing networks to intelligently distribute countermeasures and resist attempts to decontaminate the infected network".

Hat tip to HN user cstross (as I discovered the idea via Charlie’s blog):

http://www.antipope.org/charlie/blog-archive/October_2002.ht...

These topics were first brought to my attention through his amazing novel Glasshouse. I’ve had the pleasure of having my first edition copy of the book signed by the author, and I then promptly loaned it indefinitely to a friend, who then misplaced it. The man himself is a friendly curmudgeon who I am happy to have met, and I have enjoyed reading about the future through his insights into the past and present.

Also I must acknowledge Brandon Wiley, who wrote the inspiration for Curious Yellow as far as I can tell.

https://blanu.net/curious_yellow.html

tomlockwood · a month ago
I want to suggest that the virus is even more insidious, and is an organism that feeds on VC money, and it is evolving via a substrate of human programmers to become more efficient at consuming it. And like an organism evolving towards survival, it gives no shits about the utility generated in return for the thing it eats.

And, as time goes on, it'll get more efficient at the consumption and waste less and less energy on the generation of utility. It is an organism that needs servers to feed and generates hype like a deep-sea monster glows its lure.