ziddoap · a year ago
>Unlike many, I don’t sit on the fence. I hate it with a passion. I think it’s an affront to humanity and will ultimately cause our downfall. If an article uses AI, even just for header images, I don’t read it.

This seems just as silly a take as that of the fervent supporters of sticking AI everywhere.

AI helps in some areas (e.g. assisting medical diagnoses) and is shitty in others (e.g. spam content).

I wonder if the author would refuse to receive a diagnosis from a doctor if the diagnosis was AI-assisted?

happytoexplain · a year ago
I'm somewhere between the author and a true-neutral stance. Something can be "an affront to humanity [that] will ultimately cause our downfall" and also "help in some areas". Those aren't mutually exclusive, logically, despite sounding that way.
pxc · a year ago
Indeed the latter is kind of a precursor to the former: AI has to be helpful enough for key use cases in order to become used in whatever critical ways cause it to serve as humanity's downfall.

Setting aside the apocalyptic prediction in particular, this sort of thing is already the case with many things that have become irritating or pernicious at scale, like automated phone systems and SEO. The latter of those examples already includes AI more and more. If AI weren't a helpful device for reducing writing costs, it wouldn't be playing a role in filling the web with worthless SEO-spam garbage.

JohnFen · a year ago
This.

The question isn't whether or not there are benefits to be had from the technology. The question is whether or not the cost/benefit ratio is favorable. It's not clear to me what the answer to that actually is.

tempfile · a year ago
It's coherent (just) for something to be an affront to humanity and helpful for some tasks. It's emphatically not coherent for you to consider something an affront to humanity and to condone using it.
thankyoufriend · a year ago
GenAI will just be a way to reduce labor costs and capture/monetize customer data even further. IMO, everyone should be against non-local GenAI in any product and they should be against even local GenAI when its downstream effects clash with human interests.

Does GenAI monetarily disincentivize the self-expression of humanity through art and other creative endeavors, ones that were already difficult to make a living on pre-GenAI? I think it does.

nonrandomstring · a year ago
I thought so too. But then I changed my mind [0], especially after asking some other people what they thought. In a nutshell, it's about association. People see an AI image and start thinking... hey, maybe the prose and video had a little 'help' too. I still haven't got around to replacing all the generative thumbnails. Once AI stuff gets into your content it's like pollution, and a royal PITA to sieve out.

[0] https://cybershow.uk/blog/posts/nomoreai

JohnFen · a year ago
> I wonder if the author would refuse to receive a diagnosis from a doctor if the diagnosis was AI-assisted?

I can't speak for the author, obviously, but personally my answer would be "it depends". If the diagnosis came from a doctor who happened to use AI as one of their tools, I'm OK with that (as long as it was a locally-hosted AI, but that's a different issue). If the diagnosis came from AI without a substantial amount of analysis from a doctor, then I'd absolutely reject that.

ziddoap · a year ago
That sounds like a completely reasonable approach!

What the author has written, in my opinion, is unreasonable because it is absolute in its hatred for AI.

jaredcwhite · a year ago
I'm 100% in agreement with the author, and to answer your question: if I found out my doctor had based their diagnosis on output from an AI, I'd find another doctor.
andrewinardeer · a year ago
How do you reconcile that with the fact that you interact with AI when you may not even realise it? It could be a recommendation feed, a newsreader delivering a story on the 6 o'clock news, or software that was built by it.
ziddoap · a year ago
I find that wild, but to each their own!

rsynnott · a year ago
> AI helps in some areas (e.g. assisting medical diagnoses)

Increasingly (and I think certainly in the above case) AI is used as shorthand for genAI (which is unsurprising, as up until recently most AI-ish things got called ML anyway). I certainly hope no-one's using LLMs for medical diagnoses...

> I wonder if the author would refuse to receive a diagnosis from a doctor if the diagnosis was AI-assisted?

It really depends on what you mean by 'AI-assisted', IMO. If you mean that the doctor had asked a chatbot, I'd very much be looking for a second opinion. What sort of AI assistance did you have in mind?

ziddoap · a year ago
>What sort of AI assistance did you have in mind?

The designed-for-medicine kind, not the ChatGPT kind.

yugffred · a year ago
Someone has to stop giving these sloppyjoes the clicks; only then will they stop making slop.

They are uninterested in the content; they just want the clicks.

If a doctor just slops out a diagnosis that results in malpractice via “hallucination”, then people will stop going to them.

Will you continue to use that dangerous doctor just for the sake of your misguided principle?

ziddoap · a year ago
>Will you continue to use that dangerous doctor just for the sake of your misguided principle?

What a weird interpretation of what I said.

Obviously I mean a competent doctor that uses AI-assistance in a responsible manner, not a dangerous doctor that commits malpractice.

vouaobrasil · a year ago
Not sure if you mean me, but I wasn't interested in clicks when writing this article. In fact, I was getting thousands of clicks and comments on Medium, and now I get almost none because I deleted all my articles there. I wasn't making money either. So, it's not about clicks. I'm just genuinely concerned about the future of life on this planet.
nunez · a year ago
I think it's an incredibly valid take.

People are already using AI to write emails, essays, publications, and more. SWEs are using it to literally do their job.

This stuff brings us even closer to the last act of the movie "Up." It's insane to me that people _don't_ see this.

vouaobrasil · a year ago
> I wonder if the author would refuse to receive a diagnosis from a doctor if the diagnosis was AI-assisted?

If I had a choice, I would not allow the doctor to give me an AI assisted diagnosis in the first place.

packetlost · a year ago
> AI helps in some areas (e.g. assisting medical diagnoses)

Uhhh, I don't know about that one. Have we seen studies that show real predictive capabilities? Anecdotal evidence is not helpful, and it seems rather risky to depend on something that has not been thoroughly vetted when it comes to people's lives.

ziddoap · a year ago
>In 2020, Zhang’s team developed an AI imaging-assisted diagnosis system for COVID-19 pneumonia and published in Cell. Based on the 500,000 copies of CT images that the team studied, the system was able to distinguish COVID-19 from other viral pneumonias within 20 seconds, with an accuracy rate of more than 90%.

https://www.nature.com/articles/d42473-022-00035-y?error=coo...

>AI improves the lives of patients, physicians, and hospital managers by doing activities usually performed by people but in a fraction of the time and the expense. [...] Not only that, AI assists physicians in detecting diseases by utilizing complicated algorithms, hundreds of biomarkers, imaging findings from millions of patients, aggregated published clinical studies, and thousands of physicians’ notes to improve the accuracy of diagnosis.

https://www.ncbi.nlm.nih.gov/pmc/articles/PMC8754556/

JTyQZSnP3cQGa8B · a year ago
In diagnosis there is no prediction. The AI can find a pattern somewhere and that’s it. It’s actually very efficient in some fields (like recognizing stuff in a scanned body) but doctors always have the final say. It’s like a hint, and it should stay like that.

McBainiel · a year ago
I've been thinking a lot lately about why I don't like AI and ultimately I think it's because of its tone. I don't know why OpenAI made ChatGPT so wordy and almost unctuous.

I've realised I don't actually care about people using it for programming or brainstorming or whatever. It's just that I feel so insulted when I read something written in the default AI voice.

So I don't know that I agree entirely with the writer of the piece but I get where he's coming from. AI writing is unpleasant to read. And I hope Medium reverses their decision.

poszlem · a year ago
The reason people "dislike AI because of its tone" is that they only recognize AI-written text when it's poorly composed. It's likely that you're already interacting with AI-generated text without realising it.
happytoexplain · a year ago
I disagree. To me, the "tone" of the vast majority of AI output has a clear identity, and it is not related to poor composition. It is in fact "high quality", technically speaking, which is a separate problem from bad/strange genAI writing. Some real people do write in a similar tone, which makes it hard to tell apart sometimes; that's the problem you describe, but it doesn't obscure the fact that there is an "AI tone".
jprete · a year ago
The possibility of well-written AI prose is a big reason why I dislike AI text generation. Writing for an audience refines the ideas being communicated. If the author doesn't do that refinement themselves, then I'm not reading what the author is thinking, I'm reading what the LLM could patch together from their ideas. If I want the LLM's opinion on how to make a concept work, I can ask it directly. If I'm reading something by an author, I want to know what the author is actually thinking!
zmgsabst · a year ago
I enjoy its tone when writing long form content, e.g. this start of a book project. Certainly more than many long form blog articles, e.g. those on Medium.

Tastes vary.

https://zmichaelgehlke.com/misc/in-harmony.html

S0y · a year ago
Semi-related, but googling a problem just to stumble on a Medium article that is 100% written by AI as the top result is the bane of my existence.

So really, I can get behind the author.

stonethrowaway · a year ago
Assuming it answers the problem, what’s the issue?
bil7 · a year ago
Usually I expect an instructional article/blog post/etc. to have actually been tried and tested. If it has the format of "I had problem x, ChatGPT suggested solution y, it actually worked, here it is and a bit about why it works", then of course that's fine and good. But you'll only have to google a few times before you see a 100% synthetic, SEO-optimised, useless article that does nothing but waste your time.
Vegenoid · a year ago
This assumption would often be false, which is the problem. I already have access to LLMs (that I leverage often), I’m using a search engine because I’m looking for high-quality, detailed info or real-world examples from someone who knows what they are talking about.

This blind cheerleading of LLM-generated content filling the web is what pushes people to hate them.

sofixa · a year ago
It wastes your time because it's full of cruft. And you cannot assume to trust it will solve your problem, so you're risking losing even more time.
tempfile · a year ago
The answer is probably wrong?
ayaen · a year ago
Some people don't even bother cleaning up the boilerplate:

"Sure, here are 11 examples of recursive islands..." Yeah, that's how some articles begin.

andersco · a year ago
This seems like too much of an absolutist stance regarding AI. AI is a tool, and as such it has both good and bad uses. For example, maybe I have something I feel is important to share with the world, but I am not a very good writer. If AI can help me express my own ideas more clearly and clean up my grammar, that to me is a great use of AI. By contrast, letting AI just churn out articles wholesale would, to me, be an abuse rather than a good use. Correct me if I’m wrong, but I think that is also consistent with Medium’s policy.
Eddy_Viscosity2 · a year ago
Medium's policy does allow accounts where AI churns out articles wholesale; they just can't be monetized. And the author does make the point that he would rather read imperfect human writing than writing that is AI-assisted or otherwise AI-generated: 'the flaws make the personality'.
ayaen · a year ago
I think it's a personal stance, so the degree of absoluteness doesn't matter. It's what he prefers. We can tell him it's too harsh when he feels the pinch of cutting off all writers using LLMs. As far as using LLMs to clean up one's language is concerned, there's a difference between editing the content it generates and learning grammar patterns and word usages from it and applying them while writing on your own. When the latter is done, no one can tell.
coding123 · a year ago
Some things that may seem simple may actually be the cause of some people's demise.
vouaobrasil · a year ago
I don't really think AI actually helps people express their ideas. I think it makes them less human and so it's not even them expressing ideas any more.
mulhoon · a year ago
The quality of Medium articles (and comments) has really gone down over the past few years: lots of attention-grabbing headlines (“5 ways to X”, “Stop doing X”) and less well-written content overall. I’m not sure if most writers hopped over to Substack, but it feels like a cheaper place than it used to be.
Vegenoid · a year ago
There was a brief period where seeing a Medium article in a search result made me excited. Now, I avoid them, because of too many experiences with shallow, incorrect, or LLM-generated articles.
delichon · a year ago
>> AI assistance empowers an author to level up — to make their ideas clearer, for example, or help them express themselves in a second language

> And also, I vehemently disagree with this statement. Flaws express personality.

So for the same reasons, does he wish to not read content that has been assisted by spelling or grammar checking? Or an editor, proof reader or fact checker? Or a thesaurus or dictionary? Or is he only concerned when AI is applied to those roles?

vouaobrasil · a year ago
Well, I think that a spell-checker is a lot different than AI because AI can change the entire tone of an article whereas a spell and grammar checker generally does not.
helboi4 · a year ago
I sort of love that he takes a strong stance here. Even if you think there are some applications for AI, you should be able to state strongly where it is not useful. Having AI churn out bloated text in the form of terrible blog posts and misleading listicles, making it harder for genuine information to be found online, is not a good use case for AI. If you want ChatGPT to summarise a topic for you, you can literally just ask ChatGPT to do that. There is zero benefit to having a third party pump that into the websites we go to for real human opinions and, hopefully, a few genuinely great, expert articles, neither of which ChatGPT can produce.
causal · a year ago
> I am absolutely against artificial intelligence.

I'm curious where someone like him draws the line. AI is an ambiguous term. He's an author and photographer, so perhaps AI has just come to mean LLMs and image generators?

I might even agree with the thrust of his concerns, but this kind of diatribe always comes off a bit rage-blind, and maybe brings the strength of the discussion down a little.

rsynnott · a year ago
The industry only has itself to blame for this because, for a decade or so, anything 'AI-ish' was almost always branded as ML (presumably due to the previous AI winter, when the term 'AI' became poisonous to VCs). If people equate AI with generative AI, it is only because, well, _so does the industry_.
AStonesThrow · a year ago
The neural nets of the industry have been retrained: https://xkcd.com/2173/
vouaobrasil · a year ago
I avoid AI whenever I can, including machine-learning-assisted noise reduction. But my argument is not against specific technologies; it is against the conglomerate of technologies that represents an ideal of pushing efficiency beyond what I consider useful.

Contrary to your accusation of being rage-blind, I have studied this topic in detail, thought about it carefully, and read many articles in philosophy about it. I am still trying to articulate my ideas, but my position goes well beyond an emotional reaction.