Posted by u/embedding-shape 5 days ago
Ask HN: Should "I asked $AI, and it said" replies be forbidden in HN guidelines?
As various LLMs become more and more popular, so do comments along the lines of "I asked Gemini, and Gemini said ...".

While the guidelines were written (and iterated on) during a different time, it seems like it might be time to have a discussion about whether those sorts of comments should be welcome on HN or not.

Some examples:

- https://news.ycombinator.com/item?id=46164360

- https://news.ycombinator.com/item?id=46200460

- https://news.ycombinator.com/item?id=46080064

Personally, I'm on HN for the human conversation, and large LLM-generated texts just get in the way of reading real text from real humans (assumed, at least).

What do you think? Should responses that basically boil down to "I asked $LLM about $X, and here is what $LLM said:" be allowed on HN, with the guidelines updated to state that people shouldn't critique them (similar to other current guidelines)? Or should a new guideline be added asking people to refrain from copy-pasting large LLM responses into the comments, or something else completely?

tptacek · 5 days ago
They already are against the rules here.

https://hn.algolia.com/?dateRange=all&page=0&prefix=true&que...

(This is a broader restriction than the one you're looking for).

It's important to understand that not all of the rules of HN are on the Guidelines page. We're a common law system; think of the Guidelines as something akin to a constitution. Dan and Tom's moderation comments form the "judicial precedent" of the site; you'll find things in there like "no Internet psychiatric diagnosis" and "not owing $publicfigure anything but owing this community more" and "no nationalist flamewar" and "no hijacking other people's Show HN threads to promote your own thing". None of those are on the Guidelines page either, but they're definitely in the guidelines here.

embedding-shape · 5 days ago
Thanks for all the references!

One comment stands out to me:

> Whether to add it to the formal guidelines (https://news.ycombinator.com/newsguidelines.html) is a different question, of course. I'm reluctant to do that, partly because it arguably follows from what's there, partly because this is still a pretty fuzzy area that is rapidly evolving, and partly because the community is already handling this issue pretty well.

I guess I'm raising this question because it feels slightly off that people can't really know about this unwritten rule until they break it, or see someone else break it and get told why. It's true that the community seems to handle it with downvotes, but it might not be clear enough why something gets downvoted; people can't see the intent. It also seems like an inefficient way of communicating community norms: telling users about them only once they've broken them.

Being upfront with what rules and norms to follow, like the guidelines already do for most things, feels more honest and welcoming for others to join in on discussions.

tptacek · 5 days ago
The rules are written; they're just not all in that one document. The balance HN strikes here is something Dan has worked out over a very long time. There are at least two problems with arbitrarily fleshing out the guidelines ("promoting" "case law" to "statutes", as it were):

* First, the guidelines get too large, and then nobody reads them all, which makes the guideline document less useful. Better to keep the guidelines page reduced to a core set of things, especially if those things can be extrapolated to most of the rest of the rules you care about (or most of them, plus a bunch of stuff that doesn't come up often enough to need space on that page).

* Second, whatever you write in the guidelines, people will be inclined to lawyer over and bicker about. Writing a guideline implies, at least for some people, that every word is carefully considered and that there's something final about the specific word choices in the guidelines. "Technically correct is the best kind of correct" for a lot of nerds like us.

Perhaps "generated comments" is trending towards a point where it earns a spot in the official guidelines. It sure comes up a lot. The flip side though is that we leave a lot of "enforcement" of the guidelines up to the community, and we have a pretty big problem with commenters randomly accusing people of LLM-authoring things, even when they're clearly (because spelling errors and whatnot) human-authored.

Anyways: like I said, this is a pretty well-settled process on HN. I used to spend a lot of time pushing Dan to add things to the guidelines; ultimately, I think the approach they've landed on is better than the one you (and, once, I) favored.

Rendello · 5 days ago
This is the correct answer. If you're curious about what other sorts of things are disallowed by common law, look at dang and tomhow's comments that say "please don't":

dang: https://hn.algolia.com/?dateRange=all&page=0&prefix=false&qu...

tomhow: https://hn.algolia.com/?dateRange=all&page=0&prefix=false&qu...

Akronymus · 5 days ago
> This is the correct answer.

Where does that saying come from? I keep seeing it in a lot of different contexts but it somehow feels off to me in a way I can't really explain.

versavolt · 5 days ago
That answer is incorrect. Common law can only be created by courts.
josefresco · 4 days ago
This comment from Dang 5 months ago is a little more nuanced and allows for some usage: https://news.ycombinator.com/item?id=44704054
tptacek · 4 days ago
The key point seems to be "generated comments", not "use of LLMs".
recursive4 · 3 days ago
Like most things in life that make reference to information outside of one's context window, runtime linter warnings would go a long way.
ghtbircshotbe · 4 days ago
Maybe some of these things should be added explicitly to the rules so people will flag violations.

> no hijacking other people's Show HN threads to promote your own thing

This seems to happen a lot, so apparently not a very well enforced rule.

TomasBM · 5 days ago
Yes.

The pre-LLM equivalent would be: "I googled this, and here's what the first result says," and copying the text without providing any additional commentary.

Everyone should be free to read, interpret and formulate their comments however they'd like.

But if a person outsources their entire thinking to an LLM/AI, they don't have anything to contribute to the conversation themselves.

And if the HN community wanted pure LLM/AI comments, they'd introduce such bots in the threads.

christoff12 · 5 days ago
Good point
masfuerte · 5 days ago
Does it need a rule? These comments already get heavily down-voted. People who can't take a hint aren't going to read the rules.
eskori · 5 days ago
If HN mods think the rule should be applied whatever the community thinks (for now), then yes, it needs a rule.

As I see it, down-voting is an expression of the community posture, rules are an expression of the "space" posture. It's up to the space to determine if there is something relevant enough to include it in the rules.

And again, as I see it, the community should also have a way to at least suggest modifications to the rules.

I agree with you that "People who can't take a hint aren't going to read the rules". But as they say: "Ignorance of the law does not exempt one from compliance."

tptacek · 5 days ago
Again: there already is a rule against this.
rsync · 5 days ago
This is my view.

I tend to dislike these types of posts, but a properly designed and functioning vote mechanism should take care of it.

If not, it is the voting mechanism that should be tuned - not new rules.

dormento · 5 days ago
> These comments already get heavily down-voted.

Can't find the link right now (cause why would I save a thread like that..) but I've seen, more than once, situations where people get defensive of others who post AI slop comments. Both times it was people at YC companies with a personal interest in AI. Both times it looked like a person defending sockpuppets.

al_borland · 5 days ago
I think it helps having guidelines and not relying on user sentiment alone. When I first joined HN I read the guidelines and it did make me alter my comments a bit. Hoping everyone who joins goes back to review the up/down votes on their comments, and then takes away the right lesson with limited information as to why those votes were received, seems like wishful thinking. For those who do question why they keep getting downvoted, it might lead them to check the guidelines, and finding the right supporting information there would be useful.

A lot of the guidelines are about avoiding comments that aren’t interesting. A copy/paste from an LLM isn’t interesting.

BrtByte · 5 days ago
HN tends to self-regulate pretty well
notahacker · 5 days ago
I'm veering towards this being the answer. People downvote the superfluous "I don't have any particular thoughts on this, but here's what a chatbot has to say" comments all the time. But also, there are a lot of discussions around AI on HN, and in some of those cases posting verbatim responses from current-generation chatbots is a pretty good way of showing that they can give accurate responses when posed problems of this type, or that they still make these mistakes, or that this is what happens when there's too much RLHF or a silly prompt...
flkiwi · 5 days ago
I read comments citing AI as essentially equivalent to "I ran a $searchengine search and here is the most relevant result." It's not equivalent, but it has one identical issue and one new-ish one:

1. If I wanted to run a web search, I would have done so.

2. People behave as if they believe AI results are authoritative, which they are not.

On the other hand, a ban could result in a technical violation in a conversation about AI responses where providing examples of those responses is entirely appropriate.

I feel like we're having a larger conversation here, one where we are watching etiquette evolve in realtime. This is analogous to "Should we ban people from wearing bluetooth headsets in the coffee shop?" in the 00s: people are demonstrating a new behavior that is disrupting social norms but the actual violation is really that the person looks like a dork. To that end, I'd probably be more for public shaming, potentially a clear "we aren't banning it but please don't be an AI goober and don't just regurgitate AI output", more than I would support a ban.

pyrale · 5 days ago
> "I ran a $searchengine search and here is the most relevant result."

Except it's "...and here is the first result it gave me, I didn't bother looking further".

giancarlostoro · 5 days ago
> 2. People behave as if they believe AI results are authoritative, which they are not

Web search has the same issue. If you don't validate it, you wind up with the same problem.

9rx · 5 days ago
> people are demonstrating a new behavior that is disrupting social norms

The social norm has always been that you write comments on the internet for yourself, not others. Nothing really changes if you now find enjoyment in adding AI output to your work. Whatever floats your boat, as they say.

sapphicsnail · 5 days ago
The issue isn't people posting AI-generated comments on the Internet as a whole, it's whether they should be allowed in this space. Part of the reason I come to HN is that the quality of comments is pretty good relative to other places online. I think it's a legitimate question whether AI comments would help or hinder discussion here.
terribleperson · 4 days ago
Has it? More than one forum has expected that commentary should contribute to the discussion. Reddit is the most prominent example, where originally upvotes were intended to be used for comments that contributed to the discussion. It's not the first or only example, however.

Sure, the motivation for many people to write comments is to satisfy themselves. The contents of those comments should not be purely self-satisfying, though.

WorldPeas · 5 days ago
I think it's closer to the "glasshole" trend, where social pressure actually worked to make people feel less comfortable about using it publicly. This is an entirely vibes-based judgement, but presenting unaltered AI speech within your own feels more imposing and authoritative (as wagging around a potentially-on camera did then). This being the norm on other platforms has degraded my willingness to engage with potentially infinite and meaningless streams of bloviation rather than the (usually) concise and engaging writings of humans.
icoder · 5 days ago
Totally agree if the AI or search results are a (relatively) direct answer to the question.

But what if the AI is used to build up a(n otherwise) genuine human response, like: 'Perhaps the reason behind this is such-and-such, (a quick google)|($AI) suggests that indeed it is common for blah to be blah, so...'

0manrho · 4 days ago
> ($AI) suggests

Same logic still applies. If I gave a shit what it "thought" or suggests, I'd prompt the $AI in question, not HN users.

That said, I'm not against a monthly (or whatever regular periodic interval that the community agrees on) thread that discusses the subject, akin to "megathreads" on reddit. Like interesting prompts, or interesting results or cataloguing changes over time etc etc.

It's one of those things that can be useful to discuss in aggregate, but separated out into individual posts just feels like low effort spam to farm upvotes/karma on the back of the flavor of the month. Much in the same way that there's definitely value in the "Who's Hiring/Trying to get hired" monthly threads, but that value/interest drops precipitously if each comment/thread within them were each their own individual submission.

charcircuit · 5 days ago
>If I wanted to run a web search, I would have done so

While true, many times people don't want to do this because they are lazy. If they had just opened up ChatGPT instead, they could have gotten their answer instantly. It results in a waste of everyone's time.

MarkusQ · 5 days ago
This begs the question. You are assuming they wanted an LLM-generated response, but were too lazy to generate one. Isn't it more likely that the reason they didn't use an LLM is that they didn't want an LLM response, so giving them one is...sort of clueless?

If you asked someone how to make French fries and they replied with a map-pin-drop on the nearest McDonald's, would you feel satisfied with the answer?

allenu · 5 days ago
I think a lot of times, people are here just to have a conversation. I wouldn't go so far as to say someone who is pontificating and could have done a web search to verify their thoughts and opinions is being lazy.

This might be a case of just different standards for communication here. One person might want the absolute facts and assumes everyone posting should do their due diligence to verify everything they say, but others are okay with just shooting the shit (to varying degrees).

officeplant · 5 days ago
> If they just instead opened up chatgpt they could have instantly gotten their answer.

Great, now we've wasted time & material resources for a possibly wrong and hallucinated answer. What part of this is beneficial to anyone?

droopyEyelids · 5 days ago
Well put. There are two sides of the coin: the lazy questioner who expects others to do the work researching what they would not, and the lazy/indulgent answerer who basically LMGTFY's it.

Ideally we would require people who ask questions to say what they've researched so far, and where they got stuck. Then low-effort LLM or search engine result pages wouldn't be such a reasonable answer.

munchbunny · 5 days ago
> 2. People behave as if they believe AI results are authoritative, which they are not

I'm not so sure they actually believe the results are authoritative, I think they're being lazy and hoping you will believe it.

flkiwi · 5 days ago
This is a bit of a gravity vs. acceleration issue, in that the end result is indistinguishable.
Terr_ · 5 days ago
Agreed on the similar-but-worse comparison to the laziest possible web searches of yesteryear.

To introspect a bit, I think the rote regurgitation aspect is the lesser component. It's just rude in a conventional way that isn't as threatening. It's the implied truth/authority of the Great Oracular Machine which feels more dangerous and disgusting.

flkiwi · 5 days ago
There’s also a whole “gosh golly look at me using the latest fad!” demonstration aspect to this. People status signaling that they’re “in”. Thus the Bluetooth earpiece comment.

It’s clumsy and has the opposite result most of the time, but people still do it for all manner of trends.

ozgung · 5 days ago
I think doing your research using search engine/AI/books and paraphrasing your findings is always valuable. And you should cite your resources when you do so, eg. “ChatGPT says that…”

> 1. If I wanted to run a web search, I would have done so

Not everyone has access to the latest Pro models. If AI has something to add to the discussion and a user does that for me, I think it has some value.

> 2. People behave as if they believe AI results are authoritative, which they are not

AI is not authoritative in 2025. We don’t know what will happen in 2026. We are at the initial transition stage for a new technology. Both the capabilities of AI and people’s opinions will change rapidly.

Any strict rule/ban would be very premature and shortsighted at this point.

stack_framer · 5 days ago
I'm here to learn what other people think, so I'm in favor of not seeing AI comments here.

That said, I've also grown exceedingly tired of everyone saying, "I see an em dash, therefore that comment must have come from AI!"

I happen to like em dashes. They're easy to type on macOS, and they're useful in helping me express what I'm thinking—even if I might be using them incorrectly.

anotherevan · 5 days ago
I have actually been using em dashes more mainly because of everyone whinging about them.
lillecarl · 4 days ago
I asked an LLM and he said that antisocial behavior is the coolest thing in 2025
tpxl · 5 days ago
I think they should be banned if there isn't a contribution besides what the LLM answered. It's akin to 'I googled this', which is uninteresting.
mattkrause · 5 days ago
I do find it useful in discussions of LLMs themselves. (Gemini did this; Claude did it too but it used to get tripped up like that).

I do wish people wouldn’t do it when it doesn’t add to the conversation but I would advocate for collective embarrassment over a ham-fisted regex.

MBCook · 5 days ago
That provides value as you’re comparing (and hopefully analyzing) output. It’s totally on topic.

In a discussion of RISC-V and whether it can beat ARM, someone just posting “ChatGPT says X” adds absolutely nothing to the discussion but noise.

autoexec · 5 days ago
It's always fun when people point out an LLM's insane responses to simple questions that shatter the illusion of them having any intelligence. But besides giving us a good laugh when AI has a meltdown failing to produce a seahorse emoji, there are other times it might be valuable to discuss how they respond, such as when those responses might be dangerous, censored, or clearly filled with advertising/bias.
dormento · 5 days ago
IMHO it's far worse than "I googled this". Googling at least requires a modicum of understanding. Pasting slop usually means that the person couldn't be bothered to filter out garbage, but wants to look smart anyway.
tptacek · 5 days ago
They are already banned.
venturecruelty · 5 days ago
Weird that I keep seeing them then.
Ekaros · 5 days ago
I think "I googled this" can be valid and helpful contribution. For example looking up some statistic or fact or an year. If that is also verified and sanity checked.
sejje · 5 days ago
Yes, while citing an LLM in the same way is probably not as useful.

"I googled this" is only helpful when the statistic or fact they looked up was correct and well-sourced. When it's a reddit comment, you derail into a new argument about strength of sources.

The LLM skips a step, and gets you right to the "unusable source" argument.

TulliusCicero · 5 days ago
"I googled this" usually means actually going into a page and seeing what it says, not just copy-pasting the search results page itself, which is the equivalent here.
skywhopper · 5 days ago
In that case, the correct post here would be to say “here’s the stat” and cite the actual source (not “I googled it”), and then add some additional commentary.
zby · 5 days ago
The contribution is the prompt.
josefresco · 5 days ago
As a community I think we should encourage "disclaimers" aka "I asked <AIVENDOR>, and it said...." The information may still be valuable.

We can't stop AI comments, but we can encourage good behavior/disclosure. I also think brevity should still be rewarded, AI or not.

superfishy · 5 days ago
I agree. The alternative is prohibiting this practice and having these posters not disclose their use of LLMs, which in many cases cannot really be easily detected.
TulliusCicero · 5 days ago
No, most don't think they're doing anything wrong, they think they're actually being helpful. So, most wouldn't try to disguise it, they'd just stop doing it, if it was against the rules.
sinuhe69 · 4 days ago
I still want to read what the poster understood from the output of the AI, though. I don't need an answer recited from an AI, because I (and everybody else) can get one too. On Firefox and other browsers it's now integrated, so asking an AI is no more than one click away. Actually, not even away: Grok can answer right in context on X. So a mere answer from an AI has no value today whatsoever.
gortok · 5 days ago
While we will never be able to get folks to stop using AI to “help” them shape their replies, it's super annoying to have folks think that by using AI they're doing others a favor. If I wanted to know what an AI thinks, I'd ask it. I'm here because I want to know what other people think.

At this point, I make value judgments when folks use AI for their writing, and will continue to do so.

sbrother · 5 days ago
I strongly agree with this sentiment and I feel the same way.

The one exception for me though is when non-native English speakers want to participate in an English language discussion. LLMs produce by far the most natural sounding translations nowadays, but they imbue that "AI style" onto their output. I'm not sure what the solution here is because it's great for non-native speakers to be able to participate, but I find myself discarding any POV that was obviously expressed with AI.

SAI_Peregrinus · 5 days ago
If I want to participate in a conversation in a language I don't understand I use machine translation. I include a disclaimer that I've used machine translation & hope that gets translated. I also include the input to the machine translator, so that if someone who understands both languages happens to read it they might notice any problems.
kps · 5 days ago
When I occasionally use MTL into a language I'm not fluent in, I say so. This makes the reader aware that there may be errors unknown to me that make the writing diverge from my intent.
guizadillas · 5 days ago
Non-native English speaker here:

Just use a spell checker and that's it, you don't need LLMs to translate for you if your target is learning the language

emaro · 5 days ago
Agreed, but if someone uses LLMs to help them write in English, that's very different from the "I asked $AI, and it said" pattern.
parliament32 · 5 days ago
> I'm not sure what the solution here is

The solution is to use a translator rather than a hallucinatory text generator. Google Translate is exceptionally good at maintaining naturalness when you put a multi-sentence/multi-paragraph block through it -- if you're fluent in another language, try it out!

AnimalMuppet · 5 days ago
Maybe they should say "AI used for translation only". And maybe us English speakers who don't care what AI "thinks" should still be tolerant of it for translations.
estebarb · 5 days ago
I have found that prompting "translate my text to English, do not change anything else" works fine.

However, now I prefer to write directly in English and consider whatever grammar/orthographic errors I have as part of my writing style. I hate having to rewrite the LLM output to add myself back into the text.

justin66 · 5 days ago
As AIs get good enough, dealing with someone struggling with English will begin to feel like a breath of fresh air.
carsoon · 5 days ago
I think even when this is used they should include "(translated by LLM)" for transparency. When you use an intermediate layer there is always bias.

I've written blog articles using HTML and asked LLMs to change certain HTML structure, and they ALSO tried to change the wording.

If a user doesn't speak a language well, they won't know whether their meanings were altered.

tensegrist · 5 days ago
one solution that appeals to me (and which i have myself used in online spaces where i don't speak the language) is to write in a language you can speak and let people translate it themselves however they wish

i don't think it is likely to catch on, though, outside of culturally multilingual environments

jampa · 5 days ago
I wrote about this recently. You need to prompt better if you don't want AI to flatten your original tone into corporate speak:

https://jampauchoa.substack.com/p/writing-with-ai-without-th...

TL;DR: Ask for a line edit, "Line edit this Slack message / HN comment." It goes beyond fixing grammar (because it improves flow) without killing your meaning or adding AI-isms.

hotsauceror · 5 days ago
I agree with this sentiment.

When I hear "ChatGPT says..." on some topic at work, I interpret that as "Let me google that for you, only I neither care nor respect you enough to bother confirming that that answer is correct."

ndsipa_pomu · 5 days ago
To my mind, it's like someone saying "I asked Fred down at the pub and he said...". It's someone stupidly repeating something that's likely stupid anyway.
giancarlostoro · 5 days ago
You can have the same problem with Googling things. LLMs usually form conclusions I align with when I do the independent research. Google isn't anywhere near as good as it was 5 years ago. All the years of crippling their search ranking system and suppressing results have caught up to them, to the point that most LLMs are Google replacements.
JeremyNT · 5 days ago
In a work context, for me at least, this class of reply can actually be pretty useful. It indicates somebody already minimally investigated a thing and may have at least some information about it, but they're hedging on certainty by letting me know "the robots say."

It's a huge asterisk to avoid stating something as a fact, but indicates something that could/should be explored further.

(This would be nonsense if they sent me an email or wrote an issue up this way or something, but in an ad-hoc conversation it makes sense to me)

I think this is different than on HN or other message boards; it's not really used by people to hedge here. If they don't actually personally believe something to be the case (or have a question to ask), why are they posting anyway? No value there.

mikkupikku · 5 days ago
These days, most people who try googling for answers end up reading an article which was generated by AI anyway. At least if you go right to the bot, you know what you're getting.
MetaWhirledPeas · 5 days ago
> When I hear "ChatGPT says..." on some topic at work, I interpret that as "Let me google that for you, only I neither care nor respect you enough to bother confirming that that answer is correct."

I have a less cynical take. These are casual replies, and being forthright about AI usage should be encouraged in such circumstances. It's a cue for you to take it with a grain of salt. By discouraging this you are encouraging the opposite: for people to mask their AI usage and pretend they are experts or did extensive research on their own.

If you wish to dismiss replies that admit AI usage you are free to do so. But you lose that freedom when people start to hide the origins of their information out of peer pressure or shame.

KaiserPro · 5 days ago
"lets ask the dipshit" is how my colleague phrases it
gardenhedge · 5 days ago
I disagree. It's not a potential avenue for further investigation; IMO, AI should always be consulted.
SunshineTheCat · 5 days ago
I am just sad that I can no longer use em dashes without people immediately assuming what I wrote was AI. :(
MarkusQ · 5 days ago
Go ahead, use em—let the haters stew in their own typographically-impoverished purgatory.
dinkleberg · 5 days ago
Some will blindly dismiss anything using them as AI generated, but realistically the em-dash is only one sign among many. Way more obvious is the actual style of the writing. I use Claude all of the time and I can instantly tell if a blog post I’m reading was written with Claude. It is so distinctive. People use some of the patterns it uses some of the time. But it uses all of them all of the time.

Deleted Comment

whimsicalism · 5 days ago
I think there's "well done and usually unnoticeable" and there's "poorly done and insulting". I don't agree that the two are always the same, but I think lots of people might believe they are doing the former while not being aware enough to realize they are doing the latter.
amelius · 5 days ago
"I asked AI and it said basically the same as you."
Balgair · 5 days ago
Aside:

When someone says: "Source?", is that kinda the same thing?

Like, I'm just going to google the thing the person is asking for, same as they can.

Should asking for sources be banned too?

Personally, I think not. HN is better, I feel, when people can challenge the assertions of others and ask for the proof, even though that proof is easy enough to find for all parties.

officeplant · 5 days ago
>Should asking for sources be banned too?

IMO, HN commenters used to at least police themselves more and provide sources in their comments when making claims. It was what used to separate HN and Reddit for me when it came to response quality.

But yes it is rude to just respond "source?" unless they are making some wild batshit claims.

Kim_Bruning · 5 days ago
I actually use LLMs to help me dig up the sources. It's quicker than google and you get them nicely formatted besides.

But: Just because it's easy doesn't mean you're allowed to be lazy. You need to check all the sources, not just the ones that happen to agree with your view. Sometimes the ones that disagree are more interesting! And at least you can have a bit of drama yelling at your screen at how dumb they obviously are. Formulating why they are dumb, now there's the challenge - and the intellectual honesty.

But yeah, using LLMs to help with actually doing the research? Totally a thing.

neltnerb · 5 days ago
I think what's important here is to reduce harm even if it's still a little annoying. Because if you try to completely ban mentioning that something is LLM-written, you'll just have people doing it without a disclaimer...

Yes, comments of this nature are bad, annoying, and should be downvoted as they have minimal original thought, take minimal effort, and are often directly inaccurate. I'd still rather they have a disclaimer to make it easier to identify them!

Further, entire articles submitted to HN are clearly written by an LLM yet get over a hundred upvotes before people notice whether there's a disclaimer or not. These do not get caught quickly, and someone clicking on the link will likely generate ad revenue that incentivizes people to continue doing it.

LLM comments without a disclaimer should be avoided, and submitted articles written by a LLM should be flagged ASAP to avoid abuse since by the time someone clicks the link it's too late.

Semiapies · 5 days ago
It's at least a factor in why I value HN commentary so much less than I used to.
sejje · 5 days ago
This is the only reasonable take.

It's not worth polluting human-only spaces, particularly top tier ones like HN, with generated content--even when it's accurate.

Luckily I've not found a lot of that here. That which I do has usually been downvoted plenty.

Maybe we could have a new flag option that becomes visible to everyone once a comment has enough "AI" votes, so you could skip reading it.

fwip · 5 days ago
I'd love to see that for article submissions, as well.
manmal · 5 days ago
What LLMs generate is an amalgamation of the human content they have been trained on. I get that you want what actual humans think, but that’s also basically a weighted amalgamation. Real, actual insight is incredibly rare, and I doubt you see much of it on HN (sorry guys; I’ll live with the downvotes).
SoftTalker · 5 days ago
Agree and I think it might also be useful to have that be grounds for a shadowban if we start seeing this getting out of control. I'm not interested, even slightly, in what an LLM has to say about a thread on HN. If I see an account posting an obvious LLM copy/paste, I'm not interested in seeing anything from that account either. Maybe a warning on the first offense is fair, but it should not be tolerated or this site will just drown in the slop.
that_guy_iain · 5 days ago
There will be many cases you won't even notice. When people know how to use AI to help with their writing, it's not noticeable.
delfinom · 5 days ago
It's kinda funny how internet culture once had "lmgtfy" links because people weren't searching Google before asking questions.

But now people are vomiting ChatGPT responses instead of linking to ChatGPT.

TheAdamist · 5 days ago
Same acronym still works, just swap gemini in place of google.
subscribed · 5 days ago
No, linking to ChatGPT is not a response. For some sorts of questions it (which model exactly is it?) might be better, for some it might be worse.

Deleted Comment

ferngodfather · 5 days ago
Yeah like if I wanted to know what a particular AI says, I'd have asked it..
crazygringo · 5 days ago
I actually disagree, in certain cases. Just today I saw:

https://news.ycombinator.com/item?id=46204895

when it had only two comments. One of them was the Gemini summary, which had already been massively downvoted. I couldn't make heads or tails of the paper posted, and probably neither could 99% of other HNers. I was extremely happy to see a short AI summary. I was on my phone and it's not easy to paste a PDF into an LLM.

When something highly technical is posted to HN that most people don't have the background to interpret, a summary can be extremely valuable, and almost nobody is posting human-written summaries together with their links.

If I ask someone a question in the comments, yes it seems rude for someone to paste back an LLM answer. But for something dense and technical, an LLM summary of the post can be extremely helpful. Often just as helpful as the https://archive.today... links that are frequently the top comment.

zacmps · 5 days ago
LLM summaries of papers often make overly broad claims [1].

I don't think this is a good example personally.

[1] https://arxiv.org/abs/2504.00025

Rarebox · 5 days ago
That's a pretty good example. The summary is actually useful, yet it still annoys me.

But I'm not usually reading the comments to learn, it's just entertainment (=distraction). And similar to images or videos, I find human-created content more entertaining.

One thing to make such posts more palatable could be if the poster added some contribution of their own. In particular, they could state whether the AI summary is accurate according to their understanding.

BrtByte · 5 days ago
HN is a mix of personal experience, weird edge cases, and even the occasional hot take. That's what makes HN valuable.
deadbabe · 5 days ago
On a similar sentiment, I’m sick and tired of people telling others to go google stuff.

The point of asking on a public forum is to get socially relatable human answers.

subscribed · 5 days ago
Yeah, but you get two extremes.

Most often I see these answers under posts like "what's the longest river on earth", or "is Bogota the capital of Venezuela?"

Like. Seriously. It often takes MORE time to post this sort of lazy question than to actually look it up. Literally paste their question into $search_engine and get 10 of the same answers on the first page.

Actually, sometimes telling a person like this "just Google it" is beneficial in two ways: it helps the poster develop/train their own search skills, and it may gently nudge someone else into trying that approach first too, while at the same time slowing the rise of extremely low-effort/quality posts.

But sure, sometimes you get the other kind. Very rarely.

jedbrooke · 5 days ago
I’ve seen so many SO and other forum posts where the first comment is someone smugly saying “just google it, silly”.

Only that, I’m not the one who posted the original question, I DID google (well DDG) it, and the results led me to someone asking the same question as me, but it only had that one useless reply

delecti · 5 days ago
Agreed, with a caveat. If someone is asking for an objective answer which could be easily found with a search, and hasn't indicated why they haven't taken that approach, it really comes across as laziness and offloading their work onto other people. Like, "what are the best restaurants in an area" is a good question for human input; "how do you deserialize a JSON payload" should include some explanation for what they've tried, including searches.

Dead Comment

danielmarkbruce · 5 days ago
And yet people ask for sources all the time. "I don't care what you think, show me what someone else thinks".
delaminator · 5 days ago
While I don't disagree with the general sentiment, a black-and-white ban leaves no room for nuance.

I think it's a very valid question to ask the AI: "which coding language is most suitable for you to use and why" or other similar questions.

stephen_g · 5 days ago
But if I wanted to ask an AI I would put that into ChatGPT, not ask HN. I would only ask that on HN if I wanted other people's opinions!

You could reply with "Hey you could ask [particular LLM] because it had some good points when I asked it" but I don't care to see LLM output regurgitated on HN ever.

zby · 5 days ago
I strongly disagree - when I post something that AI wrote I am doing it because it explains my thoughts better than I can - it digs deeper and finds the support for intuitions that I cannot explain nicely. I quote the AI - because I feel this is fair - if you ban this you would just lose the information that it was generated.
SunshineTheCat · 5 days ago
This is like saying "I use a motorized scooter at walmart, not because I can't walk, but because it 'walks' better than I can."
pc86 · 5 days ago
If an LLM writes better than you do, you need to take a long look in the mirror and figure what you can do to fix that, because it's not a good thing.
officeplant · 5 days ago
> if you ban this you would just lose the information that it was generated.

The argument is that the information it generated is just noise, and not valuable to the conversation thread at all.

i80and · 5 days ago
This is... I'll go with "dystopian". If you're not sure you can properly explain an idea, you should think about it more deeply.
simianparrot · 5 days ago
You have to be joking
dhosek · 5 days ago
Meh. Might as well encourage people to post links to search results then too.