Readit News
Posted by u/Oras 6 days ago
Ask HN: Please restrict new accounts from posting
I don’t know if I’m the only one, but I see lots of clearly AI-generated posts on HN recently, mostly coming from new accounts (green). It’s most noticeable in the Show HN section.

I wish the team would either restrict new accounts from posting or at least offer default filtering so I can only see posts from accounts meeting certain criteria.

I don’t want to see HN become Twitter, which is full of bots and noise. That would be a really sad day.

dang · 6 days ago
We're going to at least restrict Show HNs for a while.

I do think this is relevant though: "HN can't be immune from macro trends" - https://hn.algolia.com/?dateRange=all&page=0&prefix=true&que...

laborcontract · 6 days ago
Please do so. And, forgive me if I speak heresy, but there has to be more proof of work (friction) to create accounts. I was shocked at how easy it is for something like ChatGPT Atlas to create new accounts on the fly.
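A signup gate along these lines could look something like hashcash: the browser burns CPU finding a nonce, and the server verifies it cheaply. This is only an illustrative sketch, not anything HN actually does; the challenge format and difficulty numbers are made up.

```python
import hashlib
import itertools

def solve_pow(challenge: str, difficulty_bits: int = 12) -> int:
    """Find a nonce so that sha256(challenge:nonce) has `difficulty_bits`
    leading zero bits. Expected cost grows as 2**difficulty_bits."""
    target = 1 << (256 - difficulty_bits)
    for nonce in itertools.count():
        digest = hashlib.sha256(f"{challenge}:{nonce}".encode()).digest()
        if int.from_bytes(digest, "big") < target:
            return nonce

def verify_pow(challenge: str, nonce: int, difficulty_bits: int = 12) -> bool:
    """Cheap server-side check: a single hash, no search."""
    digest = hashlib.sha256(f"{challenge}:{nonce}".encode()).digest()
    return int.from_bytes(digest, "big") < (1 << (256 - difficulty_bits))
```

Tuning the difficulty is the hard part: it has to stay cheap enough for a phone browser but expensive enough to matter to a bot farm, and that window may not exist.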
magicalhippo · 6 days ago
The problem is that we might lose some gold.

More than once I've seen the author of a story, or another significant party, chime in through a fresh green account after being alerted one way or another that the story was posted here. And when they do, it's usually very interesting.

As such, I would find it detrimental if they had to jump through so many hoops that they don't bother, or it takes so long that the thread dies before they can participate.

beautron · 6 days ago
Perhaps more proof of work is necessary, but it makes me sad.

I still remember creating my HN account. It stands out in my memory, because it was the smoothest, simplest, easiest, and quickest account creation of my life.

I had lurked here for around a decade before finally creating an account. Any urge to participate was thwarted by my resistance toward creating accounts (I just hate account creation for some reason). But HN's account creation process was a breath of fresh air. "You mean it can be this easy? Why isn't it this easy everywhere? If I had known how simple it was, I would have created an HN account years earlier, lol."

It was especially stunning to me, because I think the discourse on HN is generally of a higher quality than most other sites (which I wouldn't naturally associate with such an easy account creation process).

It's my only fond memory of account creation (along with maybe when I created an account on America-Online back in the 90s, since that was my first ever account and it was all so novel). Just a few quick seconds, and then I'm already commenting on HN. It was beautiful. I remain delighted.

brudgers · 6 days ago
My intuition is that increasing the difficulty of account creation favors motivated actors and disincentivizes organic participation, because:

1. ideological and/or economically motivated actors will just see it as a cost of doing business.

2. Ordinary sign-up friction is more likely to make HN appear ordinary to anyone who stumbles upon it.

3. Sign-up friction is a moat. The strength of HN is moderation of what gets in.

HendrikHensen · 6 days ago
I rotate accounts on "social media" (mostly Reddit and Hacker News; the others don't interest me) every few weeks or months to make sure not too much of my post history accumulates in one account. I would dislike it very much if there were high friction to create new accounts. On the other hand, my behavior is probably a major outlier.
rock_artist · 6 days ago
I really don’t like the idea that a newbie gets zero trust. So some proof of work makes more sense than limiting new users.
apt-apt-apt-apt · 6 days ago
I was going to suggest emotional leetcode, but LLMs do well on this.

When given a conversation in which Alice and Suzy engage in one-upmanship (my husband is rich, my kid is a genius) and asked what emotions they are feeling, and what Suzy could have said instead to improve the conversation, it gave accurate responses (e.g. they're feeling insecure, competitive, envious).

TZubiri · 6 days ago
Seems to be a general problem, right?

The standard solution is requiring an email to register an account, maybe a Cloudflare captcha, and then using good network logging to group accounts by IP and chain-banning abusive accounts when they are caught by other mechanisms.
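The grouping-and-chain-ban step might be sketched like this. The data model is hypothetical; a real system would also have to allow for shared IPs behind NAT, VPNs, or Tor, which make naive IP linking prone to false positives.

```python
from collections import defaultdict

def chainban_candidates(login_events, banned):
    """Given (account, ip) login events and a set of already-banned
    accounts, return not-yet-banned accounts that share an IP with
    any banned account."""
    accounts_by_ip = defaultdict(set)
    for account, ip in login_events:
        accounts_by_ip[ip].add(account)
    candidates = set()
    for accounts in accounts_by_ip.values():
        if accounts & banned:  # this IP saw at least one banned account
            candidates |= accounts - banned
    return candidates
```

In practice this would be one weak signal among several (cookies, browser fingerprints, timing patterns), reviewed by a human before any ban.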

ls-a · 6 days ago
Wow! I might be witnessing the end of HN
nottorp · 6 days ago
But is there a connection between the front page being full of "AI" slop and "AI" worship and these new accounts? Or are the old timers also upvoting those submissions, to the detriment of other, more interesting topics?
whh · 6 days ago
I echo this sentiment for all social media platforms today...

At least new accounts are more obvious here. This pattern has been increasingly used for scams, spam and AI slop on Instagram, X and Facebook for years.

Dead Comment

TheChelsUK · 3 days ago
Please don’t forget that some AI-generated posts are helpful for those of us with disabilities who can hope to keep an online presence via a post dictated to an agent, or who need help formulating sentences.

By focusing on or restricting to human-only use, you risk dehumanising those who need technological support.

dang · 3 days ago
What you're describing is legit. I think the solution here is to understand that the rules are never fully specified, and not all-or-nothing. At such a general level of abstraction, they can't be.

More here in case useful:

https://news.ycombinator.com/item?id=47342616

https://news.ycombinator.com/item?id=47342761

https://news.ycombinator.com/item?id=47346798

AlexeyBrin · 6 days ago
Agree, HN can't be immune to what happens in the programming world. It would be great, though, if we had a way to mute or hide accounts. That way each HN user could clean up their own feed of articles.
conductr · 6 days ago
That works for me so long as it's not the main solution. I personally don't want to curate; I'd rather just partake in a sanely moderated forum. That's my understanding of what HN has been, and it's just facing a new challenge with AI spam.

Dead Comment

mursu · 5 days ago
Greetings. Don't mean to come across as disrespectful. May I ask, have you decided on the criteria for new users to unlock restrictions? I apologize if it was already conveyed, but being new, I find myself a bit lost. I have read the guidelines and wanted to post in Show HN but then I got a message that stated that I do not have the clearance to do that yet. I must add I totally understand. I did not know about Hacker News until a few days ago when Gemini gave me the pointer to get my project visible to people here for real quality feedback. Again. I apologize if I am out of place.
dang · 4 days ago
The problem is that there are now so many attempts to get "real quality feedback" that the entire system is in danger of collapsing. Imagine how you'd feel if Gemini pointed the rest of the internet at your inbox! Or a thousand people showed up in your garden, all wanting your attention. This isn't so far from that; HN is not so big a place, and there are only two of us supporting it.

What would be best is for you to poke around the site a bit and get familiar enough with it to decide if you'd like to be a part of the community or not. If so, you're welcome! You aren't the first person to feel a bit lost here as a new user, because the site is rather minimal and cryptic—but your eyes will adjust if you keep reading it over time.

If, on the other hand, you're not interested, that's totally ok, but then please don't try to promote your projects here. HN is a community, and the way to get attention for your things is to first give attention to other people's things.

I don't want to specify X, Y, Z criteria technically because that would just be an invitation to game the system. Worse, Gemini will then tell you "first do X, then do Y and Z, and then you'll get that 'real quality feedback'".

What I want Gemini to tell you (and everyone else!) is "don't use Hacker News primarily for promotion - they have a rule against that. Instead, participate in the community for the intended reason—intellectual curiosity—and after a while, it will become clear how the culture works and how to share your projects there".

pinkmuffinere · 6 days ago
I was thinking of setting up a system to highlight sock-puppeters and other consistent-rule-violating accounts, as a 'fun project' that might improve the HN experience. But it strikes me that the HN staff probably already does something like this, they may not welcome a side-loaded project of this sort, and it would require some automated crawling of HN (which again may be unwelcomed). Finally, I don't actually have experience in this area. Is this something that would be welcomed, or unwanted?

My initial thought is to set up a devoted account like "sock_puppet_detector", and using the infrastructure from https://hackersmacker.org/, add any likely sock-puppets as 'foes'.

vunderba · 6 days ago
It'd be pretty easy to spot too, because most people don’t even bother trying to hide it (either out of laziness and/or ineptitude).

A lot of users don’t seem to realize that anyone can click on the domain in a "Show HN", and Hacker News will show you all the times that domain has been submitted. So you’ll see four or five different low-karma sock-puppet accounts that have all submitted the same site.

rdevilla · 6 days ago
I'm wary about new accounts such as yours wanting to censor and shape discourse by antagonizing people who hold diverse views that differ from your own here.

The HN culture has shifted drastically over the past 5 years.

Oras · 6 days ago
For all accounts or just new ones?
dang · 6 days ago
Just new ones for now.

I don't want to make HN harder for legit new users, but I do think a bit of community participation is reasonable before posting a Show HN, so it isn't just a box on some "how to promote your project" checklist.

kazinator · 6 days ago
A site can't easily be immune to macro trends in authentic discussion, but it can be significantly immune to inauthentic uses.
xupybd · 6 days ago
That's sad; some really neat things have been shared that way. But you gotta do what ya gotta do.
swat535 · 6 days ago
Why not let users choose in settings, like "showdead"?
plagiat0r · 2 days ago
Unpopular opinion: Maybe the way to go is to create a separate Show HNs only for bots and put some instructions for the bots to follow, identify themselves and give them separate category. Similar to moltbook. If we can't stop it, maybe we could contain it in a dedicated space.

I'm not a fan of moltbots / openclaws (and any clones that popped up in the last month). I don't use them and try to discourage their use. That being said, millions of them are running anyway...

dang · 2 days ago
I doubt that people would respect it, so we'd still have the problem of distinguishing wheat from chaff in the 'human' section, plus also have a 'bot' bucket to maintain.
jakejmnz · 4 days ago
Yep, just tried to post and I'm not able. Unfortunate. :/
karmakaze · 6 days ago
Here's an idea: allow downvotes for green posts with published guidelines on when downvoting is and is not appropriate. We can collectively filter out the pure spam efficiently to make it less worthwhile to post.
mancerayder · 6 days ago
Minimum karma perhaps?

It's easy for people to game but it's at least one more effort-based hurdle.

rvz · 6 days ago
I welcome this. Lots of AI slop has been thrown onto this site, and the drawbridge eventually needs to be raised a little.

Can't allow low-quality posting from new accounts here but thank you for listening to the concerns.

Dead Comment

Dead Comment

Dead Comment

AussieWog93 · 6 days ago
Reddit has tried this approach and, IMO, it's failed.

A new human user will spend actual time creating a thoughtful and helpful post, only to be greeted by "sorry, your post has been removed by automod because you don't meet criteria". They get disheartened and walk away forever.

The spammers, on the other hand, know how the rules work and so will just build their bots to work around them (waiting 30 days, farming karma).

The net result is that these rules ensure that a much greater proportion of new accounts come from bad actors - who else would jump through hoops just to participate on a web forum?

stuartjohnson12 · 6 days ago
It failed on Reddit because Reddit is maintained by a bunch of volunteers to whom Reddit provides woefully, woefully, horrifically underdeveloped tooling to automate their communities in a more nuanced way. Hacker News has three advantages. First, it is moderated by the same people who build the tooling, so the incentives are aligned. Second, it is an enormous source of soft power for a venture capital firm with the resources, incentives, and likely the competence and capacity to keep it running smoothly. Third, the scale is smaller and is not tied to hardline revenue constraints like CPM, user LTV and DAU-maximization which restrict what Reddit can do.
spartanatreyu · 6 days ago
> It failed on Reddit because Reddit is maintained by a bunch of volunteers to whom Reddit provides woefully, woefully, horrifically underdeveloped tooling to automate their communities in a more nuanced way.

Not to mention Reddit mass-removed experienced moderators when they protested Reddit removing their access to good third-party tooling.

That's the day the site started its death spiral.

qingcharles · 6 days ago
Moderating Reddit subs can be a huge money maker. I know people making $100K/year from it. There are cabals, especially in the adult sections. Reddit has tried to address this recently by limiting the number of subs a person can moderate, but that just causes these big accounts to create more user accounts and split all their subs up that way.
mschuster91 · 6 days ago
> It failed on Reddit because Reddit is maintained by a bunch of volunteers to whom Reddit provides woefully, woefully, horrifically underdeveloped tooling to automate their communities in a more nuanced way.

And on top of that, some of said "volunteers" are power-hungry, petty, useless fucking morons. Especially the large subreddits tend to be run by people I wouldn't trust to boil some pasta without triggering a fire alert, and yes I know people who manage that.

onionisafruit · 6 days ago
It’s worse than that. On r/news they shadow ban anybody who doesn’t have verified email. No message or anything. Just nobody sees your comments. I probably made 20 or more comments there over a few months before I figured it out. It felt humiliating.
qingcharles · 6 days ago
It's even worse than that. They preemptively ban you outright on lots of major subs for posting on other subs. For instance, I can't interact with r/pics because I once commented on r/redditachievements. And a housemate once upvoted a pic on there which got us both banned for a week because Reddit thought I was trying to do a run-around on the ban.

I still love Reddit for all its flaws though.

greazy · 6 days ago
There needs to be a distinction between creating a post and replying.

IMO New accounts should be restricted from creating new posts, or at least certain kinds of new posts.

Replying shouldn't be restricted. That is how users interact with each other and learn the etiquette of HN.

admiralrohan · 6 days ago
I agree. I faced this in the psychology subreddit and was forced to quit. They wanted karma to post comments, but without posting comments, how am I supposed to get karma within that community?
munksbeer · 6 days ago
Literally me on a DIY sub. I needed some advice, got auto removed, never went back.
throwaway2037 · 6 days ago
Same. Not DIY, but my first post was rejected and I was banned. LOL. I guess that is moderation in action!
a456463 · 4 days ago
Same for Stack Overflow. I tried it once and have never engaged with Stack Overflow since, whereas I am active here. If this goes, I'm not posting here either. This would then be another echo chamber.
realaaa · 5 days ago
100% agreed on this

this is the reason I never was keen on StackOverflow etc

tried posting there several times, many times actually - every time some annoying condition was not met

well screw you too then! walked away and never bothered to contribute again

nurettin · 6 days ago
> waiting 30days, farming karma

If "farming karma" is a thing, maybe that forum deserves what is coming. Either the karma mechanic is inappropriate given the demographic, or it is too hard for the users to avoid upvoting bots.

Justkog · 6 days ago
you are indeed describing my reddit experience, hence why I did not participate there while being a human
alabhyajindal · 6 days ago
100%. Not sure what the solution is but I have lost interest in Show HNs these days. Part of it is because when someone posted before, it usually meant they spent a fair amount of time thinking, and found it worthwhile to spend energy on the project. This was a nice first filter for bad ideas and now no longer exists.

Even for posts that are interesting to me, I get the feeling that it's not worth looking at because it was probably made using LLMs. Nothing against them, but I personally thought of Show HNs as doing something for the love of it, the end result being a bonus.

tombert · 6 days ago
I certainly hope they do something.

I'm not opposed to AI automating away stuff no one liked doing, or even more utilitarian things in general, but robots posting on social media and discussion sites seems antithetical to the point of those sites. I don't know what the point of talking to a robot would be when I could talk to Claude if I wanted to do that.

I'm not even 100% sure why people are doing Show HN for low-effort stuff that was done in 45 minutes in Claude. I guess it's trying to resume-pad or build a brand or something?

heavyset_go · 6 days ago
> I'm not even 100% sure why people are doing Show HN for low-effort stuff that was done in 45 minutes in Claude. I guess it's trying to resume-pad or build a brand or something?

Github star farming, SEO, etc

clbrmbr · 4 days ago
Here to say I'm one of those people who did my first Show HN recently, and it was 100% due to the lowered activation energy to build something awesome with Claude. Not 45min, but took about 6 hours of my time, and benefitted from testing against a 10yr old firmware codebase at my startup.

So I guess I'm saying the ideal rate of Show HN posts has probably gone way up. Unfortunately it's also resulting in a lower SNR. Not sure what to do about it tho.

ex-aws-dude · 6 days ago
Someone telling you about their AI created project is like someone telling you their dream they had last night.
basilikum · 6 days ago
Don't be so hard on dreams. They are a creative work of a human's subconscious.
wolvoleo · 6 days ago
I'm not sure that using an LLM means a project wasn't made with love. It just makes programming accessible to more people; essentially it's still just a tool.

It does take the handcraft out of it, in that sense an LLM-made tool would be more akin to IKEA stuff compared to a handcrafted work of art (though I struggle to call even hand-made electron crap a work of art, lol).

But yeah I know what you mean, they are usually half-finished solutions.

113 · 5 days ago
> I get the feeling that it's not worth looking at because it was probably made using LLMs

This is the big one for me. A small toy website someone made as a passion project used to be the big draw of HN for me, but now I just assume it's a vibe-coded mess that'll 404 in 7 months.

Dead Comment

diacritical · 6 days ago
Some feedback and suggestions, in a somewhat rambling fashion:

I'm using a new account and will likely use one forever, as I don't want lots of posts linked together, nor do I care about points or karma or whatever it's called. My first few comments are always shadowbanned. I also see lots of dead posts for new accounts with "showdead" turned on. A lot of them are normal, useful comments, some are inflammatory or just plain stupid. I haven't seen many comments that seem to be AI generated. Maybe they are and I just don't see it, idk.

Anyway, if a comment passes some basic filter (doesn't post shady links or talk about VIAGRA or 11 INCH PENIS or something spammy), I hope they still show up, even as "dead". On this account I copied 1 dead comment to give it more visibility and I've done it before a few times, too. The comment is still dead, btw (id 47262467). And maybe instead of (shadow)banning new users/posts, just make a separate view for old/established account and another one for all posters.

I would also be glad if I could solve some CPU- or RAM-intensive task as PoW. If I really had to, I'd pay with Monero or something similar, as long as it's an anonymous currency with low fees so a payment equivalent to 25 cents wouldn't incur a big fee. I wouldn't pay more per account (especially when I rotate them), as I've been a lurker for years and only recently started posting, anyway (so I don't care that much if I can post).

Finally, thanks for letting us sign up over Tor. :)

ProllyInfamous · 6 days ago
You can use a longer-established HN account to vouch for your burner's posts... unless you're worried about HN linking your accounts, too (not just public scraping), which they can, by IP address.
diacritical · 6 days ago
I'm using Tor, so HN can't link the IP address. Naturally, I trust HN more than random visitors/scrapers, but it's better to trust fewer people/institutions/things.

In the current system people can vouch for dead posts from shadowbanned new accounts, if I understand correctly. It seems people do it, to a certain degree at least, because I rarely see good comments that stay dead forever.

BalinKing · 6 days ago
I furthermore wish that "posting an LLM-generated comment (i.e. and passing it off as your own)" was worthy of an instant ban, because I see this sort of behavior from non-green accounts as well.

EDIT: I meant (but totally forgot) to qualify that my "proposal" would only apply when the LLM-ness is self-obvious—idk, make up a "reasonable person" standard or something. Presumably, the moderators would err on the side of letting things slide. Even so, many comments I've seen are simply impossible for any reasonable person to claim as "human-written"—the default ChatGPT style is simply too distinct.

tomhow · 6 days ago
> I furthermore wish that "posting an LLM-generated comment (i.e. and passing it off as your own)" was worthy of an instant ban

It pretty much is. It’s not hard and fast (sometimes we’ll warn people or email them to ask if it’s not certain) and it takes time for us to see things and act, especially when people don’t email us when they see these comments.

But as a general rule, accounts that post generated comments get banned.

tasuki · 6 days ago
I think your comment was generated by an LLM and hereby vote for your immediate and permanent instant ban.
MikeTheGreat · 6 days ago
I think that your comment was generated by Eliza, and hereby vote for you to get a karma boost for being Legit Old School, then an immediate and permanent instant ban.

I'm joking, of course. If your comment was generated by Eliza it would have started with "How do you feel about 'I think your comment...'" :)

UncleMeat · 6 days ago
I've seen people admit it. I've even seen a commenter say that they were an agent. We can do these cases.
lich_king · 6 days ago
Many HNers strongly argue that it's absolutely impossible to distinguish between AI text and non-AI text. Some of it seems to be a knee-jerk reaction to some of the occasional, one-sided stories of people who were accused of using LLMs and fired from their jobs. And some of it seems to be just hedging so that we don't develop a culture that could penalize their LLM-generated posts or code.

We had people defending the fired Ars Technica guy, even though he admitted to using an LLM in some sort of a contrived non-apology along the lines of "I did it because I had a cold".

My main problem with that is that you can just generate an infinite supply of LLM op-eds about LLMs, and is this really what we want to read every day? If I want to know what ChatGPT thinks about the risks or benefits of vibecoding, I'll just ask it.

mschuster91 · 6 days ago
> Many HNers strongly argue that it's absolutely impossible to distinguish between AI text and non-AI text.

And it's becoming more and more difficult - not just by AI getting "better" (and training removing many of the telltale signs), but also because regular people "learn" to write like an AI does. We're seeing it with "algospeak" - young terminally online people literally say stuff like "unalived" in the meatspace nowadays.

We're living in a 1984 LARP.

wolvoleo · 6 days ago
Hmm, some LLM text is hard to detect, sure.

Some is also horribly easy. If the text is full of:

- Overly positive commentary and encouragement

- Constant use of bullet point lists, bolding and emoji

- This quaint forced 'funniness', like a misplaced attempt at being lighthearted

- A lot of blahblah that just misses the point

- Not concise and to the point, but also not super long

Then that really screams ChatGPT to me.

I think it's because this seems to be the default styling of ChatGPT. When people tailor their prompt to be more specific about style it's a lot harder to detect but if they just dump a few lines of instructions about the content into it, this is what you'll get. So the low-effort slop is still pretty easy to detect IMO.
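The tells listed above could be turned into a crude scoring heuristic. Everything here is made up for illustration (the markers, the weights, the cliché list); real moderation would need tuning and a human in the loop, since this kind of check false-positives on perfectly human writing.

```python
import re

# Illustrative stylistic tells with made-up weights.
MARKERS = [
    (re.compile(r"^\s*[-*•] ", re.M), 1),         # bullet-point lists
    (re.compile(r"\*\*[^*]+\*\*"), 1),            # bold markup
    (re.compile(r"[\U0001F300-\U0001FAFF]"), 2),  # emoji
    (re.compile(r"\b(delve|tapestry|game.changer|let'?s dive in)\b", re.I), 2),
]

def slop_score(text: str) -> int:
    """Sum of weighted marker hits; higher = more default-ChatGPT-ish."""
    return sum(weight * len(pattern.findall(text))
               for pattern, weight in MARKERS)
```

A score above some threshold would only flag a comment for human review, never auto-ban it.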

toraway · 6 days ago
Sure, it's obviously impossible to ID any single piece of writing as from an LLM without significant false positives.

But in practice, I frequently encounter a comment that either screams generic LLM slop or just gives off a vague, indefinable "vibe" due to one or more telltale signs; that's red flag #1. Then I go to the comment history. At that point, if it's really a bot/claw/agent or a poster heavily using LLMs, I'll usually find page after page of cookie-cutter repeats of the exact same "LLM smell" (even if that account has been prompted to avoid em-dashes/lists/etc., it still trends toward repetition of its own style).

At that point a human moderator would have more than enough evidence to ban an account. It's not like we're talking about a death sentence or something. If no clear pattern of abuse from the long term commenting activity, then give them the benefit of the doubt and move on.

delichon · 6 days ago
The moderators are supposed to just know it when they see it? It's that black and white to you? Or are lots of false positives a price we have to pay?
andai · 6 days ago
Yeah it's weird, there was one case where I thought it was AI but wasn't sure. Several other comments pointed it out, too. Author claimed he wrote it manually. (Which is honestly even more concerning!)

Maybe there can be a dedicated 'flag botspam' button?

Then again it's a nuanced issue. I see AI used in a large percentage of writing now, so would this rule apply to the article as well?

lokar · 6 days ago
It’s only going to get harder as people continue to model their writing on LLM style.
shimman · 6 days ago
Something we need to remember is that AI was trained on every public internet comment, the vast majority of which are legit terrible. The biggest tell that someone is using AI is multiple paragraphs making the same point over and over again. Even trolls are more succinct.
zahlman · 6 days ago
In some fraction of cases, it's really obvious.

I would argue that those cases are really the ones that cause an LLM-specific harm, i.e., which make people feel like they aren't exclusively among fellow humans.

If someone posts something that doesn't clearly read LLM-ish, but is otherwise terrible, it's not really different from if the same terrible thing had been written by hand.

I don't think anyone who objects to LLM comments is really demanding a super-low false negative rate. Just get rid of the zero-effort stuff. For example, recently I've seen a lot of comments from new accounts that are just sycophantic towards TFA and try to highlight / summarize a specific idea or two, but don't really demonstrate any original thought (just, like, basic reading comprehension and an ability to express agreement). And they'll take a paragraph to do so, where a human with the same level of interest in the material might just say "good post" (granted, there's an argument to be made for excluding that, too).

BalinKing · 6 days ago
Sorry, updated my original comment—I meant to qualify it to only those cases where it's blatantly obvious. Obviously a lot of ambiguous comments will slip through as a result, but I agree with you that false negatives are better than false positives.
Gabrys1 · 6 days ago
Can use AI to detect that
dmix · 6 days ago
People accuse everything of being LLM-generated these days. That'd be a tough rule to enforce.
heavyset_go · 6 days ago
Do this with submissions, too. Or at least put some indicator that it's AI generated.
mapontosevenths · 6 days ago
I am more annoyed by the anti-AI luddites filling the comments with low value complaints than I am by quality content written partially by an LLM.

Those low-value complaints add nothing to the conversation, and the content didn't make it to the front page because it was bad. If the sole objection is "AI bad", keep it to yourself... it's boring.

ThrowawayR2 · 6 days ago
The guidelines haven't even been updated to say that AI-generated posts and submissions aren't permitted, even though it's been the policy for a couple of years now, if one searches for postings by the moderators. So outsiders and new HN users have no reason to know that it's not allowed. I'm sure there are reasons for it, but the inaction is all very mysterious from an outsider perspective.
i_think_so · 6 days ago
This obviously should have been done years ago. @dang is there a reason it hasn't?
edanm · 6 days ago
I disagree with this policy.

Some people can really benefit from using LLMs to help them write. E.g. non-native speakers.

LLM-assisted writing doesn't have to be low effort; it can help people express themselves better in many cases. I'd argue that someone who spent their time doing multiple passes with an LLM to get their phrasing just right has obviously taken more care than the majority of people on HN take before commenting.

And if you don't like the way something is written? Just down vote it. That's true whether or not it's partially/wholly written by an LLM.

christofosho · 6 days ago
Aren't down votes on this forum restricted to 500+ karma? And how would those compare to flagging? I'd hate for people under 500 karma to think they need to flag a post in order to have it get any attention by moderation. And, with your idea that LLMs help folks write, wouldn't that make the community worse for them?

And what about users like this, whose comments are very much entirely LLM-generated and possibly even a bot? https://news.ycombinator.com/threads?id=BelVisgarra

lovich · 6 days ago
> LLM-assisted-writing doesn't have to be low effort, it can help people express themselves better in many cases.

Hard disagree. I have been learning another language and wouldn’t pretend to write posts after an LLM rewrote it because it is literally lower effort than learning the language correctly.

Like definitionally, you are using a machine to offload effort. I don’t know how you could claim that is not “low effort” when that’s the point of the tool.

rukuu001 · 6 days ago
Absolutely this:

> Some people can really benefit from using LLMs to help them write. E.g. non-native speakers.

rerdavies · 6 days ago
I think all submissions to HN should be submitted via snail-mail, and must be handwritten. That would solve the problem.

/heavy sarcasm

That being said, my mother used to insist on hand-written cover letters from job applicants. Her rationale: it takes effort, so it weeds out all the applications from people who are just randomly spraying out applications for jobs they are not qualified for.

layer8 · 6 days ago
Unfortunately I don’t think that it would solve the problem: https://www.google.com/search?q=handwritten+mail+service&udm...
rjh29 · 6 days ago
Marking the sarcasm here really ruins your humour.
rpcope1 · 6 days ago
Setting aside that this would probably be challenging to enforce fairly, I agree: strong proof that an account is largely or completely posting comments, stories, or whatever else adulterated by an LLM is probably ban-worthy, like you said.
AnimalMuppet · 6 days ago
I think you need (at least) one exception to that rule. We have many people here whose first language is not English, and this is an English-only forum. For at least some of those people, an AI translation may give better clarity than their own attempt at writing in English.

So I would propose, in the ideal world where we could perfectly enforce the rules we chose, that the rule be "AI for translation only". If it wrote your content, your comment is gone. If it translated content that you wrote, your comment is still welcome.

Deleted Comment

Razengan · 6 days ago
How ironic, a comment advocating for banning LLM comments using em dashes

What if someone used an LLM to just translate?

throwaway2037 · 6 days ago
When I read comments like this, I think about the average Joe who says: "Most people are terrible drivers." Then, someone asks them: "Are you a terrible driver?" They respond: "Of course not. I am an excellent driver." A few people roll their eyes.

    > worthy of an instant ban
First, it is not always possible to identify an LLM-generated comment; there are too many false positives. Imagine if this system were implemented, one of your comments was identified as LLM-generated, and you were instantly banned. How would you feel about it?

i_think_so · 6 days ago
Maybe we need a reverse Turing test and award: humans write things that are indistinguishable from AI slop.

I have no idea what that could be useful for, but since the Turing test is now essentially beaten maybe its usefulness has come and gone too.

> Imagine if this system was implemented, and one of your comments was identified as LLM-generated and you were instantly banned. How would you feel about it?

It sounds like a fast, efficient, inexpensive and foolproof recipe for destroying a community. Let's use that as a future test: anyone who advocates for it is undeniably trying to destroy HN, so they get downvoted to 1 karma and permanently blocked from voting on anything else.

Dead Comment

jacquesm · 6 days ago
For now there is already a pretty effective mechanism in place, downvote and/or flag those comments that you think are across the line in that sense.

But in principle I agree with you. The rule for me is: "if it wasn't worth your time to write, then it certainly isn't worth 1000x other people's time to read".

AnimalMuppet · 6 days ago
Exactly. If your LLM wrote it, then my LLM can read it. I don't want to.

Deleted Comment

delichon · 6 days ago
There is an epistemic silver lining. This is in fact a Red Queen's race that cannot be won. So in the end the only solution is to evaluate the text on its own merits without reference to the writer's status, because that status can no longer be reliably detected. For a public feed like this one, the only alternative is to ignore it. The fire hose of data will inevitably become ever more fecal. We can only walk away from it or be more careful about the pearls we pluck out. It ends well only if we get better at pearl detection.
dextrous · 6 days ago
One way I could imagine a human-only HN evolving in the coming AI wasteland: motivated individuals join small local groups and are validated face-to-face at meet-ups. Local trusted leads gatekeep their chapter's posts, and this scalable moderation works up the tree. Bad leaves get culled out reasonably fast; maybe there are some controls at the top level that let you see more content "lower down the tree" if you're OK with a lower SNR. Latency to get a post widely distributed grows, but I don't see that as a massive problem.
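The chain-of-vouches structure described here is simple to model: each member points at whoever vouched for them, and a post is accepted only if an unbroken chain of vouches reaches the root. A toy sketch in Python; all the names and the `vouched_by` layout are invented for illustration:

```python
# vouched_by maps member -> who vouched for them.
vouched_by = {
    "lead_nyc": "root",       # chapter lead, vouched by the root
    "alice": "lead_nyc",      # validated face-to-face at a meet-up
    "mallory": "mallory",     # self-vouched: no path to the root
}

def trusted(user: str, vouched_by: dict, limit: int = 64) -> bool:
    """A user is trusted iff following vouches reaches the root."""
    seen = set()
    while user not in seen and len(seen) < limit:
        if user == "root":
            return True
        seen.add(user)
        user = vouched_by.get(user, user)
    return False  # cycle or dead end: cull the branch

print(trusted("alice", vouched_by))    # True
print(trusted("mallory", vouched_by))  # False
```

Culling a bad leaf is then just deleting its vouch entry, which automatically untrusts everything beneath it in the tree.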
lagrange77 · 6 days ago
> coming AI wasteland: motivated individuals join small local groups and are validated face-to-face at meet-ups. Local trusted leads gatekeep their chapter’s posts, and this scalable moderation works up the tree. Bad leaves get culled out reasonably fast,

Wow this is really cyberpunk.

I'll bring my Yubikey!

ares623 · 6 days ago
I've been thinking the same. One way to moderate is to bring back physical consequences.

I'd also like to see an "Order of the White Lotus" community (or Fight Club, if you prefer) where people who collectively agree not to use AI against each other can come together. They can still use AI (out of necessity, say), just not knowingly in their interactions with other members.

I suspect whatever form it takes the stakes will be very high to hack yourself into and pollute the space. So the more successful the community becomes, the harder it is to keep in order.

patrickmay · 6 days ago
You're giving me flashbacks to PGP key signing parties.

I do like your idea, though.

Aurornis · 6 days ago
In my recent experience, local meetups and groups are unexpectedly more prone to self promotion and low effort spamming.

Local groups have a problem where members admit their friends or pressure others into inviting their friends who are not a net positive, but it feels too impolite to refuse or to kick someone out. Meeting someone in person also develops a sense of a social bond that makes it harder to downvote or flag their posts.

Local groups have always been a haven for affinity fraud, too. Running a scam is easier when you can smile, be charismatic, and pretend to be a personal friend before springing your ask on to your victims.

foobarian · 6 days ago
Bring back the key signing parties!

p.s. @patrickmay: jinx!

bakugo · 6 days ago
> So in the end the only solution is to evaluate the text on its own merits

This falls apart as soon as you realize that evaluating the text requires far more effort than generating it. If you're spending 2 minutes reading text that took 2 seconds to generate, you already lost.

delichon · 6 days ago
That just means that you can only evaluate a smaller fraction of the data. If your goal is to do more than sample it, you've already lost.
saulpw · 6 days ago
"Cannot be won", "only solution", "only alternative": sorry, no, that's too black and white. There are other solutions, even if they will only work for a couple of days/months/years.
delichon · 6 days ago
Don't tell anyone, but I am secretly in charge and open to suggestions. Spill.
gozucito · 6 days ago
Agreed. Merit is the only fair solution. If OP noticed a garbage post, that means they evaluated a post on merit and decided it was garbage. So it works.

We have genAI generating videos and the quality sucks compared to human produced and filmed content. People call it out and nobody is going to watch a genAI movie at the theater or binge a genAI TV show. Merit based filtering.

GenAI for music is not as good as human-generated music either. Not a single AI song from Suno or Udio has reached the top40. Not even one. 100% of the songs are human because they are evaluated on merit.

We have SWE and agentic benchmarks to evaluate coding LLMs on merit.

Disclaimer: I am a new account.

delichon · 6 days ago
> Disclaimer: I am a new account.

Welcome. Illegitimi non carborundum.

zahlman · 6 days ago
The thing is, I can read something that's really terribly written and still extract useful information from it. (Suppose, for example, an LLM was directed to synthesize information from some sources that I wouldn't have thought of doing; or a submission simply makes me aware of a blind spot I had. Or I look up documentation and find something that's incredibly verbose and full of marketing-speak, but the code samples look reasonable and can be verified by testing and/or cross-reference.)
Aurornis · 6 days ago
This comment uses a lot of big words but it’s full of fallacies.

The HN user base is not perfect at detecting LLM content but a lot of it does get flagged and downvoted eventually. About once a day I’ll click on a link, realize it’s AI slop, and go back to HN to flag it but discover that it’s already flagged.

If you turn on showdead you can see all of the comments from LLM bots that have been discovered and shadowbanned.

The fallacy in the comment above is simple: it takes the current situation, extrapolates to an extreme future, and then applies that extrapolated prediction back onto the current situation. The current situation does not represent the extreme future predicted. A lot of LLM content is easily spotted, and a lot of it is a waste of time to read, therefore it's right to police and ban it, even if imperfectly.

zahlman · 6 days ago
Earlier today I found something that impressed me as awful slop, but I was hesitant to flag the submission because as far as I could tell it got the facts right (I didn't try to verify some details of who was involved with what, but I was familiar with the proposals the article was discussing).
verdverm · 6 days ago
I'm somewhat keen to adopt ATProto's feed generators and/or labeller concepts to create an alternative /new and comment prioritizer
AnimalMuppet · 6 days ago
> The fire hose of data will inevitably become ever more fecal. We can only walk away from it or be more careful about the pearls we pluck out. It ends well only if we get better at pearl detection.

I'm not sure we can. Imagine an AI that 1) creates multiple accounts, 2) spews huge numbers of comments, 3) has accounts cross-upvote, and then 4) gets enough karma on multiple accounts to get downvote privileges. That AI now controls the conversation. Anything it doesn't like, it can downvote to death.

I mean, I'm sure that HN has a "voting ring" detector, but an AI could do this on a sufficient scale to be too large to register as one cohesive group. And I think HN has a "downvote brigading" detector, but if the AI had enough different accounts, I'm not sure that would trigger, either.

The best chance to detect it is just on volume (or perhaps on too many accounts coming from the same IP address or block). But if the AI was patient, I'm not sure even that would work.

That's depressing. I don't want HN to become a bot playground, with humans crowded out. But I'm not sure we can stop it, if it was done on a large enough scale.
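To make the detection problem concrete: the simplest form of the voting-ring check mentioned above is pairwise overlap between accounts' upvote histories. A minimal sketch in Python, assuming vote logs are available as sets of item ids (the data and threshold are invented; HN's actual detector is not public):

```python
from itertools import combinations

def jaccard(a: set, b: set) -> float:
    """Fraction of overlap between two accounts' upvote sets."""
    union = a | b
    return len(a & b) / len(union) if union else 0.0

def suspicious_pairs(votes: dict, threshold: float = 0.8) -> list:
    """Return account pairs whose upvote histories overlap suspiciously."""
    return [
        (u, v)
        for u, v in combinations(sorted(votes), 2)
        if jaccard(votes[u], votes[v]) >= threshold
    ]

votes = {
    "bot_a": {101, 102, 103, 104},
    "bot_b": {101, 102, 103, 104, 105},  # near-copy of bot_a's votes
    "human": {101, 200, 300},            # incidental overlap only
}
print(suspicious_pairs(votes))  # [('bot_a', 'bot_b')]
```

As the comment notes, a patient operator defeats exactly this check by keeping pairwise overlap low across many accounts, which is why volume and network-level signals matter more than any single pair.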

Dead Comment

castral · 6 days ago
I don't understand how this is supposed to solve anything, and I've seen it suggested as a solution multiple times. If you restrict comments to older accounts, all it will do is make bot creators speculatively open accounts and proactively age them for future use.
kn100 · 6 days ago
I would argue that we shouldn't let the perfect be the enemy of the good. Adding a cost to commenting by requiring aged accounts might, I think, discourage fly-by-night operations and "experiments".
vunderba · 6 days ago
This already happens now. Go look through a few of the "Show HN" authors: you'll inevitably see several accounts that are 50-100 days old with a karma of 1, to avoid a green label.

The OP is talking about posts, not comments. The simplest solution might be to prevent someone from posting a "Show HN" until they’ve earned twenty-five or fifty karma, to demonstrate that they’ve been actively participating on Hacker News rather than using it solely to promote themselves.

brewdad · 6 days ago
This leads inevitably to karma farming bots who upvote each other’s submissions à la Reddit.

It’s a speed bump at best.

zahlman · 6 days ago
I have seen accounts that were dormant for years suddenly start posting frequently, all with slop. (I don't know if this represents people having an epiphany about AI use, or accounts being compromised or just what.)
Oras · 6 days ago
I wish for a karma-based option too, if we manage to get filters. I want to see posts only from accounts with {x}+ karma points.
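For what it's worth, a reader-side version of this is easy to sketch: HN's public API exposes a karma field on user objects, so a client script can drop submissions whose author falls below a chosen floor. A minimal sketch with the API lookup stubbed out as a dict; the threshold and all names here are made up:

```python
MIN_KARMA = 100  # the "{x}" floor; pick your own

def filter_by_karma(posts, karma_of, min_karma=MIN_KARMA):
    """Keep only posts whose author meets the karma floor.

    `karma_of` maps username -> karma; in a real script it would wrap
    a cached lookup against HN's public user API.
    """
    return [p for p in posts if karma_of(p["by"]) >= min_karma]

posts = [
    {"title": "Show HN: my side project", "by": "oldtimer"},
    {"title": "Show HN: definitely human", "by": "greenbot42"},
]
karma = {"oldtimer": 5421, "greenbot42": 1}

visible = filter_by_karma(posts, lambda user: karma.get(user, 0))
print([p["title"] for p in visible])  # ['Show HN: my side project']
```

As the replies below note, any such floor also incentivizes karma gaming, so it only works as a personal filter, not a global one.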
Springtime · 6 days ago
Would be fine as a personal filter, but used globally it would incentivize karma gaming. You can get high karma from reposts of past popular submissions (an author who reached the front page from prison once even half-joked/half-resented about how many common Wikipedia articles land on the front page for the nth time).
bakugo · 6 days ago
Have you taken a look at reddit recently? It's absolutely infested with bots farming karma, either by reposting old popular posts, or simply posting AI generated comments.

Actively encouraging this will only make things worse.

Deleted Comment

elpocko · 6 days ago
You want other people to deal with the things you don't like and filter stuff for you, to improve your own experience and shield you from the filthy masses. God forbid you have to endure a comment you don't like, your royal highness.

I'd rather see you gone than the people you complain about.

dang · 6 days ago
And also invest more effort in karma farming. In other words, if we raise the bar for Show HNs we'll probably see more generated comments in the threads.
andai · 6 days ago
Several of the posts I've seen are from autonomous AI agents, which don't currently seem to have that kind of long-term planning.

Deleted Comment

rjh29 · 6 days ago
I don't understand why we put locks on bicycles, a determined person can just saw them off.
dom96 · 6 days ago
My prediction is that nothing short of human verification is going to solve this.
thutch76 · 6 days ago
I'm very wary of this request, though I understand it. I've been reading HN daily since around 2014. My involvement was purely passive (i.e., I have been a lurker) because I really didn't think I had much to contribute that wasn't already stated better by others.

I didn't actually create my account until 2021? 2022? I can't remember. And I didn't make my first post or even comment until just last week.

While I think a minimum post count or reputation metric could perhaps reduce the AI generated posts, introducing friction also makes it harder for real people to contribute anything meaningful.

Furthermore, what does it matter if it's "AI generated"? Is some AI content ok? What's the pass/fail threshold on human vs AI generated text?

I made a Show post last week where I heavily relied on AI. I'm sure there are some "tells." But even so, I spent more than three hours working on the content of my post and my first response. Would my post have been acceptable to you?

lagrange77 · 6 days ago
> Furthermore, what does it matter if it's "AI generated"? Is some AI content ok? What's the pass/fail threshold on human vs AI generated text?

If a human put their effort into it, is proud of it, and wants to show it to the world, I'm happy to invest some time to have a look at it and maybe provide some helpful feedback.

I'm not willing to invest my time in evaluating the more or less correct-sounding ideas of an ML model.

ramon156 · 6 days ago
I don't care if the code is generated; I care if the content is. I don't want to read another "No complexity. No fuss. No buzzwords" or "It's not just a tool, it's a lifestyle". It's sooooo boring...
pesus · 6 days ago
If you're going to spend 3 hours making a post, why not just write it yourself in the first place and avoid the issue and the reputational damage?
thutch76 · 6 days ago
This is awfully narrow-minded. I had Claude give me an initial framework based on many, many hours of chat context across many different documents. It helped me organize my thoughts.

Some of us need assistance to communicate effectively. And for me, yes that took 3 hours even with this assistance.

blks · 6 days ago
Just write the text yourself; not many people enjoy reading AI-generated posts, even edited ones.