Earlier in the week there were probably about 10 posts on the front page that tempted me to post "Ask HN: Why are there so many Show HN & Ask HN posts today?" I refrained, since it seemed a bit like replying all to tell everyone to stop replying all during a reply-all storm.
Glancing through the content, I wondered whether the newly launched Claude Cowork shipped with a Show HN / Ask HN skill on launch ...
I actually conducted a similar analysis back in December. I was more focused on discovering the topics that most resonated with the community, but I ended up digging into this phenomenon as well (specifically, the probability of a post getting over 100 upvotes).
The really interesting thing is that the number of posts was growing exponentially year over year, but it was only in 2025 that the probability of landing on the front page dropped meaningfully. I attributed this to the macroeconomic climate, and found some (shaky) evidence of voting rings based on topics that had an unusually high likelihood of gaining 10 points and an unusually low likelihood of reaching 100 points given that they reached 10.
I did not conduct a deep dive into the specific examples: this was my takeaway from a slope plot comparing which topics clear a 10-point threshold (i.e., escape the New page) vs. which topics clear a 100-point threshold.
> Nearly every AI-related topic does worse once it clears the 10-point threshold than any other category. This means that either the people looking through the New and Show sections are disproportionately interested in AI, or something else is inflating those early votes. The former is very possible, but from my interaction with this crowd on my own posts, these users tend to be more technically minded (think DIY hardware rather than landing-page builders).
Good to know that this would be helpful. My tendency would be to dig a bit more into the individual examples that fall into this more suspicious bucket before presenting the evidence formally, but I'm curious whether you think these high-level results are sufficiently helpful?
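A minimal sketch of the kind of threshold comparison described above, assuming made-up topics and scores (real data would come from the HN API):

```python
# Hypothetical sketch: for each topic, estimate
# P(post reaches 100 points | it reached 10 points).
# Topics and scores below are illustrative, not real measurements.
from collections import defaultdict

posts = [  # (topic, final score)
    ("ai-agents", 12), ("ai-agents", 140), ("ai-agents", 15),
    ("diy-hardware", 11), ("diy-hardware", 250), ("diy-hardware", 130),
    ("landing-pages", 4), ("landing-pages", 18),
]

cleared_10 = defaultdict(int)   # posts that escaped the New page
cleared_100 = defaultdict(int)  # posts that made a strong front-page showing
for topic, score in posts:
    if score >= 10:
        cleared_10[topic] += 1
        if score >= 100:
            cleared_100[topic] += 1

# A topic that clears 10 points unusually often but rarely goes on to 100
# is the pattern flagged above as (shaky) evidence of voting rings.
for topic, n10 in cleared_10.items():
    print(f"{topic}: P(100 | 10) = {cleared_100[topic] / n10:.2f}")
```

The slope plot mentioned above would then compare each topic's 10-point clearance rate against this conditional 100-point rate.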
It will become a ghost town.
It became a gallery of other people's prompts.
It used to mean something else; one would expect care put into a passion project.
I think there should be an AI-assistance badge on every GitHub project. I don't want to look at the contribution graph and commit history, and then eventually the source code, to find out the same information. What are we hiding now?
The reason they give a badge (Claude as author) is so you can show off on LinkedIn how you are AI-first.
Using AI, from the braindead normies' perspective, is cool. There is an economic reason for people to want their AI usage to be perceived.
It is the equivalent of showing your support flag and pronouns in 2020.
If at any time people start using this information to filter out content, they'll hide it immediately.
Nobody has your satisfaction as a high priority, you know.
My qualitative experience is that, far from being lower quality, the Show HN posts that make the front page seem to be increasing in quality.
There are a number of possible explanations for this that offset the lower average score. The obvious one is the filtering effect of the front page combined with a higher volume of content. Perhaps we are also seeing higher standards: a project that used to take six weeks and a ton of conviction now wraps up in a few hours, and people are resetting their expectations.
I’ve seen a few submissions recently that look great - convincing landing page, complex app, modern design - but are almost unusable due to JavaScript bugs. I’m guessing it’s the power of AI.
When I use the search function for topics I'm interested in and encounter Show HN posts that way, they're predominantly slop (and have been for most of the time I've had an account). The actual Show HN page is not much better.
The submissions that actually get upvoted are indeed pretty good. I think it really is the filtering effect. Standards are beside the point, since it's clear that a lot of these submissions are close to one-shot (and even where they would have required some refinement, people don't push a meaningful commit history), topped off with an obnoxious promotional README in the LLM house style.
Often the submission itself also comes across as LLM-generated, including heavy use of Markdown formatting. It gives the impression that people have learned that HN is a place to promote themselves, but don't realize how blatantly obvious it is that they didn't actually do anything significant beyond thinking of something for Claude to do[1], and don't care about learning how the site works.
[1] I'm not claiming that work done with coding agents will always be blatantly obvious. I'm claiming that this is the default result for people who don't put in any effort, and lack of effort correlates with lack of understanding.
Possible, sure. But likely? Increased submissions with no change in average quality can fully explain an increase in front-page quality, as can increasing average quality, of course. And scores aren't a quality metric; they're a popularity metric. The decrease in scores can also be fully explained by increased submissions. So there doesn't seem to be any reason to suggest quality is decreasing…
This is an unfortunate trend we will see across software going forward. When the bar to make something is low, the market inevitably gets flooded with cheap, mediocre stuff that overshadows everything else. Soon there won't be an incentive to make high-quality things, because even if you did, you wouldn't be able to grab anyone's attention with them; it's all taken away by the endless slop that won't stop.
Months ago, I didn't refrain: https://news.ycombinator.com/item?id=44780249
Analysis here if anyone is interested: https://blog.sturdystatistics.com/posts/show_hn/
Last visual in the following section: https://blog.sturdystatistics.com/posts/show_hn/#digging-int...
Rotten lemons all the way down.
Thus the rise of the influencer economy. What better way is there to learn about something than from somebody you trust?
However bad things are or will be, trusting "influencers" is the last thing you should do.