If you have an example of a moderation action that you disagreed with (E.g., a particular story about DOGE or the administration that wasn't adequately discussed on HN), please share a link or something else concrete and we'll explain it or investigate it. You can post it here or email us (we have had email threads going back years with users who want to share feedback and learn about how we think about these things [1]).
There are plenty of ways of examining the data:
- https://github.com/HackerNews/API
- https://news.ycombinator.com/item?id=40644563
- https://news.ycombinator.com/front
If you have concerns about future stories being hidden, you could set up your own API listener, monitor for new stories, and then see which ones end up flagged or killed.
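For instance, here's a minimal sketch of that kind of listener in Python, assuming the official Firebase endpoints documented in the first link above. The API marks killed items with a "dead" field, but flag counts aren't exposed, so flagged-but-still-live stories can only be inferred indirectly:

  # Sketch: snapshot the current "new" stories, wait, then re-check
  # each one to see whether it has been deleted or marked dead.
  import json
  import time
  import urllib.request

  BASE = "https://hacker-news.firebaseio.com/v0"

  def fetch(path):
      with urllib.request.urlopen(f"{BASE}/{path}.json") as resp:
          return json.load(resp)

  def snapshot_new_stories():
      # Map id -> title for the current batch of new story IDs
      # (the newstories endpoint returns up to ~500 IDs).
      stories = {}
      for item_id in fetch("newstories"):
          item = fetch(f"item/{item_id}")
          if item and item.get("type") == "story":
              stories[item_id] = item.get("title", "")
      return stories

  if __name__ == "__main__":
      seen = snapshot_new_stories()
      time.sleep(3600)  # wait an hour, then re-check the same items
      for item_id, title in seen.items():
          item = fetch(f"item/{item_id}")
          if item is None or item.get("deleted"):
              print(f"deleted: {title} ({item_id})")
          elif item.get("dead"):
              print(f"killed:  {title} ({item_id})")

A real monitor would poll on a schedule and persist its snapshots, but this shows the basic check.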
For the record, I routinely undertake practices for evaluating and improving my own judgement, and am happy to do so regarding any specific case. But you haven't provided me with any specific feedback to respond to.
[1] https://www.newyorker.com/news/letter-from-silicon-valley/th...
I don't necessarily think it's moderator malfeasance so much as people abusing HN's tools to bury stories they don't like, but I do think there should be some consideration of how those tools are being abused and how that abuse can be effectively countered.
I get the impression that an effort is being made to correct the situation, but I've given up on the front page and only visit /active now, so I might be completely wrong.
They most definitely don't. We attach symbolic meaning to their output because we can map it semantically to the input we gave them, which is why people are often caught by surprise when those mappings break down.
LLMs can emulate reasoning, but the failure modes show that they aren't actually doing it. We can get them to emulate reasoning well enough, for long enough, to fool us, investors, and the media. But doubling down in the hope that this problem goes away with scale or fine-tuning is proving more and more reckless.