1) an AI agent is even less likely than a junior dev to notice when the docs have drifted out of sync with the code
2) AI boosters are always talking about using language models to understand code, but apparently they need the code explained inline? are we AGI yet?
3) I frequently hear how great AI is at writing comments! But it needs comments to better understand the code? So I guess to enable agentic coding you also have to review all the agents' comments, in addition to the code, to prevent drift
HOW IS ANY OF THIS SAVING ME TIME
buuut...
I will also mention that these agent files are typically generated by agents. And they're pretty good at it. I've previously used agents to dissect unfamiliar code bases in unfamiliar languages and it has worked spectacularly well. Far far FAR better than I could have done on my own.
I have also been shocked at how dumb they can be. They are uselessly stupid at their worst, but brilliant at their best.
Then other random "judges" would be asked whether the reasons given by the "accuser" are correct. There would have to be some "cost" in karma to flag a post (or a limit of X flags per day for X karma status or something) and some reward in karma for being chosen as a judge/juror.
There's also the need for a minimum flagging weight, a minimum judging weight, and a way to reconcile conflicting votes.
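To make the mechanics concrete, here's a toy sketch of the flag/judge flow described above. All the names and thresholds (`FLAG_COST`, `MIN_FLAG_KARMA`, karma-weighted vote reconciliation, etc.) are purely hypothetical illustrations, not a real HN feature:

```python
from dataclasses import dataclass, field

FLAG_COST = 5         # hypothetical karma cost to file a flag
JUDGE_REWARD = 2      # hypothetical karma reward for serving as a judge
MIN_FLAG_KARMA = 50   # minimum "flagging weight" to flag at all
MIN_JUDGE_KARMA = 100 # minimum "judging weight" to sit on a jury

@dataclass
class User:
    name: str
    karma: int

@dataclass
class Flag:
    accuser: User
    post_id: str
    reason: str
    votes: list = field(default_factory=list)  # (judge, agrees) pairs

def file_flag(user, post_id, reason):
    if user.karma < MIN_FLAG_KARMA:
        raise PermissionError("not enough karma to flag")
    user.karma -= FLAG_COST                    # flagging has a cost
    return Flag(user, post_id, reason)

def judge(flag, juror, agrees):
    if juror.karma < MIN_JUDGE_KARMA:
        raise PermissionError("not enough karma to judge")
    juror.karma += JUDGE_REWARD                # judging is rewarded
    flag.votes.append((juror, agrees))

def resolve(flag):
    # one way to reconcile conflicting votes: karma-weighted majority
    def weight(agrees):
        return sum(j.karma for j, a in flag.votes if a == agrees)
    upheld = weight(True) > weight(False)
    if upheld:
        flag.accuser.karma += FLAG_COST        # refund the cost on a good flag
    return upheld
```

The cost/refund asymmetry is one way to make frivolous flagging expensive while keeping good-faith flagging free in expectation.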
Anyway, I'd love to talk about it more, but tbh it's probably not gonna happen, also because most people don't like jury duty... Maybe when AI gets over the "hallucinations", but at that point we can also get our individual AIs to read everything and judge for us
For disabilities, well... that one I dunno. I don't have a good concept of what kinds of UI are most convenient for each type of accessibility case.
And it's a little tempting to get lost in the weeds of who watches the watchers, but to be honest, even if implemented in Hacker News's case, the mods themselves could vet flags for anomalies. Just this on its own would serve as a force multiplier for HN mods.
For more decentralized forms of moderation, one method might just be a simple flag appeal: it circles back to the community, which can discuss whether the cited rule is fair, and if it wasn't, possibly remove or limit the flagging abilities of those who cited the rule incorrectly. And possibly some increased punishment if the appeal fails? There are lots of options there. Big, wide design space.
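The appeal loop could look something like this toy sketch, where repeated bad flags cost the flagger their flagging privilege and a failed appeal costs the appellant extra karma. The `Member` type, the strike threshold, and the karma penalty are all hypothetical:

```python
from dataclasses import dataclass

@dataclass
class Member:
    name: str
    karma: int = 0
    strikes: int = 0
    can_flag: bool = True

def appeal(flagger, appellant, community_votes):
    """Resolve a flag appeal.

    community_votes: list of booleans; True means the community agrees
    the rule was cited incorrectly (i.e. the appeal succeeds).
    """
    upheld = sum(community_votes) > len(community_votes) / 2
    if upheld:
        flagger.strikes += 1
        if flagger.strikes >= 3:       # hypothetical threshold
            flagger.can_flag = False   # repeated bad flags lose the privilege
    else:
        appellant.karma -= 10          # failed appeal costs extra (hypothetical)
    return upheld
```

The escalating penalties on both sides are one way to keep the appeal channel from becoming a free retry button.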
I do think the direct text highlighting has a few important features. The Sybil attack resistance is one. That was one of the OP's primary concerns. Also, clarity on what rule was broken and why is very important, and a given rule can be verbose. It might not be obvious what specifically in a given rule was the reason for the violation. Direct highlighting lets flaggers more directly communicate what the issue is, without opening the communication channel up for a flame war.
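A direct-highlighting flag could be as small as a record pinning a character span of the post to a specific rule, so the flagger communicates exactly what tripped which rule without any free-text back-and-forth. This is a minimal sketch; the rule IDs and rule text here are made up, not actual HN guidelines:

```python
from dataclasses import dataclass

RULES = {  # hypothetical excerpt of a site's guidelines
    "no-flamebait": "Don't post flamebait or engage in ideological battle.",
    "no-name-calling": "Don't call names or cross-examine other users.",
}

@dataclass(frozen=True)
class HighlightFlag:
    post_id: str
    rule_id: str   # which rule was allegedly broken
    start: int     # character offsets into the post text
    end: int

def render(flag, post_text):
    # Show exactly which passage triggered the flag plus the rule cited,
    # without opening a free-text channel that could turn into a flame war.
    excerpt = post_text[flag.start:flag.end]
    return f'Flagged "{excerpt}" under [{flag.rule_id}]: {RULES[flag.rule_id]}'
```

Because the flag carries only offsets and a rule ID, reviewers (or mods vetting for anomalies) can check it mechanically against the post text.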