It is my understanding that the HN code-base is pretty much write-only, so this is probably a tall ask, but I think it would help confidence in the site at this... turbulent time globally if people could do their own investigation into which accounts are jumping on stories to kill them.
This would be useful irrespective of your political slant, e.g. on issues like Israel-Palestine.
For the example story there are a few possibilities:
- people are sick of 'political' stories and flag them out of tedium
- there is a prevailing pro-Trump, anti-science majority of active users on the site
- there are active influence campaigns using sock-puppet accounts to hide and prevent discussion of ongoing attacks on science
The most likely answer is all of the above. But why should such anti-speech activity as flagging be private? This may already be possible via the API; if so, I'd be interested to learn how.
[0]: https://news.ycombinator.com/item?id=44961584
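As far as I can tell, the official Firebase-based HN API exposes item metadata (title, score, comment count, and a "dead" field) but nothing about individual flags or the accounts that cast them, so today the most you can check programmatically is whether a story ended up dead. A minimal Python sketch against the documented item endpoint:

    # Fetch an item from the public HN API and inspect what it exposes.
    # Note: the payload contains score/descendants/dead, but no flag data.
    import json
    import urllib.request

    def fetch_item(item_id: int) -> dict:
        url = f"https://hacker-news.firebaseio.com/v0/item/{item_id}.json"
        with urllib.request.urlopen(url) as resp:
            return json.load(resp)

    item = fetch_item(44961584)  # the story linked above
    print(item.get("title"), item.get("score"), item.get("descendants"))
    print("dead?", item.get("dead", False))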
I often flag submissions or comments when they go against the rules (sometimes written, sometimes unwritten) of the site.
I'm generally not willing to publicly justify or defend each flag.

So, if these flags become public, I'll just stop flagging. I'm sure I'm not alone. I consider this a negative outcome of making flags public.

Does that negative outcome:

a) exceed the likelihood of people doing this via commenting anyway, or

b) justify the opaque and powerful nature of flagging as-is?
Perhaps your stopping flagging, if you're not willing to justify a flag, is a good outcome in aggregate? We already have mods to kill threads which violate the guidelines. But looking at the /active list, there's certainly some (probably organic) censorship of controversial threads in either direction (though my gut feeling is that it biases more towards censorship of articles about the latest outrages of the US government).
I'm not really interested in, say, Ruby; I think people should probably use type-safe languages if they want to avoid catastrophes in production and 1am pager calls. But if I see an article about Ruby, I'm just going to not engage with it. Perhaps your existing interpretation of the unwritten rules is too broad, and actually we ought to rein in the amount of flagging anyway?
I think a lot of us are generally happy with how the site operates—that's why we're here. I personally consider the moderation to be a feature—I think dang and team do a great job. I'm sure you could pick out some counterexamples but comments and posts that rise to the top tend to be thoughtful. There are exceptions. Nobody bats 1.000.
Posters don't have a right to be seen/read. That said, there are plenty of other communities that will embrace the types of posts/threads that would get flagged here.
If you have specific concerns about specific comments/stories getting flagged, it's reasonable to take each one up with the moderation team privately (there's a contact link in the footer). Just don't badger them—becoming a nuisance won't help you achieve your goals.
This would allow anyone to perform network analysis, reporting, and a full audit, and it represents a minimal level of accountability for using this functionality to close discussions down.
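To make the network-analysis point concrete, here is a rough sketch of what anyone could do if flags were published as (account, item, timestamp) records; the data format is hypothetical, since no such export exists today:

    # Hypothetical sketch: given flags as (account, item_id, timestamp) rows,
    # count how often each pair of accounts flags the same stories.
    from collections import defaultdict
    from itertools import combinations

    flags = [
        ("alice", 101, 1700000000),
        ("bob",   101, 1700000050),
        ("alice", 202, 1700003000),
        ("bob",   202, 1700003040),
    ]

    flaggers_by_item = defaultdict(set)
    for account, item_id, _ts in flags:
        flaggers_by_item[item_id].add(account)

    co_flags = defaultdict(int)
    for accounts in flaggers_by_item.values():
        for a, b in combinations(sorted(accounts), 2):
            co_flags[(a, b)] += 1

    # Pairs of accounts that repeatedly flag the same items merit a closer look.
    for pair, count in sorted(co_flags.items(), key=lambda kv: -kv[1]):
        print(pair, count)

Even something this naive would surface clusters of accounts that habitually co-flag, which is exactly the kind of minimal accountability being asked for.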
And what would be the purpose of this? "Audits" are meaningless when you have no ability to affect procedures.
The mods already have this data and they already choose to allow what they will. Neither you nor I nor anyone else has the right to hold anyone here accountable for their behavior - indeed, the guidelines explicitly prohibit doing so in most cases, because it makes for "boring reading."
Flagging should be just that: a flag for the moderation team to review the submission/comment. It should not come with an immediate downranking of the content itself; that should only happen once the moderation team has reviewed the flag and upheld it as appropriate.
If flagging weren't a simple way to kill young discussion threads, and users instead had to downvote the submissions they don't like, then discussions couldn't be so severely impacted by a minority of users.
It would work like this: When you flag a post for breaking the rules, the community's guidelines will pop up. You are then asked in this window to highlight the relevant section or sections of those rules that this post has violated. And I don't mean just "select which rule was violated", I mean "use your cursor and highlight the text of the rules that were violated." (with support for highlighting multiple sections if so desired).
This serves the following functions:
1. Communicates why something was flagged (obviously).
2. Forces the person who's flagging the submission to actually read the rules.
3. The subjectivity of the highlighting system is used to make Sybil attacks more obvious. I'll explain why after this list.
4. It differentiates flagging from downvoting. Downvoting is for saying "I don't like this". Flagging is for saying "This violates our community's rules".
As to why this helps reveal Sybil attacks: there are several subjective points regarding what, where, and how people will highlight rules. Should punctuation be included or not? Should the key word in the rule be highlighted? The key sentence? The whole section? What about examples: should we include them, or only highlight them? Users operating in good faith will cluster around common points in common areas, but will have different ways of doing so. So, if a block of users all submit the same highlights, in exactly the same way, clustered around the same time, then it was likely a Sybil attack.
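To sketch how that check might look in practice (the flag record format here, with highlight offsets and timestamps, is entirely hypothetical):

    # Rough sketch: group flags on a post by their exact highlight spans, and
    # report groups whose identical highlights arrive within a narrow window.
    from collections import defaultdict

    def suspicious_clusters(flag_records, window_seconds=600, min_size=5):
        """flag_records: list of (account, frozenset of (start, end) offsets, timestamp)."""
        by_spans = defaultdict(list)
        for account, spans, ts in flag_records:
            by_spans[spans].append((ts, account))

        clusters = []
        for spans, entries in by_spans.items():
            entries.sort()
            times = [ts for ts, _ in entries]
            # Identical highlights plus tight timing looks coordinated.
            if len(entries) >= min_size and times[-1] - times[0] <= window_seconds:
                clusters.append((spans, [acct for _, acct in entries]))
        return clusters

Good-faith flaggers would land in many small, loosely timed groups; a Sybil batch would show up as one large, tightly timed group with byte-identical highlights.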
This system doesn't require de-anonymizing the people who submit flags, but it does provide a form of publicly visible transparency as to why something was flagged.
Edit: I forgot to make clear, you would be able to see a heat map of the rules that were highlighted for a flagged post.
I'd be interested to hear any thoughts on this idea.
Then other random "judges" would be asked if the reason given by the "accuser" is correct. There would have to be some "cost" in karma to flag a post (or a limit of X flags/day for a given karma level, or something like that) and some reward in karma for being chosen as a judge/juror.
There would also need to be a minimum flagging weight and a minimum judging weight, and some way to reconcile conflicting votes.
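For the "reconcile conflicting votes" part, a weighted majority would probably be enough; a toy sketch (the weights and threshold are made up):

    # Toy sketch of weighted jury reconciliation; weights and threshold are made up.
    def reconcile(votes, threshold=0.5):
        """votes: list of (judge_weight, upheld) pairs. True if the flag is upheld."""
        total = sum(w for w, _ in votes)
        upheld = sum(w for w, ok in votes if ok)
        return total > 0 and upheld / total > threshold

    print(reconcile([(2.0, True), (1.0, True), (1.5, False)]))  # True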
Anyway, I'd love to talk about it more, but tbh it's probably not going to happen, partly because most people don't like jury duty... Maybe once AI gets over the "hallucinations", but at that point we could also just have our individual AIs read everything and judge for us.
As for disabilities, well... that one I don't know. I don't have a good concept of what kinds of UI are most convenient for each type of accessibility case.
And it's a little tempting to get lost in the weeds of who watches the watchers, but to be honest, even if this were implemented in Hacker News's case, the mods themselves could vet flags for anomalies. Just this on its own would serve as a force multiplier for the HN mods.
For more decentralized forms of moderation, one method might just be a simple flag appeal: it circles back to the community, who can discuss whether the cited rule was applied fairly, and if it wasn't, possibly remove or limit the flagging abilities of those who cited the rule incorrectly. And possibly some increased punishment if the appeal fails? There are lots of options there. Big, wide design space.
I do think the direct text highlighting has a few important features. The Sybil attack resistance is one. That was one of the OP's primary concerns. Also, clarity on what rule was broken and why is very important, and a given rule can be verbose. It might not be obvious what specifically in a given rule was the reason for the violation. Direct highlighting lets flaggers more directly communicate what the issue is, without opening the communication channel up for a flame war.
Before the usual retorts come that I can only afford to think that way because I'm not a member of a "disaffected group": my still-living parents dealt with the Jim Crow South, and my son, who grew up in the suburbs all his life, still got looked at with suspicion walking around in our neighborhood.
But that doesn't mean I want to see a dozen posts a day on HN about police brutality, BLM, the inequities in the justice system, or whatever anti-woke BS Trump was talking about today.
What possible good discussion could come out of a post about Palestine vs Israel unless it was a technical “innovation” [sic] that one side or the other was using?
I think a lot of people agree with your reasons for flagging and wish politics didn't cross over into tech, but that doesn't really bear either way on making flags public. (In the example article that prompted this, a debate about the relative benefits of different vaccine research approaches seems patently tech/science-based, but again, that's not really relevant to a proposal to make the flags public record.)
Makes sense to me why that story got flagged.
>- people are sick of 'political' stories and flag them out of tedium
Looking at the /active page, there's pretty minimal politics. So they are being flagged; the reasoning is unknown.
>- there is a prevailing pro-Trump, anti-science majority of active users on the site
lol, the polar opposite is quite true. There's virtually no support for Trump on HN. Most of us aren't in the USA, and those I've seen who are are clearly Democrats. We Canadians pretty much hate Trump; even the Maple MAGA crowd has disappeared.
>- there are active influence campaigns using sock-puppet accounts to hide and prevent discussion of ongoing attacks on science
<tinfoil> tags missing?
We have dang's word that he hasn't detected any funky behavior with respect to flagging and that these are organic events. But I don't see a reason that the information shouldn't be available. I struggle to think of a downside.
Not really relevant to your main point, but the idea that there aren't social media influence campaigns from all sides is more of a tinfoil position than acknowledging that there absolutely are, whether or not they are effective.