It immediately makes me think of the song "Redesign Your Logo" by Lemon Demon. Pure comedy.
Top-level moderation of any sufficiently cliquey group (i.e. all large groups) devolves into something resembling feudalism. As the king of the land, you're in charge of being just and meting out appropriate punishment/censorship/other enforcement of rules, as well as updating those rules themselves. Your goal at the end of the day is continuing to provide support for your product, administration/upkeep for your gaming community, or whatever else it was that you wanted to do when you created the platform in question. However, the cliques (whether they be friend groups, opinionated but honest users, actual political camps, or any other tribal construct) will always view your actions through a cliquey lens. This will happen no matter how clear or consistent your reasoning is, unless you fully automate moderation (which never works and would probably be accused of bias by design anyways).
This looks feudal because you still must curry favor with those cliques, lest the greater userbase eventually buy into their reasoning about favoritism, ideological bias, or whatever else we choose to call it. At the end of the day, the dedicated users have far more time and energy to argue, propagandize, or skirt rules than any moderation team has to counteract them. If you're moderating users of a commercial product, it hurts your public image (with some nebulous impact on sales/marketing). If you're moderating a community for a game or software project, it hurts the community's reputation and makes your moderators/developers/donors uneasy.
The only approach I've found that unambiguously works is one that doesn't scale well at all: the veil of secrecy, or "council of elders" approach, which Yishan discusses. The king stays behind the veil and makes as few public statements as possible. Reasoning is given only insofar as it's needed to explain decisions, and criticism is answered directly only as needed to justify actions that would have been taken anyway. Trusted elites from the userbase are taken into confidence, the assumption being that they give a marginally more transparent look into how decisions are made and that they pacify their cliques.
Above all, the most important fact I've had to keep in mind is that the outspoken users, both those legitimately passionate and those simply trying to start shit, are a tiny minority of users. Most people are rational and recognize that platforms/communities exist for a reason, and they're fine with respecting that since it's what they're there for. When moderating, though, the outspoken group is nearly all you'll ever see. Catering to passionate, involved users is justifiable, but must still be balanced with what the majority wants, or is at least able to tolerate (the "silent majority" which every demagogue claims to represent). That catering must also be done carefully, because "bad actors" who seek action/change/debate for the sake of stoking conflict or their own benefit will do their best to appear legitimate.
For some of this (e.g. spam), you can filter it comfortably, as Yishan discusses, without interacting with the content. However, more sophisticated bad-actor behavior is really quite good at blending in with legitimate discussion. If you as king recognize that there's an inorganic flamewar, or abuse directed at a user, or spam, or a complaint about a previous decision, you have no choice but to choose a cudgel (bans, filters, changes to rules, etc.) and use it decisively. It is only when the king appears weak or indecisive (or worse, absent) that a platform goes off the rails, and at that point it takes immense effort to recover it (e.g. your C-level being cleared out as part of a takeover, or a seemingly universally unpopular crackdown by moderation). As a lazy comparison, Hacker News is about as old as Twitter, and any daily user can see the intensive moderation that keeps it going despite the obvious interest groups at play. This is in spite of the fact that HN has less overhead to make an account and begin posting, and seemingly more ROI on influencing discussion (lots of rich/smart/fancy people post here regularly, and even more just read).
Due to the need for privacy, moderation fundamentally cannot be democratic or open. Pretty much anyone contending otherwise is just upset at a recent decision or is trying to cause trouble for the administration. Aspirationally, we would like the general direction of the platform to be determined democratically, but the line between moderation and direction-setting is frequently blurry at best. To avoid extra drama, I usually aim to do as much discussion with users as possible, but ultimately perform all decision-making behind closed doors -- this is more or less the "giant faceless corporation" approach. Nobody knows how much I (or Elon, or Zuck, or the people running the infinitely many medium-large Discord servers) actually take user feedback into account.
I started writing this as a reply to paradite, but decided against that after going far out of scope.
For some people, it's not a matter of taste and it's not a matter of getting over it. Call it weakness if you want, call it mental scarring; either way, movies should be entertaining, not traumatic. I can't speak to dogs dying, but flippantly watching "Last Night in Soho" with somebody who really didn't need to see it convinced me to start checking the IMDb parental guide before settling on anything.
The structure of the referenced website (a long list of yes/no categories with explanations) seems like a bad fit compared to just enumerating a movie's potentially problematic features. I guess it enables categorical searching, but it seems pretty bleak to browse through a filter like this.
The CAN traffic is unencrypted. It was pretty easy to MITM this module with a cheap ARM Linux board and a CAN transceiver, which let me write a two-way filter that blocks the telematics traffic without raising any DTCs (that I observed) and can be turned on/off by the user. I preferred this approach to completely disconnecting the module (which is noticeable via errors at the diagnostic port) or trying to Faraday-cage or disable the antennas on the TCU so it can't remotely send/receive. I can also turn off my module or completely remove it before I sell the car.
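For the curious, here's a minimal sketch of the shape of that filter, assuming a SocketCAN-capable board bridging the TCU on one interface (can0) and the rest of the bus on another (can1), using python-can. The interface names and blocked arbitration IDs are placeholders, not the actual frames my setup touches:

    # Rough sketch of a two-way CAN filter: forward everything between the two
    # sides except a small set of blocked arbitration IDs. The IDs below are
    # made-up placeholders; a real setup would use the ones sniffed off the bus.
    import can

    BLOCKED_IDS = {0x7E0, 0x7E8}   # hypothetical telemetry frames to drop
    filtering_enabled = True        # toggled by the user (switch, GPIO, etc.)

    def bridge(src: can.BusABC, dst: can.BusABC) -> None:
        # Forward one frame from src to dst unless it's on the block list.
        msg = src.recv(timeout=0.01)
        if msg is None:
            return
        if filtering_enabled and msg.arbitration_id in BLOCKED_IDS:
            return  # silently drop the frame
        dst.send(msg)

    def main() -> None:
        tcu_side = can.interface.Bus(channel="can0", interface="socketcan")
        car_side = can.interface.Bus(channel="can1", interface="socketcan")
        while True:
            bridge(tcu_side, car_side)  # TCU -> car
            bridge(car_side, tcu_side)  # car -> TCU

    if __name__ == "__main__":
        main()

A real implementation would want a thread (or select loop) per direction so a quiet side doesn't add latency to the other, but the filtering logic stays about this simple.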
I fear the next version of the Miata will move to encrypted CAN like most other cars have, and even with my expertise I won't be able to access the latest safety features from new cars without surrendering what little privacy I've been able to claw back [1].
[1] https://www.mazdausa.com/site/privacy-connectedservices