I feel that moderating a platform as massive as Facebook is essentially impossible.
Automating exact matches for known bad content is easy enough. But beyond exact matches, automation becomes impossible, because nuance and context can completely change the meaning of a word or sentence. Automation will get things wrong, and when it does, users want the ability to force a human to review the decision, but bad actors will abuse and overwhelm any such system.
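To make the "exact matches are easy" part concrete, here is a minimal sketch of hash-based exact matching, assuming a pre-built set of hashes of already-confirmed bad content; the names (`KNOWN_BAD_HASHES`, `normalize`) and the normalization step are illustrative assumptions, not how any real platform does it:

```python
import hashlib

# Hypothetical set of SHA-256 hashes of content already confirmed as policy-violating.
KNOWN_BAD_HASHES = {
    hashlib.sha256(b"example of known bad content").hexdigest(),
}

def normalize(text: str) -> str:
    """Light normalization so trivial edits (case, extra whitespace) still match exactly."""
    return " ".join(text.lower().split())

def is_known_bad(text: str) -> bool:
    """Exact-match check: hash the normalized text and look it up in the known-bad set."""
    digest = hashlib.sha256(normalize(text).encode("utf-8")).hexdigest()
    return digest in KNOWN_BAD_HASHES

print(is_known_bad("Example of   KNOWN bad content"))  # True: exact match after normalization
print(is_known_bad("a slightly reworded version"))     # False: falls through to human/context-dependent review
```

Everything that returns False here is exactly the part that needs nuance and context, which is where the automation breaks down.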
> I feel that moderating a platform as massive as Facebook is essentially impossible.
Not impossible. A few years ago, one report found that Facebook would need to double its number of moderators, expand fact-checking, and take a few other steps to moderate the platform adequately. Facebook won't do it because it would have to divert a small portion of its $39 billion in yearly net income toward that goal.

https://static1.squarespace.com/static/5b6df958f8370af3217d4...
Moderation is one of those things that basically cannot scale (properly) without human-level AI or an absolute TON of resources. Any generalised system lacks too much context to do a good job, and any specialised one is too expensive/resource-heavy. You can somewhat get away with it on something like Reddit, where communities moderate themselves, but that brings its own problems for both the platform and the communities on it.
Moderation absolutely can scale; platforms just don't want to pay for it, for two reasons:
- Moderation is a 'cost center', which is MBA-speak for "thing that doesn't provide immediate returns disproportionate to investment". For context, so is engineering (us). So instead of paying a reasonable amount to hire moderators, Facebook and other platforms spend as little as possible and barely do anything. This mentality tends to take hold early, in the growth phase, when users are being added far faster than you can afford to add moderators, but it persists even after sustainable revenue has been found and there is plenty of money to hire people.
- Certain types of against-the-rules posts benefit the platform hosting them. Copyright infringement is an obvious example, but that carries liability, so platforms will at least pretend to care. More subtle are things like outrage bait and political misinformation. You can hook people for life with that shit. Why would you pay money to hire people to punish your best posters?
That last one dovetails with certain calls for "free speech" online. The thing is, while all the content people want removed is harmful to users, some of it is actually beneficial to the platform. Any institutional support for freedom of speech from social media companies is motivated not by a high-minded commitment to liberal values, but by the fact that it's an excuse to cut moderation budgets and publish more lurid garbage.