nindalf · 4 years ago
Huh, never thought I’d see XCheck in a news article. I used to work at Facebook and spotted abuse of this system by bad actors and partly fixed it. It’s still not perfect but it’s better than it used to be.

I think I might have agreed with the author of this article before working in Integrity for a few years. But with time I learned that any system that’s meant to work for millions of users will have some edge cases that need to be papered over. Especially when it’s not a system owned and operated by a handful of people. Here’s an example - as far as I know it’s not possible for Mark Zuckerberg to log in to Facebook on a new device. The system that prevents malicious log-in attempts sees so many attempts on his account that it disallows any attempt now. There are no plans to fix it for him specifically because it works reasonably well for hundreds of millions of other users whose accounts are safeguarded from being compromised. His inconvenience is an edge case.

With XCheck specifically what would happen is that some team working closely on a specific problem in integrity might find a sub population of users being wrongly persecuted by systems built by other teams located in other time zones. They would use XCheck as a means to prevent these users from being penalised by the other systems. It worked reasonably well, but there’s always room for improvement.

I can confirm some of what the article says though. The process for adding shields wasn’t policed internally very well in the past. Like I mentioned, this was being exploited by abusive accounts - if an account was able to verify its identity it would get a “Shielded-ID-Verified” tag applied to it. ID verification was considered to be a strong signal of authenticity. So teams that weren’t related to applying the tag would see the tag and assume the account was authentic. And as I investigated this more I realised no one really “owned” the tag or policed who could apply it and under what circumstances. I closed this particular loophole.

In later years the XCheck system started being actively maintained by a dedicated team that cared. They looked into problems like these and made it better.

bo1024 · 4 years ago
Thanks a lot for posting these details and dealing with the critical replies.

I think that with your background and investment in improving these problems, it will be hard for you to understand the perspective many people have that Facebook is fundamentally rotten at this point. These conflicts arise from FB's core business model. It calls up a torrent of hate speech and misinformation with the right hand while trying to clumsily moderate with the left.

You can hire whole teams to prevent singed fingers or protect certain possessions, but the point of a fire is to burn. If there are no good solutions while maintaining FB's core approach and business model, then it would be better for the world if it were extinguished.

helen___keller · 4 years ago
> It calls up a torrent of hate speech and misinformation with the right hand while trying to clumsily moderate with the left.

Not a Facebook employee (or supporter for that matter), but I'm curious if you consider this an issue of Facebook or of social media in general.

Not saying it's OK for FB because everyone does it, but you generally see the same dynamic of the "torrent of hate speech and misinformation" on Twitter, on Reddit, on Youtube even (personal experience: I have a family member that was radicalized by misinformation on the internet. It was all on Youtube, she had never even used Facebook).

I've noticed that people go a lot harder on Facebook than on other tech companies. I think Facebook's reputation is well deserved, but I do think that reputation should be shared with really all social media in general.

cbsmith · 4 years ago
Note that moderation is not part of the business plan. FB was pretty much dragged kicking and screaming into that function.
smsm42 · 4 years ago
I think people that work on this feature mean well - or at least they think that they mean well. But as a result, we have a two-tier system where the peasants have one set of rules and the nobility has an entirely different one. It may have started as a hack to correct the obvious inadequacies of the moderation system, but it grew into something much more sinister and alien to the spirit of free speech, and is ripe for capture by ideologically driven partisans (which, in my opinion, has already happened). And once it did, the care that people implementing and maintaining the unjust system have for it isn't exactly a consolation for anybody who encounters it.
q1w2 · 4 years ago
You have to remember that high profile accounts get 10,000x the number of abuse reports a normal account does - nearly all bogus. The normal automated moderation functions simply do not work.

Many users will simply mash the "report abuse" button if they see a politician they don't like, or a sports player for an opposing team.

If the normal rules applied identically to everyone, all high profile accounts would simply be inactive in perpetuity.

Maybe a better system would penalize reporters who are found to have reported content that does NOT violate content policies?
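The idea in the last paragraph could be sketched as a reporter-reputation system. This is entirely hypothetical - the class and weighting scheme below are illustrative, not anything Facebook is known to run:

```python
# Hypothetical sketch: weight each abuse report by the reporter's track
# record, so brigades of bad-faith accounts lose influence over time.

class Reporter:
    def __init__(self):
        self.upheld = 0   # reports a reviewer agreed with
        self.bogus = 0    # reports found not to violate policy

    @property
    def credibility(self):
        # Laplace-smoothed fraction of upheld reports, always in (0, 1),
        # so new accounts start at 0.5 rather than 0 or 1.
        return (self.upheld + 1) / (self.upheld + self.bogus + 2)

def report_score(reporters):
    """Weighted 'mass' of a pile of reports on one piece of content.
    Low-credibility reporters contribute far less than trusted ones."""
    return sum(r.credibility for r in reporters)
```

A fresh account starts at 0.5 credibility; a throwaway whose last eight reports were all bogus drops to 0.1, so a brigade of such accounts contributes roughly a tenth of a report per member, while a reporter with a clean record counts nearly in full.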

nindalf · 4 years ago
Let me try to explain it again. Suppose an integrity system has a true positive rate of 99.99%. That would be good enough to deploy right? Except that when applied to millions of accounts, 0.01% is still a massive number of people. This is even worse when those people are unusual in some way. For example they might open conversations with hundreds of strangers a day for good reasons. But their behaviour is so similar to those of abusive accounts that they get penalised.
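The arithmetic behind that point is worth making concrete. The account count below is a made-up round number for scale, not a real Facebook statistic:

```python
# Back-of-the-envelope arithmetic for the 99.99% figure above.
# The account count is illustrative, not a real statistic.
accounts_evaluated = 1_000_000_000   # hypothetical scale
error_rate = 1 - 0.9999              # the 0.01% that gets misclassified

wrongly_penalised = accounts_evaluated * error_rate
print(f"{wrongly_penalised:,.0f} accounts wrongly penalised")
# prints "100,000 accounts wrongly penalised"
```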

You might say that maybe 99.99% isn’t good enough and the engineers should try for more 9s. Maybe it’s possible but I don’t know how. If you have ideas on this, please share.

Your concerns about different treatment for some people are valid. But again, their experience is different. For example, if an account or content is reported by hundreds of people it’ll be taken down. After all, there’s no reason for accounts in good standing to lie, right? Except celebrities often are at the receiving end of such campaigns. There need to be exceptions so such rules aren’t exploited in such a manner.

jensensbutton · 4 years ago
Meh, we already live in a world with one set of rules for the peasants and another for the nobility. Seems like just another area where Facebook reflects the real world.
winternett · 4 years ago
I agree with the perspective of peasants and nobility being at play...

Remember back in the day when a friend got a new game console and invited you over to check it out with the idea that you'd get to try it, but they only had one controller? Really all they wanted to do was have you come and watch them play, and you sat there until you got bored of never having a chance to engage with it? That is the modern social media experience.

They play with your ability to be visible even to people who follow you for updates on your posts. The only way for non-elites (people not deemed worthy of ranking) to be seen is to pay for ads, which appear as lower-quality "promoted" content.

The model of social media started with everyone on the same playing field, but there are so many dimensions that can be manipulated to keep users thinking that it is functional while these sites shift to serving the purpose of generating revenue for partners and paying interests beneath a façade of being fair communities. If you speak out against them, they censor you as well, all behind the scenes.

It's simply better to go back to creating independent sites, and then to hope you get ranked fairly on Google, and that people bookmark you... We become powerless when we allow corporate control of our communication, because governments are neither aware of nor vigilant about the impacts, and don't regulate social sites until it's far too late, and because profit is king in that world over simply doing what is fair and positive. The business model feeds misinformation, chaos, disharmony, and conflict, just like reality TV does now. Why? Because it keeps people glued to their screens.

Even these platforms are terrified of instituting positive changes out of fear of losing their market share and user base... Overnight Twitter, Facebook, IG, or any of these sites can lose their user base and reach just like Clubhouse and Vine... That has to be said as well.

A big problem is that many accounts that post sensationalized (violent, graphic, sexual) content are really run by people who are building follower accounts to sell later on, as a symptom of heavily limited organic account growth (the ability to get followers) on these platforms... People build accounts by posting wild content and then sell them on the black market to others, who start out looking like popular individuals because the accounts come with followers already included. Being successful on social media is no longer about having quality content, it's about how much you pay and how professional your image is... No wonder classism has taken hold on it all.

There are still plenty of ways of maintaining fairness on any of these communities/platforms, and company leadership needs to go back and review the original promises they made to everyone in order to build their current user base (promises that they've all now totally broken) and fix those issues as the primary basis for resetting their flaws and oversight.

lmilcin · 4 years ago
I disagree.

Let's take your example of Mark Z.

What makes you think that this is a unique case? What about people who suddenly come to fame, like viral video subjects?

A simple solution is to disallow logging in from new devices entirely, with attempts silently dropped so you are not bothered, unless you do some magic like generating a one-time key to complete the procedure on the new device.

I could think of a lot of people that would find it useful.

Or allow setting up a 2FA token (other than mobile) correctly.

Instead what FB does is make it impossible to secure your account, because they insist that, whatever you want, you should always be able to recover your password with your phone number.

Years ago when I was still using it (I had reason) I tried to secure it with my Yubico. Unfortunately, it wasn't possible to configure FB to not allow you to log in on a new device without the key.

I understand how the discussion probably went: "Let's make it so that we can score some marketing points, but let's not really make it a requirement because we will be flooded with requests from people who do not understand they will never be able to log in if they lose the token."

But that's exactly what I want. I have a small fleet of these so it is not possible for me to lose them all, but unfortunately most sites that purport to allow 2FA can't do it well, because they either don't allow configuring multiple tokens or, if they do, they don't let you really lock your account so that it is not possible to log in the next time without the token.

Ansil849 · 4 years ago
> unfortunately most sites that purport to allow 2FA can't do it well, because they either don't allow configuring multiple tokens or, if they do, they don't let you really lock your account so that it is not possible to log in the next time without the token.

This is a great point. AFAIK, Google is the only service which allows you to set mandatory U2F login requirements. Does any other service offer this functionality?

cbsmith · 4 years ago
The never-allowing-multiple-tokens thing drives me nuts.
dundermuffl1n · 4 years ago
Most of your response treats the service and its flaws as an engineering problem, whereas the ramifications in the real world aren't something Facebook gets to absolve itself from. They need to own the problem completely. If they can't solve the issue through engineering, it is their responsibility to hire hundreds of thousands of moderators.
otterley · 4 years ago
You haven’t really touched on the main problem discussed in the article, which is that to Facebook, there are special users - mainly celebrities and politicians - who get to play by different rules than the rest of us. Social media was supposed to help level the playing field of society, not exacerbate its inequalities.
nindalf · 4 years ago
I did touch on that problem. Like I pointed out Zuckerberg can’t log in on new devices anymore. That’s because of the thousands of attempts per second to log into his account. Those attempts happen because he’s a celebrity. His experience is objectively different because of who he is.

It’s the same with Neymar. How many times do you think his profile is reported for any number of violations by people who don’t like him? If an ordinary person’s account got 100 reports a minute it would be taken down. Neymar’s won’t be.

I don’t know how every Integrity system could be modified to make an exception for any of these classes of accounts or how to codify it in a way that would seem “fair”. If you have an idea for a better way, you should share it.

gwright · 4 years ago
> Social media was supposed to help level the playing field of society,

Why do you think this? I mean it isn't like there was a plebiscite on what "social media was supposed to help".

As with most things of consequence in our world, social media is more of an emergent phenomenon than any sort of planned effort. We have a legislative system that is there to provide a mechanism to adapt our legal system as needed.

dragonwriter · 4 years ago
> Social media was supposed to help level the playing field of society, not exacerbate its inequalities.

This is literally the first time I’ve heard anyone voice this expectation, and it is a ludicrous expectation to have had at any point in time.

zepto · 4 years ago
> Social media was supposed to help level the playing field of society, not exacerbate its inequalities.

Supposed by whom? Zuckerberg created Facebook when he was at Harvard and had never seemed interested in leveling any playing field.

acoil · 4 years ago
Facebook from the beginning has been about ranking people - it's in the name. Whose face is prettier?
q1w2 · 4 years ago
The real underlying issue is that high profile accounts are targeted by groups of users who "report abuse" simply because they don't like that sports team/politician/etc...

High profile accounts cannot work under identical rules or they'd simply all be suspended all the time.

josefresco · 4 years ago
> Huh, never thought I’d see XCheck in a news article.

Is everyone at Facebook this naive? You didn't think a system that creates a secret tier of VIP accounts where the rules (and laws) don't apply while publicly claiming the opposite would end up ... in the news?!?!

anigbrowl · 4 years ago
One of the nice things about transparency is not having to engage in performative theatrics at a later date.
schwank · 4 years ago
This system also made it impossible for me to ever log in again. It had been a few years since I used FB but some friends tagged me at an event, so I figured what the heck.

I was presented with a system I had never configured, which asked me to contact people I don't know to get them to vouch for me. At the same time my FB profile was blackholed, and my wife and long time actual friends can't even see that I exist anymore. Just some person that astroturfed my name with no content (I have a globally unique name).

So I no longer exist from FB's perspective, which made both my decision not to use FB and my decision never to use any FB products like Oculus much easier.

cheath · 4 years ago
One of my favorite things about HN is seeing people come out of the woodwork to raise their hand and say that they worked on a system and give their insight. Thanks for sharing this perspective.
sub7 · 4 years ago
"I was only following orders"
rwmj · 4 years ago
So when Mark Zuckerberg buys a new phone or whatever, what happens?
908B64B197 · 4 years ago
My bet is he emails the auth team and gives as much info as he can about the device. Then they create auth rules as if a regular login had happened.

I also assume if he's on the internal network it might be easier to just manually allow auth attempts from a few internal IPs for a few seconds.

tlear · 4 years ago
You are surprised that after blatantly lying for years the truth came out?

I mean unless you are the 5th directorate of KGB.. but even then shit like this always comes out.

iammisc · 4 years ago
That's all well and great, but your comment as an insider directly implicates Facebook's CEO in perjury for lying to Congress. During a hearing he claimed all users were treated equally. This is clearly not the case.

Perjury before Congress can result in jail time and I hope he's made an example of.

annadane · 4 years ago
The problem seems to be though, that while the company may have tools to detect abuse, if they're choosing selectively when to enforce things it defeats the entire point

Edit: downvotes from shills

nindalf · 4 years ago
That wasn’t my experience over several years. Whenever we found a new vector of abuse or detected systems misfiring we would take it seriously.
jmwilson · 4 years ago
Strong opsec that the supporting documents are actual photos of a computer screen, judging from the visible moiré (or were somehow altered to look that way).

After numerous leaks, Facebook's internal security team became very good at identifying leakers. The person responsible for this 2016 post was identified within hours and terminated the next day: https://www.buzzfeednews.com/article/blakemontgomery/mark-zu.... The leaker was easily identified by the names of friends liking the post (that and part of their name was visible).

Facebook-issued laptops are filled with spyware, monitoring everything down to the system call level, and practically every access to internal systems is logged at a fine level. The only way to exfiltrate data with plausible deniability would be to photograph the screen with an individually owned device. The fact that you searched for the internal wiki page and viewed it is nothing, but that you shortly afterwards invoked the keyboard shortcut for a screen capture, then inserted a USB drive, and copied a file ("Screen shot ____.png" even!) to it (all logged) ... congratulations, you're caught.
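A minimal sketch of how that kind of log correlation could work, assuming nothing about Facebook's actual tooling beyond what's described above (the event names and the time window are invented):

```python
# Illustrative sketch: flag likely exfiltration when a screen capture,
# a USB insertion, and a file copy all occur within a short window.
# Event names and the 10-minute window are invented, not real tooling.
from datetime import datetime, timedelta

SUSPICIOUS = ["screen_capture", "usb_insert", "file_copy_to_usb"]

def flag_exfiltration(events, window=timedelta(minutes=10)):
    """events: (timestamp, event_type) tuples, sorted by time.
    Returns True when every suspicious event type occurs within
    one `window` of the others."""
    times = {e: t for t, e in events if e in SUSPICIOUS}
    if len(times) < len(SUSPICIOUS):
        return False  # at least one tell-tale event never happened
    return max(times.values()) - min(times.values()) <= window
```

Any one of those events on its own is routine; it's the co-occurrence within minutes that makes the pattern damning, which is why a reviewed file on an individually owned camera leaves no such trail.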

nickysielicki · 4 years ago
> The leaker was easily identified by the names of friends liking the post (that and part of their name was visible).

Well that, and the fact that his face was immediately to the left of his name.

msteffen · 4 years ago
It seems like no one has figured out a good system for moderation on the internet.

IIUC, Facebook hired contractors to do it, then realized that that didn't work and created XCheck to cover the visible cases, and is now in trouble because XCheck also doesn't work and rubber-stamps everything. Even before this there were news stories about the horribleness of those contract moderator jobs. Reddit tried to federate moderation, but it's since become clear that all top subreddits are moderated by the same people. Even HN only works because dang busts ass to keep it good, and that has obvious limits (what happens when dang goes on vacation or retires?)

solveit · 4 years ago
The only systems that have figured out moderation at scale are Wikipedia and StackExchange. But see what HN thinks about that.

Nobody wants to admit that the only type of moderation that actually works at scale is an entrenched group of somewhat-expert overly-attached users gatekeeping contributions with (what looks like to the novice and sometimes even to the established user) extreme prejudice on a website with intentionally highly limited scope.

anshumankmr · 4 years ago
StackExchange’s moderators have a huge bias issue against newcomers to the field (some would say it is justified) and sometimes (though I have only personally noticed this) there is a huge bias against those who can’t speak English well. I have noticed at times that people with high rep make rude remarks because they misunderstand what the original author had to say.

For me, I take time to edit questions with poor grammar and help people solve their problems from time to time.

Akronymus · 4 years ago
I don't think that the wiki moderation is good at all for anything where opinions really matter. (Politics, certain movements and such)
alanlammiman · 4 years ago
If the moderation is extremely prejudiced, then it doesn't work.

Though I don't disagree that lots of well-intentioned humans could be a path to moderation that works.

908B64B197 · 4 years ago
> The only systems that have figured out moderation at scale are Wikipedia and StackExchange.

Some edit wars on Wikipedia suggest the opposite.

h0nd · 4 years ago
Wikipedia is horrible regarding moderation. Trust is mostly gone.
vorpalhex · 4 years ago
I don't think universal moderation (a moderation standard across all users) is possible or even desirable.

Different users want different things. There are users who never want a single even mildly insulting word. There are users who want unlimited freedom.

The best you can do is to break down moderation and let people opt into a level and form of moderation. Tell them upfront what they are getting and let them pick (or let them make their own moderation rules that apply clientside).
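One way to picture that opt-in model, with invented post scores and level names (a sketch of the idea, not any real platform's API):

```python
# Hypothetical sketch of opt-in moderation: every post carries a machine
# toxicity score, and each user picks a threshold applied client-side.
# The scores, levels, and thresholds below are all invented.

POSTS = [
    {"text": "friendly hello", "toxicity": 0.0},
    {"text": "mildly insulting word", "toxicity": 0.4},
    {"text": "outright harassment", "toxicity": 0.9},
]

MODERATION_LEVELS = {"strict": 0.2, "default": 0.6, "unfiltered": 1.0}

def visible_posts(posts, level):
    """Return only the posts at or below the user's chosen threshold."""
    threshold = MODERATION_LEVELS[level]
    return [p for p in posts if p["toxicity"] <= threshold]
```

The user who never wants a mildly insulting word picks "strict"; the user who wants unlimited freedom picks "unfiltered"; the platform ships one scored feed and lets the client do the filtering.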

10000truths · 4 years ago
The common denominator in platforms going to shit is scale.

Most social media platforms get their initial users by targeting a specific niche or demographic. Forums of olde typically revolved around some specific subject matter (e.g. a particular game, or band, or subculture). Facebook targeted college students. Reddit targeted techies. But once the platform reaches some critical threshold of popularity, it strays from its vision to realize some commercial potential. The admins and moderators, in the interest of growth, try to appeal to a lowest common denominator, which ends up alienating the now-veterans, and the original purpose of the platform is diluted into obscurity.

idrios · 4 years ago
Ironically, HN's great moderation caused it to become very popular, which has made the task of moderating it all much more difficult, which is having a noticeable effect on discussions and which articles make it to the front page.
nradov · 4 years ago
HN only works because YC doesn't sell ads and explicitly treats it as a loss leader to support their investing business.
asdff · 4 years ago
I wouldn't be surprised if they turn a profit on YC running sentiment analysis. The website doesn't seem like the most expensive to host, either.
ErikVandeWater · 4 years ago
Reddit could change their TOS tomorrow to prevent users from moderating more than 2 subreddits if they wanted; others would take their place. But the mods of subreddits that have not been banned are advertiser friendly.
bambax · 4 years ago
I think part of the problem of "moderation" is exposure, and incentives to maximize user engagement. Posts that nobody sees don't need to be moderated. The problem comes from the fact that platforms offer the most visibility to the worst content, because getting users riled up, excited or upset is the core of their business. It's their only business.

Maybe moderation could be solved by regulating the number of likes or reposts a given user can make or a given post can receive. Seems a little far-fetched but worth thinking about.

maccolgan · 4 years ago
I fear the day when dang retires.
stjohnswarts · 4 years ago
I would vote that whenever dang is on vacation, HN shuts down. Everyone could probably use an HN vacation occasionally.
TeeMassive · 4 years ago
> It seems like no one has figured out a good system for moderation on the internet.

I use locals.com: lots of small, disjointed communities where posters have to pay (or not) a small fee per month ($1 to $5), which keeps the trolls and influence campaigners away.

literallyaduck · 4 years ago
"All animals are equal, but some animals are more equal than others" Animal Farm - George Orwell
exikyut · 4 years ago
Hmmm. That got me thinking a bit.

So there's (for want of a more precisely nuanced way to put it) the default/knee-jerk outrage response to this, aka "how dare there be people above me in society that get the final word on what I can/cannot do without lawful recourse" etc etc.

But then... "more equal than others", taken in strict isolation, kind of goes off on an interesting tangent about how certain personality types are intrinsically socially compatible with each other; less jarring/grating, and more... resonant.

Stream-of-consciousness question: at a fundamental level, is there an exact point that this resonance, which is arguably benign at face value, can end up enabling social ${in}equality?

Not quite talking about individual scenarios of powerful person A taking an effectively entropically arbitrary liking to random person B and elevating them, ideally without [requirement for] compromise.

I mean more in the generalized sense, looking more from the perspective of network/emergent effects.

sdrawkcabmai · 4 years ago
Facebook is the most explicitly duplicitous sociopathic company in the tech sector. Many companies are sociopathic, especially as you get into pure finance companies like PE firms, but few are as duplicitous as Facebook.
actually_a_dog · 4 years ago
That's why they're on my "wouldn't work for them if they were the last tech company in the entire world" list. That list isn't long, but FB is near the top of it.
KaiserPro · 4 years ago
This should have been obvious during the election when Trump clearly violated the "don't mislead the public about how elections work" rule with his claims about postal votes.

That is a clear ban. It says so in the "community guidelines"

(side note, you should really read the community guidelines, they are a great set of rules for keeping a community vibrant and happy, assuming they are enforced....)

I can see why facebook did it, you don't want to obviously piss off a capricious party with the power to fuck with your bottom line. It doesn't make it any better.

disgruntledphd2 · 4 years ago
Trump should have been banned after his Mexican speech, in 2016.

Not sure how you can ban a candidate for the Presidency of the US from a US-based service, though.

donatj · 4 years ago
I don't understand the way their enforcement works. I've reported videos of people literally setting live animals on fire and been told there was no violation, but my wife called someone a "loser" and got a week long ban.
mtnGoat · 4 years ago
I once made fun of Justin Bieber (said he acts like a baby) on IG, and got a warning. Some guy threatened to hunt me and my family, kill us and do bad things to our bodies, and IG said it didn't violate any rules when I reported it. My account can now not even post the word "chump" without warnings. Talk about backwards.

It’s very safe to say there is no adult in the room at FB/IG when it comes to rule enforcement. I simply cannot wait until they get the whip from some governments.

gordon_freeman · 4 years ago
I don't want to do victim blaming or shaming here but why would you use any fb product knowing well about their awful business practices like these?
vernie · 4 years ago
And Bieber himself said "I was like baby, baby, baby oh", so it's not like you were saying anything controversial.
josefresco · 4 years ago
My wife (an American) also got flagged for saying "Americans are selfish". She then made a post about our RV (camper) asking about sewer "hook ups" at a campground and was flagged for posting what looked like a sex ad.

We (the kids and I) now lovingly call her "hate speech Mom".

Sunspark · 4 years ago
I got warned for hate speech on FB for saying in a comment that Americans have the memory of a goldfish. I appealed it, the appeal was declined and my hate speech warning remains on my permanent FB record as being against community standards.

Pretty comical, considering it was accurate in context, and while you'd think American 1st amendment free speech rights would count, they don't, because FB is the private property of Zuckerberg.

No need for someone to point out that it's a publicly traded company. Zuckerberg controls 57.9% of the voting shares of FB. It is his personal property that he allows others to have an inconsequential piece of and everything that is wrong on the platform is because of him.

adolph · 4 years ago
> asking about sewer "hook ups" at a campground

No lie, that is dirty talk.

bambax · 4 years ago
> We (the kids and I) now lovingly call her "hate speech Mom".

Hook-up mom would be better.

shadowgovt · 4 years ago
Especially after Jan 6, there are a couple of things you can say in an ordinary spirited political debate that will cop you a ban on FB. One is several flavors of "Americans are X," another is variants on "Kill the filibuster" (which I assume is pattern-matching to '[violence-word] the [congress-word]', which they probably up-sampled in the threat modeling for, uh, obvious reasons).
q1w2 · 4 years ago
"hate speech" has become so watered down.
radu_floricica · 4 years ago
This is actually discouraging non-brigades from reporting. I reported obviously spam accounts and got the same feedback after a few weeks. Now I don't bother.

Brigades on the other hand have the motivation to play the numbers.

tonfreed · 4 years ago
Same, I've reported a ton of death threats only to be told they're not in violation. Only for my mum to cop an autoban for calling someone a spring chicken.

Their moderation is a complete joke.

tjpnz · 4 years ago
This is also the same company that allowed a terrorist to livestream a killing spree for 17 minutes despite it being reported over and over again. To add insult to injury they allowed copies of the same footage to proliferate across their platform for weeks.

Facebook spends a lot on PR talking up their AI capabilities and how it's being applied towards moderation. Would be nice if it actually worked.

MattGaiser · 4 years ago
I suspect it is automated. A computer can easily flag calling someone a loser. Not sure if FB has burning animals as an automated flag yet.
donatj · 4 years ago
I had the option to have the post re-reviewed, which took two days. I mean it could just be theatre, but I assumed on the second round a human reviewed it.

From the support response:

> The post was reviewed, and though it doesn't go against one of our specific Community Standards, you did the right thing by letting us know about it.

Setting squirrels on fire and watching the poor things scurry around I guess is cool with Facebook's Community Standards.

chefandy · 4 years ago
Some of the actions are automated based on some NN algorithm score, and then the appeals are human-powered. They have large third party content review offices that are operated like call centers in which humans review these things. I understand they're real meat grinders to work in.

I've reported clearly racist, harassing content before and had the reviewer report it as conforming to their standards. I know people who were banned for bullying for wishing people happy birthday. As much as I suspected a bunch of people are just quickly mashing random buttons to pump up their score, I read that they're evaluated based on the success and failure of appeals to their judgements, so I can't imagine they would be. There are clearly deep-seated problems with this process.

datavirtue · 4 years ago
Why can't users moderate the post so that Facebook does know about the animal torture? -5 Animal Torture
smcl · 4 years ago
Yeah I gave up reporting. I’ve reported some people being extremely racist in comments, no action in either case. It’s either moderated by racist people, some poor AI or “rand()%2==0”

KaiserPro · 4 years ago
Simple: if you are important, then you get a moderator.

If you are a pleb, it's up to the AI. So unless that video has been fingerprinted, it'll be approved.

If the sentiment-analysis AI says you were being abusive: ban. If you appeal (if you can), it might, perhaps 1/100,000 times, be looked at by a human.
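The two-track flow described here can be sketched roughly as follows. Everything below is invented for illustration (the routing rules, score threshold, and review odds are guesses, not Facebook's actual logic):

```python
import random

# "1/100,000" odds that an appeal reaches a human -- per the comment above.
HUMAN_REVIEW_ODDS = 1 / 100_000

def moderate(post, is_important, fingerprint_db, abusive_score):
    """Hypothetical routing: VIPs go to humans, everyone else to automation."""
    if is_important:
        return "human_moderator_queue"
    if post in fingerprint_db:       # content matches a known-bad fingerprint
        return "removed"
    if abusive_score > 0.9:          # sentiment model flags the post as abusive
        return "banned"
    return "approved"                # un-fingerprinted content sails through

def appeal(rng=random.random):
    """Appeals are only rarely sampled for human review."""
    return "human_review" if rng() < HUMAN_REVIEW_ODDS else "auto_upheld"
```

The point of the sketch: for ordinary accounts the happy path never involves a person, which is why novel abusive content (not yet fingerprinted) gets approved while borderline speech gets auto-banned.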


himinlomax · 4 years ago
I got a 48h ban for calling the Japanese military "the japs" in the context of the Rape of Nanking. Wouldn't want to offend the group that raped and murdered millions now, would we?

A friend got a 48h ban for calling herself a "rital," a term for an Italian immigrant in France that used to be derogatory a century ago.

mox1 · 4 years ago
Is it really that bad that they apply slightly different sets of rules to accounts with more notoriety?

For example, do we (as Facebook consumers) want newly created accounts with @hotmail emails treated the same as a new account with @doj.gov, or the same as a celebrity with a million followers?

Do we want the same set of rules for a suspected Russian troll account to be applied to a major politician? (well..some here might, but I don't).

I think as your account's age, status and popularity grow, you should be given *some* flexibility under the rules. Imagine a points system behind the scenes, where bad things get you points and other things remove points. At a certain point threshold you are banned, suspended, etc.
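That points system could look something like this. A minimal sketch, assuming invented thresholds and a made-up leniency formula (nothing here reflects a real Facebook mechanism):

```python
SUSPEND_AT = 50
BAN_AT = 100

class Account:
    def __init__(self, age_days, followers):
        self.age_days = age_days
        self.followers = followers
        self.points = 0.0

    def record_violation(self, severity):
        # Established accounts earn some leniency: the same violation
        # adds fewer points, capped at a 50% discount after ~10 years.
        leniency = min(0.5, self.age_days / 3650 * 0.5)
        self.points += severity * (1 - leniency)

    def record_good_standing(self, amount=1):
        # Good behavior slowly removes points.
        self.points = max(0.0, self.points - amount)

    def status(self):
        if self.points >= BAN_AT:
            return "banned"
        if self.points >= SUSPEND_AT:
            return "suspended"
        return "active"

# Same violation, different outcomes:
new_troll = Account(age_days=2, followers=0)
veteran = Account(age_days=3650, followers=10_000)
for acct in (new_troll, veteran):
    acct.record_violation(severity=60)
print(new_troll.status())  # suspended
print(veteran.status())    # active
```

The design choice worth noting is that leniency scales the *increment* rather than the threshold, so a long history of good behavior buys slack gradually instead of granting a blanket exemption, unlike the binary whitelist the article describes.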

rosmax_1337 · 4 years ago
The system is not simply based on notoriety, as some kind of aggregate of follower count or likes, which would be a sane and fair step in the right direction. Rather it operates on a case-by-case basis, where, according to the article, "whitelist status was granted with little record of who had granted it and why, according to the 2019 audit."

It easily ends up being a case of "I'm a moderator at Facebook, and I like this person, so I put them on XCheck". Terrible, of course.

The larger problem at hand is that companies like Facebook are given such gigantic power over discourse and politics because of their gatekeeping. We would often laugh at policies in China that ban people from talking about Tiananmen Square, while seeing more or less the same happen in the West with our own controversial issues.

[sarcasm not directed at you] But it's ok. In the west companies are doing this, and companies are allowed to do business with whoever they want. It is not censorship therefore. [/sarcasm not directed at you]

bmhin · 4 years ago
> The system is not simply based on notoriety, as some kind of aggregate of follower count or likes, which would be a sane and fair step in the right direction. Rather it operates on a case-by-case basis, where, according to the article, "whitelist status was granted with little record of who had granted it and why, according to the 2019 audit."

This was my takeaway as well. I 100% agree rules cannot be applied evenly across every user. A person sharing posts with their 300 "friends" and someone blasting messages at their millions of "followers" are frankly engaging in completely different experiences. The regular person might expect none of their comments to ever get reported, so any report could signal something actually bad. A popular politician, on the other hand, might see every single thing they post reported a ton, every single time.

And rather than applying rules based on, say, reach (which Facebook knows) or any other metric, it seems that they just chucked people into the special-people list and that's that. The article stated there are millions on that list, seemingly a catch-all for the people having the greatest impact. The fact that the list had considerations for potential blowback to FB is even worse. I get that in percentage terms of 2.8 billion users a multimillion-person list is in outlier territory by most measures, but that group is also wildly influential and thus shouldn't be in the "too weird" category.
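A reach-based metric instead of a binary whitelist could be as simple as scaling the report threshold with audience size. A hypothetical sketch (the log scaling and constants are my own invention, purely to illustrate the idea):

```python
import math

def reports_needed_for_review(reach):
    """How many reports should trigger human review, given a post's reach.

    Log scaling means the threshold grows slowly with audience size:
    a handful of reports on a 300-friend post carries roughly the same
    weight as a larger pile on a million-follower post, without any
    account ever being exempt.
    """
    return max(1, round(3 * math.log10(max(reach, 10))))

print(reports_needed_for_review(300))        # small account: ~7 reports
print(reports_needed_for_review(1_000_000))  # huge account: ~18 reports
```

The key property: the threshold is a continuous function of a measurable quantity, so there is no hand-curated list and nothing for a sympathetic moderator to quietly grant.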

I'm not even opposed to a general whitelist; some people (like a President of the US) truly are going to be really weird to apply any broader ruleset to. But a free-for-all, catch-all bucket for anyone of "notoriety" is really bad. It should be a very special remedy that is not granted lightly. The article made it seem like the policy for this particular remedy was non-existent.

Part of me thinks the solution is just to cap it. If the central conceit is "connecting people", then no person realistically knows more than, say, 10,000 people and shouldn't need the microphone scaled to global proportions. That'd never happen, but it seems like a root answer.