I'm having a hard time reconciling all this right now. On the one hand, from the outside, I can see the actions that Facebook takes and they seem awfully guilty of what they are accused of. But on the other hand, I personally know and have previously worked with some of the people who work on trust and safety, specifically for kids. Good people who have kids of their own and who care about protecting people, especially children.
The best I can come up with is that Facebook is so big that the "evil" is an emergent property of all the different things that are happening. It's so big no one can comprehend the big picture of it all, so while the individuals involved have good intentions with what they are working on, the sum total of all employees' intentions ends up broken.
So maybe Zuck is telling the truth here, that they are trying to fix all this. But no one can see the forest for the trees.
The person who mentions the banality of evil, dannykwells, has an excellent point.
But there's more at play here. I briefly worked on Twitter's anti-abuse engineering. Many of the people on that team cared a lot about protecting people. I sure did. But we didn't have the necessary power to actually solve the problem.
The people who did have that power were senior execs. They might say that they cared. In their heart of hearts, perhaps they even did. But their behavior demonstrated that they cared about other things much more.
My boss's boss, for example, was an engineering leader who had a climber's resume: quickly advancing through positions of more and more power. In my view, he cared about that a great deal, and did not give a shit about the actual harm to users. As soon as he got the chance, he pushed out my boss, laid off the team's managers, me included, and scattered the people to the wind.
I presume the same was true about the senior execs. They were aware Twitter was causing harm to people. If they wanted to know the details, we had plenty of research and they could have ordered more. Did they care? Impossible to know. But what they focused on was growth and revenue. Abuse was a big deal internally only as long as it was a big deal in the press.
I think this hits the nail on the head. It's not that Facebook or the many people who work there don't care about kids or a deleterious political climate. They do care. It's just about what happens when those concerns conflict with other concerns, such as maximizing user engagement. In my opinion Haugen's testimony and Zuckerberg's response simply confirm this: Haugen talks a lot about the research that was done and how that research was ignored; Zuckerberg points out a lot of (somewhat lacking in context) facts about the size of Facebook's investments in trust and integrity or openness to regulation.
I don't necessarily subscribe to the Gervais Principle[1] other than thinking it's an interesting lens through which to reexamine motives and motivations of coworkers, but sometimes the terminology is damn apt (at least for one group...).
Bingo. I never interacted with a person on the FB integrity teams who didn't care deeply about these problems - but their solutions never seemed to make it into production. Whether that was because of the unintentional friction of bureaucracy, or the explicit wishes of execs, is somewhat immaterial in the final analysis.
This is a place where a founder-CEO-demigod like Zuck should be able to make better decisions than a professional management team like Twitter's. The long-term profit-maximization strategy was to maximize profits only up to the point where you risk getting regulated by the government. With all the fawning praise of him as a kid, I don’t think Zuck envisioned that one day both Democrats and Republicans would be united in their desire to fuck him over.
> I presume the same was true about the senior execs. They were aware Twitter was causing harm to people. If they wanted to know the details, we had plenty of research and they could have ordered more. Did they care? Impossible to know. But what they focused on was growth and revenue. Abuse was a big deal internally only as long as it was a big deal in the press.
Could this just be an issue of too many problems to care about and not enough time to solve them all or do you think the indifference was intentional?
There is a phenomenon I have witnessed working both in high-growth startups and traditional Fortune 500s. At some point, the company starts attracting Dark Triad personality types who cement themselves in upper management positions, usually starting at the Director level. These people are extremely dangerous; one of them had access to my corporate laptop (as was standard policy at that company) and would torment me with it on a daily basis.
When an organization becomes too large or bureaucratic, these Dark Triad types hide and typically exert their influence, power, and will behind the scenes. This is why these companies seem “evil,” but it’s usually not the founders’ fault; a lot of the time they’re unaware of it, or one of the founders is also a sociopath and will protect the evil cabal. That’s my two cents about it, anyway.
> they might say that they cared. In their heart of hearts, perhaps they even did. But their behavior demonstrated that they cared about other things much more.
Isn't it just about that in the end? I think being good or not is about whether you give yourself the room to do the right thing even when other pressures exist – because they always exist.
Being good can be hard, because sometimes it means you have to abandon your usual priorities and stand up to the consequences which will emerge from that decision.
Just curious - given that your boss's boss is so self-interested, what advantage could he gain from pushing out a subordinate and laying off all the people below him?
Whenever I am having a hard time understanding a situation or someone's motives in the world of business/politics, I start with "follow the money," and it helps. It might sound cliché, but it holds true in the majority of cases.
This is exactly the point Frances Haugen is making, and it's why this is so different and so much more significant than the other Facebook scandals and leaks in the past.
Haugen repeated over and over again in her testimony today that Facebook is full of smart, thoughtful, kind, well-intentioned people, and that she has great empathy for them, even Mark Zuckerberg. Her point is that they have created a system of incentives that is inexorably leading to harmful outcomes. It is not about good and evil people, it is about the incentives. It's exactly as you are saying.
That's why she is not advocating to punish Facebook for being evil, but rather to force Facebook to reveal and permit research so we can understand the system and fix it, because Facebook is too deeply trapped in its own tangle of incentives to fix itself. In this I think she is absolutely correct.
"Facebook has created a system of incentives that are inexorably leading to harmful outcomes" Exactly right. The solution baffles me. "Force Facebook to reveal and permit research so we understand the system and fix it" Basically keep the harmful system in place, but pass the reins to an unspecified cabal hiding under the innocuous word "we". Hard pass.
It may be that Facebook can't fix itself, but what makes anyone think an even larger and more powerful organization is the answer and won't itself succumb to its own system of incentives? She is pushing for the equivalent of the Ministry of Truth.
Remember, this is the system of incentives that had us spend 20 disastrous years in Afghanistan, across both parties. And has failed to deal with climate change. And healthcare. And education. And wealth inequality. And housing. And... Siri, what's the definition of insane?
By the way, let's give that system a name: it's called "PSC". Google it. It's the most absurd and ineffective performance-management system I've ever witnessed.
It creates a Hunger Games mentality within teams and makes doing anything that actually matters virtually impossible, generating an infinite sequence of half-assed six-month projects that get systematically abandoned as soon as the people responsible manage to get promoted or switch teams.
> they have created a system of incentives that are inexorably leading to harmful outcomes
If the people inside are "smart, thoughtful, kind, well-intentioned people", they would have tried to work around the incentives, influence them, denounce them, or quit.
That rarely happened. Most of the time, they just take the money and go with the flow.
How is giving access to user data for "research" better than the whole data-privacy scandal with Cambridge Analytica?
These days research comes with a set of politically charged assumptions; for example, the definitions of "hate speech" and "misinformation" differ depending on which political camp you ask.
So giving access to Cambridge Analytica is bad, but giving it to some other partisan "think tank" is fine? Who would make those decisions?
I don't think it's an emergent property, I think it's a by-product of the constraints. It's all well and good that they want to make Facebook safe and healthy, and I honestly believe plenty of people working there are trying to do just that. However, they are operating under the constraint that they cannot move backwards on profits, and therefore engagement.
Imagine if you were trying to fix climate change, but under the condition that you weren't allowed to burn fewer fossil fuels. You may try very hard, and very sincerely, but it's a fool's errand.
> I don't think it's an emergent property, I think it's a by-product of the constraints.
> Imagine if you were trying to fix climate change, but under the condition that you weren't allowed to burn fewer fossil fuels.
There is one person who controls all the constraints: Zuckerberg. He even went so far as to enforce that through his stock classifications. It’s entirely understandable and acceptable to have empathy for those working at FB who are attempting to solve the problems. But Zuckerberg made the decision to be the single source of the constraints that bind everyone below. And his constraint is: profit over all else. He should face consequences for setting those constraints, just as anyone should who sets a constraint of “address climate change without adversely affecting GDP”.
Separately: as the “revelations” of Zuckerberg’s immoral behavior continue year after year, those who work for him while attempting to solve the problems should recognize, sooner or later, that the problems are insurmountable within the confines of those constraints. As that knowledge spreads, the question becomes whether those idealistically earnest individuals remain justifiably ignorant of the reality: that all their best intentions are moot in the face of the constraints Zuckerberg determined. And once they are no longer justifiably ignorant, they become culpable.
> Imagine if you were trying to fix climate change, but under the condition that you weren't allowed to burn fewer fossil fuels. You may try very hard, and very sincerely, but it's a fool's errand.
A rule more analogous to Facebook's presumed position would be, "you can fix climate change, but you can't do anything that would reduce GDP per capita". Which in practice means that while some useful tools would be on the table, others would definitely not be.
I think a more apt analogy has advertising as Facebook's fossil-fuel burning; but then, I expect severely curtailing fossil-fuel use would severely reduce GDP, which I am guessing is not a common belief around here. (I'm guessing that many around here think it is essentially just the stubbornness of those in power that keeps fossil-fuel use high, and that even if we force the whole world economy to transition to 100% renewables over, e.g., the next 5 years, things will turn out fine.)
> So maybe Zuck is telling the truth here, that they are trying to fix all this. But no one can see the forest for the trees.
Ah, this is what I think of as Schrödinger's Accountability. Zuckerberg and Facebook's senior execs are simultaneously: A) so brilliant for running Facebook that they deserve to be incredibly rich, and B) so normal that they can't possibly be expected to understand the consequences of their actions, and so are morally blameless. Heads they win, tails we lose.
I say it's one or the other. If Facebook is too big to be understood, it should be broken up into small enough units that mere mortals can see the forest and tend it responsibly. And if not, the execs should be morally and legally culpable for the harm it does.
You may be missing the point if you think your point is orthogonal to theirs. Mark Zuckerberg doesn't have to be painted as a reptiloid for his actions to be bad, or for those actions to cause harm. More than assigning blame, shit needs to get fixed, right? We can still hold people culpable, but we don't need to do that first; we don't need to indict anyone before trying to fix a problem that is self-perpetuating due to individual incentives and a complete lack of oversight.
What does breaking up a company actually do for the consumer? I don't think telecoms are any better for the consumer decades after we broke up Bell. There is a strong incentive to just form a cartel, like telecoms today, rather than a competitive environment that benefits the consumer.
'Deserve to be rich' is the wrong frame. What is a sensible procedure to decide who deserves to be rich and who doesn't? The say so of powerful politicians? 'Raised to the top via a combination of skill, luck and shrewdness' is more accurate. The fundamental problem is that the world is governed by power laws. As the size of the ecosystem grows (hello globalization) at some point it becomes obvious that no humans can effectively control the largest of the emergent entities. We need to break up Facebook, we need to break up the Internet, we need to break up the global economic system. We need to add friction back into the world. A lot of friction.
> I personally know and have previously worked with some of the people who work on trust and safety, specifically for kids. Good people who have kids of their own and who care about protecting people, especially children.
Those same people are protecting their children with $300k+ salaries, buying property in areas where they can send their children to Gunn HS. While I empathize with these people, the direct opportunity to protect your own kin should not be underestimated. Do they mean well? Sure. Are they putting their best effort into fixing things? Sure.
Here's the most important part:
Do they know deep down inside that the only way to fix these things is to hurt Facebook financially? Probably. But they also know this means risking their ability to protect their own children (forced to move, lose the job, less pay, etc.). What would you do? (I think I know the answer.)
This can't be overstated - in the end it doesn't matter what individual people at FB think, because no one person or group of people has any legal, economic, or logistical ability to control the company except Mark Zuckerberg. He is, practically speaking, impossible to fight. Well, unless everyone deleted their accounts.
> Do they know deep down inside that the only way to fix these things is to hurt Facebook financially? Probably.
The crazy thing is that FB has taken steps to improve things in the past that also hurt them financially (e.g., post-Cambridge Analytica). They just make so much money, and so fast, that it's like one or two bad quarters and it's over.
So (1) Mark being all-powerful means he alone can decide it's worth lower profits - he's done it before.
(2) The loss of profits probably wouldn't even matter.
I've been framing this whole thing as a universal property of human society and it seems to fit pretty well for me.
Outrage attracts attention in all group interactions. I can't think of a single large scale group forum where this isn't true. It's integral to an absurd degree in our news cycle. Howard Stern exploited this property in his rise to fame. It's a core element in state propaganda, well documented throughout human history.
I'm old enough to remember when the internet was a lot more free - when there generally wasn't some parent corporation imposing content censorship on what you put on your homepage, or what you said on IRC. All of the complaints regarding Facebook were true of internet communications back then too (on the "sex trafficking" issue, compare to Craigslist of yore!)
The big difference seems to be there's an entity we can point a finger at now. Communications on Facebook aren't worse than what was on the internet two decades ago. In fact, they're far, far more clean and controlled.
What I look to is whether Facebook is more objectionable than alternative forms of communication, and I can't find any reason to believe that this is the case. Is twitter better? Is reddit? Is usenet? No.
So why does Facebook draw such ire?
Are people calling for controls on Facebook also calling for controls on self-published websites? On open communication systems like IRC or email? Where is the coherent moral philosophy regarding internet speech?
To be honest, my biggest concern when I read the news surrounding this issue is that most of the internet might not be old enough to remember what it means to have a truly free platform, unencumbered by moralizing. Why are people begging for more controls?
I think a lot of folks forget that Facebook wanted to come in and clean up some of the filth in social media. They felt that by attaching your _real_ name to your posts, instead of a handle as was the traditional practice, you would have something to lose (social standing, esteem, etc.) and so would be more thoughtful about your actions. The contrasts at the time were reddit, SomethingAwful, and 4chan. There was _definitely_ extant toxicity on the internet, and there were jokey claims in the early days of Gmail that you could stop it from displaying ads by inserting lots of expletives and bad words in your email (and so some people had Gmail signatures that just lumped bad words together, explained as an ad-circumvention thing).
But I think there are a few key innovations that make FB worse for human psychology than previous iterations. Chief among them is the algorithmic newsfeed designed to drive engagement. Outrage certainly provokes responses, but in a chronological feed situation, eventually threads would become so large that the original outrageous situation would be pushed far back and the outrage would go away. Algorithmic newsfeeds bubble these to the top and continue to show them as they get more comments/retweets/shares/etc. They reward engagement in a visceral way that offers perverse incentives.
Secondly is the filter bubble. By showing you content hyper-relevant to your search interests, you can easily fall into echo chambers of outrage and extremism. Internet communities, like IRC channels, had huge discoverability issues. Each community also usually had disparate ways to join them adding another layer of friction. Even if you were an extremist it took dedicated searching to find a community that would tolerate your extremism. Now mainstream platforms will lump you into filter bubbles with other people that are willing to engage and amplify your extremist posts.
Combine horribly narrow echo chambers with engagement-oriented metrics and you'll have a simple tool for radicalization. That way when you're thinking of committing a violent act because of the disenfranchisement you feel in your life and your community, you'll be funneled to communicate with others who feel similarly and enter a game of broad brinkmanship that can quickly drive a group to the extreme. Balkanization and radicalization.
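The feed dynamic described above can be sketched with a toy model. This is purely illustrative: the post fields and scoring formula below are hypothetical stand-ins, not Facebook's actual ranking algorithm. The point is that a chronological feed lets an outrage thread age out, while an engagement-weighted feed keeps resurfacing it as long as people keep reacting.

```python
from dataclasses import dataclass

@dataclass
class Post:
    text: str
    age_hours: float  # time since posting
    comments: int     # engagement signals
    shares: int

def chronological(posts):
    # Newest first; an outrage thread ages out like anything else.
    return sorted(posts, key=lambda p: p.age_hours)

def engagement_ranked(posts):
    # Hypothetical engagement score: each new comment/share pushes a post
    # back up the feed, so high-outrage posts keep resurfacing.
    score = lambda p: (p.comments + 2 * p.shares) / (1 + p.age_hours)
    return sorted(posts, key=score, reverse=True)

posts = [
    Post("vacation photos", age_hours=1, comments=3, shares=0),
    Post("outrage bait", age_hours=20, comments=400, shares=150),
    Post("local news", age_hours=5, comments=10, shares=2),
]

print([p.text for p in chronological(posts)])      # outrage bait is last
print([p.text for p in engagement_ranked(posts)])  # outrage bait is first
```

Even 20 hours in, the heavily-engaged post dominates the ranked feed, which is the perverse incentive the comment above is pointing at.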
We used to solve this problem by teaching people to have thicker skin, so that we controlled the outrage regardless of the forum in which it occurred.
However, for the last 10 years or so, grievance culture has taken root, and its proponents have not only excused outrage but actively encouraged it.
It makes me think of that scene in Star Wars where Palpatine says "good, good, let the hate flow through you," except we now have millions of people encouraging this.
How I wish we could rewind to a world where forgiveness was still a virtue and we were all taught that sticks and stones may break our bones but words will never hurt us. Without such virtues, a world of outrage is inevitable.
No one is calling for the internet to be less free, or have more constraints. They're calling for specific platforms to alter their interactions model to discourage toxic group behaviors at scale.
Seems like you, more than anyone, would see that the types of problems FB is trying to solve (e.g., freedom of speech vs. user safety and harm reduction) are not simple, no? I wouldn't call Reddit evil, despite the fact that many powermods are amazing contributors doing free labor and curating great communities while also abusing their power every day: silencing people they disagree with, shaping narratives in human culture, automating blanket unappealable bans on users for participating in unrelated subreddits (even if you were participating in that subreddit to combat its views), making snap judgments on content moderation that might ruin someone's day when they make a bad call on a ban or delete, or unilaterally appointing themselves mouthpieces for their broader communities via subreddit blackouts or preachy pinned posts.
It's unfortunate that when you build a product so close to the ground of human communication and human nature you're never going to be able to get everything right, and you're no longer solving technology problems alone but trying to basically combat basic human moral failing itself. We don't ask that of the telephone company.
^ That being said, we can only excuse some of their failures with the above line of thinking. Others we can blame on greed or recklessness, or on ignoring the social costs of something like ML recommenders optimizing for engagement. I'm not sure those things deserve to be called evil; personally I'd still hold back. Misguided, overcome by greed, or reckless, perhaps.
Point of order: the issue with Facebook is the various engagement algorithms that they have been perfecting for years. This is unlike anything humans have ever seen before. We are no longer anywhere near 'the ground'.
As others have said, my experience with Facebook just doesn't mirror the anger and hatred that other people are seeing. My Facebook stream is just everyday things from friends I have made around the world. It is very useful for me to maintain a bit of contact with people I esteem so much but with whom I've lost touch through the years.
The "angry facebook" experience to me seems like the moms against heavy metal / twisted sister case: People are seeing a reflection of what their peers share.
If their circles are angry and share disinformation, that's what they will see.
I've also never had an issue with Facebook. I've been online through usenet/IRC, AIM, and LiveJournal, and was then forced to join Facebook because everyone at the university was using it for class correspondence. I share exactly your sentiments: it has allowed me to stay in touch with people I would otherwise have lost touch with over the years. I take advantage of some of the groups for my industry and my hobbies. I use our company's Page to interact with a whole segment of our international customer base that would never think to call our support telephone number or e-mail. It's never been a negative experience for me. Although I only look at it when I get home from work at night, on my desktop computer, and don't ride around all day with the app running in my pocket. I don't know whether that makes a huge difference, though.
What I hear from Zuckerberg over and over is "we're good people and working on it, look at A and B things we're doing" with an implication that that's good enough, so what's everybody up in arms about? That's the core of his tone-deafness to me. If Zuckerberg is fully honest, it means he basically just doesn't have a grip on reality and he isn't fit to lead a corporation this big and impactful. And I tend to believe that, because he's ultimately just a college kid with a laptop who ended up in some circumstances that snowballed.
> ultimately just a college kid with a laptop who ended up in some circumstances that snowballed
When will this “just luck” characterization of Zuck die?
His entire company was certain they should sell for $1B, and most executives resigned when he didn’t. He maneuvered control of the majority of voting shares, how many other founders have done that? Instagram and WhatsApp were genius acquisitions everybody at the time clamored were too overpriced. Even Oculus has turned out to be the leading VR platform. All of the people close to him attest to his extreme intelligence.
Whether malicious or not, Zuck didn’t just “aw shucks I got lucky” his way into being the majority owner of a $1T company, c’mon…
> So maybe Zuck is telling the truth here, that they are trying to fix all this.
Except they are just playing around with the outrage algorithms; the problem is created by Facebook, it is not some natural occurrence. If they wanted to "fix" anything they would make their algorithmic timelines opt-in, or at least optional, for starters.
It is of course very much in the interest of the people working at Facebook to frame this as a problem that is simply there, that is somehow "difficult to solve," that "moderation doesn't scale," etc. These are deflections to make everyone ignore that Facebook's tampering is where it all starts.
This. Their entire premise of modifying their engagement optimization to account for wellbeing, while still optimizing engagement, is flawed. It’s clear that outrage and anger drive engagement above all else. If they want to fix things they can just bring back chronological feeds; but they won’t, because the incentives are too misaligned.
As far as I know, YouTube just recommends based on what you recently watched/searched (you can disable this by clearing or pausing your histories), channels you have subscribed to, (I believe) videos you have liked or commented on, and videos you have marked as “Not interested -> I don’t like this video”.
Is Facebook’s as “viewer driven”? Or does it recommend based on other criteria, e.g., what’s generally popular?
Good people have gone to work at Facebook (and Google) on jobs like privacy engineering, and really tried to do good work.
However, no matter how capable and ethically sound they are, the incentives are forever misaligned with the profit models of both companies, and of adtech overall, as it currently stands. Truly good people can chase money and hope to do good things in the process. It's as easy as that.
The writing was on the wall when Alex Stamos, by all measures the best example of the type of person you're referring to and FB's chief security officer... left. He started in 2015 and was out of there by 2018. Not many C-levels walk away from a job like that for the reasons he did, and when they do, that is the event to pay attention to (looking at you, Sheryl "Lean In" Sandberg). This was the marker event, if people were looking.
If by "established" you mean that it's well known, then yes, you're right. If instead you mean that it's agreed-upon or widely accepted, you'd be wrong. There's a lot of great debate / critique, both about how well the phrase actually applied to Adolf Eichmann himself (Arendt was famously only at the trial for like 5 days), and whether evil in general is ever, in fact, all that banal. Sadly the conversation around "the banality of evil" hasn't received a fraction of the attention that the phrase itself has.
I don't think this idea matches what parent poster is saying.
Banality of evil is about how ordinary people can work on evil things while not being sociopaths, and still be considered ordinary people. But it also presumes that there is some truly evil, sociopathic force driving this through authority, such as Hitler himself in the case of Eichmann.
On the other hand the parent poster is saying that Facebook is simply too big to not end up evil, that evil is an emergent property of the million different processes that is Facebook. That view absolves not only regular workers of Facebook who are helping the company achieve evil things, but also the people who are actually in control of the company – Mark Zuckerberg and his senior executive team.
Personally I'm not buying either of these absolutions, but especially not the grand universal absolution that the parent poster affords to the whole company.
Ultimately it is someone's decision to put profits above everything else. Engagement doesn't excessively optimize itself. Users' contact books aren't getting stolen by themselves. Shadow profiles don't fill up themselves. "Just doing my job" is a choice, not an excuse. Many people are complicit in making and implementing these decisions for their own benefit, and they are all responsible for the outcome.
I don't think that's what the OP means, though. It's not "decent" people doing evil things. It's great people doing great things, within an organization that also does bad things.
There are some amazing people on their safety and moderation teams. They're also fighting marketing algorithms, I'm sure.
Eichmann in Jerusalem is the book that coined the phrase for anyone passing through, and it's a pretty wild story.
It's essentially Arendt, a Jewish exile from Berlin who fled the Holocaust, wrestling with her realization that Eichmann, a senior Nazi bureaucrat who organized major portions of the Holocaust, wasn't a psychopath, but a completely mundane, thoughtless, career-focused functionary who was trying to rise in government, believed in doing what you are told, and then organized one of the most evil acts in human history without reflecting on what he was doing.
It’s like saying that the people working in slaughterhouses are actually kind folk who do like animals and care deeply for their wellbeing. That can be absolutely true, but they still work for a slaughterhouse. Their care doesn’t matter a bit, because the fundamental nature of the organization is that it profits from cruelty. I understand it pays well, and that maybe they are trying to be nice and all, but there’s only so much purity of heart you can claim while still working for the slaughterhouse that is Facebook.
I actually think this analogy is the very opposite of what you may be trying to explain.
A lot of people that work at slaughterhouses do so because they have no other choice. It is the best opportunity that's afforded to them. It is a job that causes trauma for many, often has long, grueling hours, and doesn't pay well.
Working at Facebook couldn't be further from that situation. Never mind the obvious perks (the tech, the white collar work, the gourmet food, I hear there's also a wood shop where you can go do woodworking on your break, the half a million dollar salary, etc etc etc). But the overwhelming majority of these people have the whole world of job opportunities to choose from, if they're willing to take a pay cut from an INSANELY HIGH salary to just a VERY HIGH salary.
So in that sense, they couldn't be further away from working at a slaughterhouse. The fact is, they could quite literally work anywhere else (any other company or any other city/country with remote work now), and they choose not to. It's not desperation but the textbook case of golden handcuffs.
It's very, very difficult to say no to 500k a year. I'm not even sure I could say no if I were in that position. I'd probably tell myself "Just coast for two more years and both my kids won't have to pay for college" or something like that, and keep going.
> So maybe Zuck is telling the truth here, that they are trying to fix all this. But no one can see the forrest from the trees.
Don't fall for words!
Frances Haugen was able to see the big picture. The documents she presented had Facebook employees mentioning it. Facebook didn't act on what was known. It is not that it wasn't known.
To paraphrase John Roberts - the only way not to do a thing is not to do that thing.
It's systems-level thinking applied to people. Reader beware though, once you start down this path, you can become adept at spotting this pattern emerge in many other human systems.
The statement that made it really clear to me was that Facebook has moderators for 50 languages... while supporting 111 different languages [1]. It's wildly irresponsible to offer services in a language you can't moderate in.
And it sure seems an intentional part of their fig-leaf denial strategy -- viz. the recent revelations about human trafficking on FB in Arabic [2], or armed groups in Ethiopia inciting violence on FB in ways that FB chooses not to monitor because of language issues [2].
A company with 21Q2 revenues of $28.5B can't hire moderators for languages spoken in countries with low costs of living... It reflects a thirst for growth with no thought given to the people affected by that growth.
That’s sort of how things are where I work. The systems are so complicated and the interactions are often algorithmic and machine learning based. We try to maintain documentation and architecture artifacts with as much accuracy as possible. But in some cases things may as well be magic because no one really understands the whole process.
The human element is not a variable we define in code. There are things that, by the nature of how they're used, become harmful. Intent does not matter. Good people can intend that their new free anonymous file sharing service will be amazing. Until it's used by bad actors. The concept is good, the intent is good, but in practice it doesn't work that way.
There's also another concept: the reality that people do not actually care as much as we think they do. There's a program every public school in the U.S. has where kids run at each other, at speed, knock each other to the ground with concussions, tear their muscles, break their bones, and behave terribly towards one another. Yet every school still has said program. Parents encourage their kids to join. We just don't care about what's right.
Not all evil looks that way to outside observers, unfortunately. I believe that the assumptions of FB that allowed it to get so big, "optimize engagement above all else", built a system that in many ways is at odds with the values of our society when everyone is a user.
Internally at FB, everything looks good: you hit all your OKRs and believe users are better off. Maybe you don't, but your bonus is huge, so you'll put your head down and keep on. Externally, it's an entirely different picture. Connecting people is a comically small issue society needs FB to solve, relative to our need for them not to harm children, or promote extremism, or hide research when testifying to Congress.
> The best I can come up with is that Facebook is so big that the "evil" is an emergent property of all the different things that are happening
> so while the individuals involved have good intentions with what they are working on, the sum total of all employees' intentions ends up broken.
I think, honestly, that a huge thing is that when you put together basically the entirety of the internet and society into a giant conversational feedback loop, you're bound to spin out the worst, especially if FB wasn't trying 100% of the time to filter it (which they weren't, because it's a business and the problems weren't always equally known).
What I'm saying is that I know people working on these projects, and they are good people who want to make things better. They wouldn't work on these things if they didn't think it made a difference, as they all have plenty of other places they could work.
It's ridiculous to think a platform that most of humanity uses can be controlled to the liking of the left, the right, the upside down, etc., because all those groups make up humanity, and we do not and never will all think exactly the same; we all have different motives and biases.
Misinformation has been around forever... ever play the telephone game in school? You tell one person a story, they tell the next, and the next, and the next, and soon that story is no longer factual. Stir all that in with bias and things get even murkier!
I'm reminded of Terry Pratchett's image of the row of mugs (with cute little sayings) owned by the torturers of Omnia's Quisition.
This is a generally hard problem but it's as significant now as it was in the aftermath of WWII. I'd say it speaks to the reality of human subjectivity, and it never goes away: I can only wonder if the same will be true of AI, and whether it's possible for a thinking being to really internalize the concept of hard limits to their perception, and build that into their model of the world.
You could say the God concept is a way of trying to internalize the limits to perception: 'something is vastly significant and it's not me, and my understanding does not and cannot encompass it'.
With OR without this concept we as humans are exactly as evil as each other. That's the secret. There isn't a qualitative difference between 'us' and history's great monsters. It's about the choices we've made and how we've acted on them: the rest is rationalization, which we are all subject to in one way or another.
Grappling with this is the Nuremberg moment: the question is 'never mind whether you feel you've been good, what have you done?'
The complexity of the system is too great. It’s similar to how the economy runs: there are many well-intentioned, intelligent (even brilliant) people who study and focus on it. The problem is that it is so complex that no one can fully understand all the components. Not to mention the number of people in Facebook and in the economy who are intentional bad actors.
I’m not saying they are blameless I just always have a tough time laying all the blame on a couple people.
I have a friend who started work there in the last three years.
It’s so big and so organized, they can come up with an idea for a new service or policy they want to implement and it takes roughly two weeks to get all the channels to approve and move forward on the idea. Implementation is different, this is just getting all the approvals from legal, finance, marketing, etc..
They are definitely in a position to make changes quickly should they need to.
Sometimes it is not the virtue of people in the organization, it is a function of the structure and incentives of the organization, the "emergent property" that you reference.
Imagine if a company had invented methamphetamine, but the ill effects weren't as readily apparent. Then they built an empire on the belief that the societal benefits of millions of people running around in a seemingly ultra-productive manic state were a godsend to society, and that they had truly changed the world. Then realize that the effects of Facebook are worse than that--it has the opposite effect on productivity, has maybe worse mental health effects, and is nevertheless highly addictive. The reality would never sink in inside that bubble. Worse, the tens of thousands of people whose jobs and wealth depend on tuning said meth to be as addictive as possible are...what? Pawns? Believers? Accomplices? Delusional? Regular people. They are regular people.
It's been said that this psychopathic behaviour is an emergent property of many corporations, and that it emerges from the nature of their very legal structure. In other words, the people may be fine but the outcomes can turn out not to be. See...
"The Corporation attempts to compare the way corporations are systematically compelled to behave with what it claims are the DSM-IV's symptoms of psychopathy, e.g., the callous disregard for the feelings of other people, the incapacity to maintain human relationships, the reckless disregard for the safety of others, the deceitfulness (continual lying to deceive for profit), the incapacity to experience guilt, and the failure to conform to social norms and respect the law"
They're in a tough spot by design. Much of Facebook is private. How can they possibly be transparent enough to satisfy critics about what actions they take? Share too much and another Cambridge Analytica situation pops up. Share too little and researchers decry coverup over lack of access.
The problem with facebook is that it plays with fire every day. Kills innocent people every day. But they have a fire department, so they can't be all bad right?
If you can't do what you do in a way that isn't this harmful to the world, then you always have the choice to just stop it.
These are all smart people. They could be working on anything else and be successful at it. But they are scared of change. The money is too good.
I just wish that the decent people working at Facebook would leave and go work somewhere else.
And that we stop debating whether Zuckerberg is redeemable. He is not. He is a psycho. He is why it escalates this much. He is a monster. Beyond all the lies, he intends the damage he does to the world. Maybe someone bullied him as a child. Maybe he is just not well. I don't know.
It’s easy for me to reconcile after living through COVID. There are people in my own family who have emotionally told me they’d never do anything to hurt their family. Meanwhile throughout the pandemic they have purposefully hidden when they have been sick and spent full days with their elderly family and immune compromised 3 year old, touching food and participating in cooking. There is a big difference between emotional and cognitive empathy.
I also think the people who make the biggest show of how much they care tend to be the same who don’t actually act in a caring way at all.
No, I’m not surprised at all that FB employees say they really care. And that they do so very convincingly.
Couldn't it be possible that the people you know try hard but are limited with what they can do because of policy and decisions that come from above? Stuff like hiding research that looks bad isn't something that a dev or even a manager decides.
Who are these people, to be bold enough to speak for "everyone"? You are definitely not speaking for me. I personally get a lot of value from Facebook. I never had any problem with it in any respect. I use it to communicate with my family around the world. I've used it to rent my apartment and sell things on Marketplace. I keep in touch with people I know. And I have very thoughtful and enlightening political discussions that help me make the right choice about who to vote for and stay informed. (The only other place with better discussions is Hacker News, though.)
There is only one way to fix this, prevent anyone from influencing what is shown more prominently to users. The simplest solution for that would be simply chronological order only from your friends.
"The best I can come up with is that Facebook is so big that the "evil" is an emergent property of all the different things that are happening."
I half agree. I do in fact think it's been baked in from the get-go, just that there was a period when it was not an obvious pillar; you could in fact do all kinds of essentially innocuous things and accept some surveillance capitalism with awareness of your own liabilities.
It's now become so much larger and more problematic that the 'emergent property' is that every move adds weight to the need for the firm's dismemberment into smaller units, or punishing regulatory limits. And I mean truly brutal, snap-noise-making, bone-breaking regulations.
"So maybe Zuck is telling the truth here, that they are trying to fix all this."
Nope. He knows that if he wants to stay on top, he has to keep doing more of what he's done, and that his only other choice is to actually adapt, which he will not do.
Shouldn't it then be possible to account for and correct for the emergent evil? That's the point of government regulation is it not? Maybe then an appropriate, self-critical response from Facebook would be, "Yeah, our system is broken. How can we help?" instead of immediately going on the defensive. If they claim to care about the bigger picture, they need to acknowledge it without excuses.
At this point in the climate disaster those are getting very rare, especially in oil. Maybe the exceptions, if they exist, are those working hardest at stopping most oil production and enacting cap and trade.
If a company desired to be able to sow doubt if its impacts on society ever came under a microscope... one gambit (and an effective one, based on your reaction) would be to hire people who genuinely and passionately research and work on trust and safety, then systematically under-resource their teams and gaslight them into thinking there are fundamental reasons their recommendations must be ignored.
For instance, contrast Zuckerberg's statement here:
> And if social media were as responsible for polarizing society as some people claim, then why are we seeing polarization increase in the US while it stays flat or declines in many countries with just as heavy use of social media around the world?
> The memo is a damning account of Facebook’s failures. It’s the story of Facebook abdicating responsibility for malign activities on its platform that could affect the political fate of nations outside the United States or Western Europe. It's also the story of a junior employee wielding extraordinary moderation powers that affected millions of people without any real institutional support, and the personal torment that followed.
> She soon grew skeptical that her team could make an impact, she said. Her team had few resources, she said, and she felt the company put growth and user engagement ahead of what it knew through its own research about its platforms’ ill effects.
The fact of the matter is that if Zuckerberg were to say "I'm going to pour our profits into trust and safety and abuse avoidance in order to ensure that our position as a trusted brand is sustainable for generations to come," his high levels of voting control and clear defense to any allegations that this was against long-term shareholder interest would fully make that possible. The fact that quite the opposite has happened should be considered with much more weight than his words in a reactive press statement.
There is a German saying: "Der Fisch stinkt vom Kopf her" ("the fish rots from the head"). 90% of all employees may very well have the best intentions, but this doesn't mean anything if the decision makers don't.
A company is not a democracy.
Indeed, we have probably just seen one of the (former) employees with good intentions struggle to stay true to them.
I mean, that's what it might be. This might be the "banality of evil", an emergent property of social networks themselves. If this is the case then we have a harder question ahead of us as an entire world: how do we fix the problems of pandora's box?
So if it's a form of a modified Hanlon's razor, "never attribute to malice what can be explained by lack of capability", particularly because they are too big, then it sounds like the answer is to break them up so they aren't too big. Is that the solution?
So, anecdotally, the most amoral programmers that I've ever worked with have ended up at Facebook. I'm sure there are decent people there, but I couldn't personally work there in good conscience.
I don’t think the existence of good people or what any one’s intentions are matter at all. I doubt anyone can change the course or Facebook. The stock price runs the show.
The disconnect is that Facebook is coming at this with the assumption that it is right and proper for Facebook to exist. The rest of us don’t make that assumption. So “how can Facebook best serve kids” might be “withdraw from routing tables permanently” but that isn’t on the whiteboard in Zuck’s office.
FB seems guilty, only because their internal findings were leaked.
I have no empathy for them. They bring out the worst in Humanity. They build walled silos of festering hate and anger, all driven by "user engagement", "hours on site" and money.
Just remember that rationality is bounded, i.e. there are problems chimps with six-inch brains can't solve. It's the classic Jurassic Park story, where man says he can control anything. And then realizes he can't. By which time it's too late.
This is why the road to hell is paved by "good people who have kids" with their good intentions.
FB's issues did not appear yesterday.
Like the endless war, the issues were there right from the start. So why are we talking about it today? Because lots of good people didn't do anything, not because they aren't good or skilled, but because the problem is too complex for them.
This is where bounded rationality helps resolve issues. If the problem is too complex, pick a simpler problem.
This is hard for some chimps to do, for various reasons. So entertaining them is a recipe for disaster. Their narrative will always be: "People are good. People experienced World War 1. They know what's at stake. They lost family, friends, body parts. Many are great heroes. Trust them. They know what they are doing." And still we got World War 2.
Why? Because rationality, skill and experience don't matter for some problems. All the "good Germans", from politicians to religious leaders to military and intelligence leaders, knew Hitler had to go long before any notion of war entered their minds. Every coup and assassination they plotted, they second-guessed themselves. All of them ended up dead.
(Note: "Connecting people" here does not mean providing communications services, it means using behind-the-scenes, unconsented, and sometimes deceptive tactics to figure out whether and how people are connected to each other IRL.)
Andrew Bosworth
June 18, 2016
The Ugly
We talk about the good and the bad of our work often. I want to talk about the ugly.
We connect people.
That can be good if they make it positive. Maybe someone finds love. Maybe it even saves the life of someone on the brink of suicide.
So we connect more people
That can be bad if they make it negative. Maybe it costs a life by exposing someone to bullies. Maybe someone dies in a terrorist attack coordinated on our tools.
And still we connect people.
The ugly truth is that we believe in connecting people so deeply that anything that allows us to connect more people more often is de facto good. It is perhaps the only area where the metrics do tell the true story as far as we are concerned.
That isn't something we are doing for ourselves. Or for our stock price (ha!). It is literally just what we do. We connect people. Period.
That's why all the work we do in growth is justified. All the questionable contact importing practices. All the subtle language that helps people stay searchable by friends. All of the work we do to bring more communication in. The work we will likely have to do in China some day. All of it.
The natural state of the world is not connected. It is not unified. It is fragmented by borders, languages, and increasingly by different products. The best products don't win. The ones everyone use win.
I know a lot of people don't want to hear this. Most of us have the luxury of working in the warm glow of building products consumers love. But make no mistake, growth tactics are how we got here. If you joined the company because it is doing great work, that's why we get to do that great work. We do have great products but we still wouldn't be half our size without pushing the envelope on growth. Nothing makes Facebook as valuable as having your friends on it, and no product decisions have gotten as many friends on as the ones made in growth. Not photo tagging. Not news feed. Not messenger. Nothing.
In almost all of our work, we have to answer hard questions about what we believe. We have to justify the metrics and make sure they aren't losing out on a bigger picture. But connecting people. That's our imperative. Because that's what we do. We connect people.
When this statement leaked, he made the bullshit claim that he was playing devil's advocate. He certainly wasn't. This post was made at the same time as another leaked one about messenger adding a deceptive interstitial to get people to agree to share their number and contacts with FB.
The hard irony is that Facebook is just another mechanism to fragment people. It is no different than these other "borders, languages, and increasingly by different products".
It seems that the author is operating under the assumption that if everyone is inside of their product, the world won't be fragmented anymore. People will be connected.
Yes. They will be connected. To the product.
We can do better than this dreary future. It is possible to connect people as peers, without the exploiting hands of intermediaries like the executive who wrote this statement.
Zuckerberg is making an almost entirely emotional appeal in his statement. Most of his claims are not backed up / buttressed with facts, numbers, and specifics. The statement is designed to make a reader feel bad for Facebook as if Facebook was a friend, and not a corporation with billions of dollars in quarterly profits.
Though the statement seems well-meaning, etc., it is weaselly and manipulative. It also conveniently doesn't address some of the deeper issues from Frances Haugen's testimony.
For example, Haugen focused on the fact that Zuckerberg has created a relatively flat organization, where if decisions help the core metric they must be good, and vice versa. Haugen testified that Zuckerberg was made aware that instituting a newsfeed tweak would entail a) a small ding to the core engagement metrics and b) decreased violence in Ethiopia... He chose the metric over the decreased violence.
There comes a point where blindly pursuing metrics -- be it money or engagement -- without regard to the effects on society is hard if not impossible to distinguish from sociopathic behavior.
Also, let's not forget that researchers and renowned statisticians employed by / sponsored by Big Tobacco (e.g. R.A. Fisher) convinced themselves that smoking didn't cause cancer. [0]
How so? He asserted several apparently factual claims that would basically undermine or make irrelevant most of the commentary in this thread, for example:
- Social media can't cause "polarization" because the measurements of that are going down in most of the world, except the USA. But social media is heavily used everywhere.
- It makes no sense to claim an organization doesn't care about X when it heavily funds research into X.
- If you react to a company researching the harms of its products by leaking everything and publicly accusing the company of being evil, other companies will simply not do research into the harms of its own products.
The second two are just logic. The first would benefit from a citation but I'll take his word for it.
"Ethiopia violence: Facebook to blame, says runner Gebrselassie" This is the headline from the BBC in 2019. It makes me so angry and upset. If Facebook were run ethically, how much smaller would it really be? 10%? 20%? I can't help but think that although they would lose some customers, they would also gain others.
> Now that today's testimony is over, I wanted to reflect on the public debate we're in. I'm sure many of you have found the recent coverage hard to read because it just doesn't reflect the company we know.
Yeah, we know you don't know, because you're looking at it from on top of a mountain of 100 billion dollars, Mark. There isn't a single damned thing that can change your picture of it being the absolute greatest thing ever.
> We care deeply about issues like safety, well-being and mental health.
What you care about, and what you say you care about, are nothing compared to your actions.
> It's difficult to see coverage that misrepresents our work and our motives.
We don't know your motives, other than the obvious ones. We know your actions.
> At the most basic level, I think most of us just don't recognize the false picture of the company that is being painted.
That's because you live in a big dumb bubble where chat apps are somehow world-changing innovations and creepy stalker behavior is completely fine to you. You are out of touch and people are screaming it at you. You think you are entitled to encroach on everyone's private lives, intermediate on every interaction and mine it for vulnerabilities to auction off to advertisers. Your entire model of the world is broken, Mark. No wonder nothing makes sense.
Stopped reading after this point. I'm sick of billionaires with megaphones blaring their virtues.
> I'm sick of billionaires with megaphones blaring their virtues
As a society, the US has shifted its values from intellectually sound principles to whatever rich people shout out.
I vomit in my mouth when I see videos of people showing currency, of people talking to you about "doing the hustle", etc etc.
The US has fallen into an abyss of moral decline, where the value of your words is directly proportional to the amount of wealth you have managed to gather, no matter the means.
> As a society, the US has shifted its values from intellectually sound principles to whatever rich people shout out.
Nothing exemplifies this as much as the whole situation surrounding public transport in the US.
A topic that's generally scoffed at "Everybody has a car, why would the US need a high-speed rail network?!"
At least until some billionaire presents his newest "innovation" by putting people in some pipe or another and allegedly making them go 600+ mph with magic inertia dampeners; then everybody loses their collective poop over this amazing idea by that amazing entrepreneur!
Then they end up with a bunch of cars being driven through a tunnel, still no high-speed rail, but they can't wait to chase after the next billionaire promising to shoot people through tubes at deadly speeds.
Would be excellent satire if it wasn't actual reality.
So agree. Comes off as super cringe. Silicon Valley has lost any moral high ground from the early days and should act like any other corporation: "it's not illegal and it would lose us money to change, so why would we change it?" That would be honest, logical, and frankly refreshing.
Sorry, what "moral high ground" did it ever have? SV is just a place for smart people to create interesting new toys, and has developed an incredibly predatory and monopolistic VC-funded business practice in the process. For some reason, its inhabitants have determined that this makes them morally superior to everyone else.
I believe Zuck became a billionaire by creating value for people.
For creating something from nothing. So if he has 100 billion, it's just a tiny fraction of the value society got from him in return. I wish there were more people like Zuck, Elon Musk, etc.; these are the people that advance society.
I read so much hatred towards rich people here instead of praise for them. It somehow gives me the chills to know there are so many people around me who are full of baseless hatred, to the point that they are "sick".
> And if social media were as responsible for polarizing society as some people claim, then why are we seeing polarization increase in the US while it stays flat or declines in many countries with just as heavy use of social media around the world?
That's a point I've made myself before, and I think there is some truth to it. Social media can't explain the high and increasing degree of US political polarisation, because other countries (including other English-speaking countries) consume social media about as much and yet have less polarisation, and don't show the same degree of increase in it either. The real explanation must lie in other aspects of US culture, or the US political system.
That is because most other countries have multi party systems.
The polarization is still there, but spread thin amongst various factions.
In the US, people are shoehorned into R or D.
Edit : I would also like to point out that the OP is a bit confused between cause and effect. In the US, the effect is deep polarization. However, the cause is the power of mass communication, especially misinformation and blatant lies, that FB enables and does not bother to control. The cause is common to all the countries in the world, the effect varies due to various other factors, one of them being the presence of a multi-party system.
Take the example of India. FB has a large and active user base. However, India being a chaos of various identities, cultures, regions, languages, etc, divisions in society are less pronounced as there are a large number of players (politically, regionally, locally, etc)
Other than that, the effect of FB in Europe is also less visible due to the same reason. Every EU country has mostly multi party systems, leading to spreading thin of the hate and focus.
The UK doesn’t have a true multi-party system. First-past-the-post elections all but ensure there are only two dominant political parties. It’s really only been a game between Labour and Conservatives for longer than most of us have been alive.
Not sure if I fully agree with that argument. With the US being a superpower, the stakes are high and this invites unique foreign influence via social media. That foreign influence could be state coordinated or just come from regular people who have an emotional stake in US politics. I've seen it myself, outside of the US people are more concerned about US politics than their own and engage with it on social media which only adds to the pile of insanity. Also, because of limited bandwidth, only a small handful of topics tend to monopolize, and as such they are usually US topics.
The political climate in the UK is pretty savage right now, Brexit pretty much split the country in two. I don’t think this is just a US phenomenon, especially given a lot of what happens in US politics bleeds into the rest of the world (which is why a lot of us follow it so closely).
...or, did tabloid-style "journalism" bleed from the UK to the US as they both reach new heights in fear-mongering and blaming the out-group to get rage-clicks as news organizations' incomes decline?
>The real explanation must lie in other aspects of US culture, or the US political system
First-past-the-post voting leads to a two party system which leads to more polarization. If you have multiple political parties, polarization can only go so far because there will be centrist groups working against polarization. In the US, there is no force pushing for centrism.
I despise first-past-the-post, but it does not force two party systems. Look at the UK and Canada, who have two parties able to form a government, but multiple viable parties.
If you want a culprit, blame the lack of whipping in both US parties, a matter of political culture. Since you have historically had all these cuckoo-crazy subgroups within both parties, other parties simply have not had the bandwidth to exist.
However, in the last four years, the Republicans have started to blindly follow their leader as a matter of pride. I see that as the one positive change the Trump era has brought; yes the man to bring it about has been a disgrace, and has used it for vile purposes, but if both parties stick to following their political platforms and leaders far more, the US can end up in a better place.
This will sound dramatic, but could it be because the US is the most important market for both advertising and political disinformation?
Regarding advertising, for companies like Facebook, they may have billions of DAU but still derive the majority of their revenue from rich countries like the US. In 2020 the US and Canada were 45% of Facebook's revenue according to this random website I found: https://statstic.com/facebook-revenue-by-geography/ That's way more than the other regions in terms of $/user, so Facebook is a lot more incentivized to over-optimize for engagement based on US users. Depending on how much the algorithmic feed changes from country to country, it's possible that other countries experience less polarization simply because their culture is weighted less in training the feed's engagement optimization algorithms.
Regarding disinformation and political destabilization, most countries simply aren't relevant enough on the world stage to be worth investing money in targeting in this area. The US, China, Russia, UK, Germany, France, Japan, and India are all probably relevant enough. China and Russia effectively don't use Facebook and are the most obvious non-US-aligned bad actors. They would also get way more bang for their buck targeting the US than other countries. Note, I don't think this is as widespread a problem as many people think it is, but I bring it up because it's relevant in the context of political polarisation, since there is strong evidence that it occurred at least in the 2016 election: https://www.theguardian.com/technology/2017/oct/30/facebook-....
Actually, the two combined can be scary. If you can use outrage (great for engagement) to drive engagement in your content which is designed to politically destabilize the US, you can get a huge reach. This is effectively what you see a lot of the time in highly-engaged US content on facebook anyway: politically inclined outrage.
It's the US legacy media that is the main reason for the polarisation - the two party system is a factor, but the regular media is the one pitting "us vs. them" in every single minute of every broadcast.
Social media has downsides, but to lay the blame firmly at the feet of Facebook is to willfully ignore the culpability of CNN, Fox, the NYTimes, Washington Post, and many other legacy media outlets that are making tons of money otherizing the part of the country that is not their readership.
The premise of this argument is false. In Germany, where I live, polarization has increased dramatically, especially since COVID. I don't know about Facebook, but the tone on Twitter is harsh, the hate is palpable. The difference is maybe that unlike in the States it's not a 50:50 split, but maybe 80:20.
NBER's study [0] found (West) Germany had the biggest decline in political polarisation over the 1980-2020 period of all the countries they included in their study (12 OECD countries).
Doesn't entirely contradict your position, given that it was specifically measuring polarisation in terms of attitudes towards political parties, and so may not be good at measuring forms of polarisation that do not map straightforwardly to political parties; and looking at long-term trends over 40 years doesn't tell us much about how people have reacted to something which has only happened in the last 18-24 months.
The study failed to find any statistically significant correlation between political polarisation and proxies of social media use (Internet penetration and online news consumption; the study authors did not have data on social media use itself).
I'd also like to see references on which these countries with declining polarization and high FB/IG-usage would be. It's not a trivial or uncontroversial thing to quantify but hey, MZ brought it up.
It’s interesting that you don’t even agree with the claim that Zuckerberg makes: that polarization is declining or flat outside the US. Your claim that it’s less than in the US, but still increasing outside the US, is a lot more aligned with reality.
But the fact is that this isn’t the defense Zuckerberg thinks it is. In fact, it may even suggest the absolute opposite.
Facebook has never been as popular outside the US as it has within it. The best indication of this fact is FB’s $19Bn purchase of WhatsApp which was largely driven by the fact that FB Messenger was basically only popular within the US, with the rest of the world preferring WhatsApp, which was also an indication of how FB’s network was simply not as entrenched outside the US as it was within it.
The more likely reason, however, is that assuming FB or social media in general increase polarization, it would almost certainly worsen it in the US more than anywhere else because of the US’s fairly unique 2 party structure combined with the primary system, both of which would almost certainly exacerbate any polarization effects caused by an external factor.
According to one researcher (https://www.brown.edu/news/2020-01-21/polarization), polarization is increasing as parties become more closely aligned to ideologies (e.g. religion, race). Looking across the aisle, your opponents look more different than they did a decade ago.
Why? My theory is that data mining and software is identifying and targeting seams of ideology that are most readily influenced, so in effect campaigning efforts are efficiently widening the divide between parties. Social media just happens to be the choice source of this data, as well as the medium to influence.
This is a good observation, but I don't think it absolves social media. There are likely many factors increasing polarization, and social media may amplify those factors and/or turn otherwise harmless factors into harmful ones.
The core idea behind social media is that it amplifies various voices, instead of only those few whose job it is to participate in the media. In the US, this has effectively turned our right to freedom of speech into a right of freedom to broadcast.
The total size, wealth, and political influence of the US are much larger than the most culturally comparable nations. The value for those within and without the nation to spend effort using social media to influence discourse and election results is high, and is likely at least a partial factor.
That's worded to tiptoe around the truth, which is that in other countries FB is increasing polarization, that FB knew about it, and that FB tried to ignore it - from the third^ file:
>In Poland, the changes made political debate on the platform nastier, Polish political parties told the company, according to the documents. The documents don’t specify which parties.
>“One party’s social media management team estimates that they have shifted the proportion of their posts from 50/50 positive/negative to 80% negative, explicitly as a function of the change to the algorithm,” wrote two Facebook researchers in an April 2019 internal report.
>Nina Jankowicz, who studies social media and democracy in Central and Eastern Europe as a fellow at the Woodrow Wilson Center in Washington, said she has heard complaints from many political parties in that region that the algorithm change made direct communication with their supporters through Facebook pages more difficult. They now have an incentive, she said, to create posts that rack up comments and shares—often by tapping into anger—to get exposure in users’ feeds.
The Facebook researchers wrote in their report that in Spain, political parties run sophisticated operations to make Facebook posts travel as far and fast as possible.
>“They have learnt that harsh attacks on their opponents net the highest engagement,” they wrote. “They claim that they ‘try not to,’ but ultimately ‘you use what works.’ ”
>In the 15 months following fall 2017 clashes in Spain over Catalan separatism, the percentage of insults and threats on public Facebook pages related to social and political debate in Spain increased by 43%, according to research conducted by Constella Intelligence, a digital risk protection firm.
Facebook researchers wrote in their internal report that they heard similar complaints from parties in Taiwan and India.
Add to that the revelations from yesterday's^^ hearing regarding FB's role in violence in Myanmar and Ethiopia, plus repression in the PRC and Iran, and there is no other interpretation than that Mr Zuckerberg is lying.
Is it true that polarization has not increased in other countries? Is there published data on it? Would be interesting to see which countries are experiencing historically high levels of polarization.
Their main results are on the page numbered 20 (page 21 of the PDF) – their data shows political polarisation has grown (over the period 1980-2020) in six OECD countries and declined in six OECD countries. The six countries where it has grown (in order from greatest to smallest growth) are the US, Switzerland, France, Denmark, Canada, and New Zealand. The six countries where it has declined (in order from greatest to smallest decline) are (West) Germany, Sweden, Norway, Britain, Australia and Japan.
They don't have any direct measures of social media use in their source data; the closest things they have are Internet penetration and consumption of online news, but they found no statistically significant correlation between those and polarisation. The only clearly statistically significant correlation they could find was a positive correlation between polarisation of societal elites and that of the general population (p=0.011). They also found a positive correlation between increasing racial diversity and political polarisation which was of borderline statistical significance (p=0.052).
What gives you the impression that countries besides the US are not becoming more polarized? My experience in Europe and South America make me think it's getting worse everywhere.
Obviously it depends on the actors on social media what level on harm social media can cause. If there weren't political forces, foreign governments and troll media such as Fox and friends then social media could be much less harmful.
It requires considerable expertise and resources to continuously spin up disinformation campaigns. The US leads here because of the size of its market and the determination of its adversaries.
Mark tries to spin the responsibility away in the quoted sentence.
The fact that people such as Peter Turchin predicted a peak in US political polarization starting in around 2020 (a prediction made in a published paper in an academic journal in 2010) shows that surely FB is not the immediate cause. But that's kind of like how gasoline doesn't start fires. It doesn't, but it accelerates them, and makes the consequences worse.
Given the late 1800s and early 1900s US were also very polarized, part of me wonders if the post-Cold-War world is returning to some sort of polarized state. It can feel like there are a lot of different perspectives on the way things should be, and thus there's less certainty where to go.
That said, I'm not a historian so take my shower thoughts with a teaspoon of salt.
One possible explanation is that Facebook in US is subject to more adversarial actors than other countries.
A propagandist-for-hire choosing to astroturf, conduct false flag campaigns, form vote brigades, etc. is more likely to target the US political market because -- globally speaking -- there's more political power at stake.
Whether FB can claim that as an excuse is another question.
How would you categorize FB/whatsapp-driven genocide in Myanmar[0], polarization(+) or polarization-lite?
The semantic acrobatics in statements like this drive the discussion into pedantic details that lose sight of the fact that, on net, this is a cancerous product, with a pathology unique to each culture it touches.
Like, there have been leaked FB PM discussions on this very topic, and here he's phrasing it as a product challenge rather than the moral calamity that it is.
Or it could be the enemies of democracy can leverage social media for division and the enemies of authoritarianism can’t. And some countries don’t have as big of targets on their backs as the US.
Because the rest of the world is behind the curve, the US is merely leading the charge: its established political landscape perfectly plays into the filter-bubble polarization FB feeds on.
Something that's quite observable in places like Germany: election campaigns in Germany used to be quite boring. There was no such thing as "political attack ads"; parties kept their campaigning to the topics they stand for, instead of attacking other parties or candidates on their particular positions.
At least that's how it used to be for the longest time, but during the last decade the whole tone around German elections became noticeably more hostile, something that directly correlates with the rise in popularity of the AfD.
Said AfD hired the American firm Harris Media for their campaign strategy, the same company that won Trump his presidency [0]. They not only introduced the wonders of micro-targeting, which had already contributed to the Obama presidencies, but also added their American flavor of running political campaigns with these extremely hostile overtones.
The latest highlight of that escalation, during the recent election, has been far-right parties literally calling to "hang the Greens" [1]
Or to give another, often overlooked, example: the US wasn't the only country that saw its capitol stormed in recent times. An attempt was also made in Germany, but there the police actually stood their ground [2]
And while most of the big established parties condemned what happened there, the rising far-right ones did nothing of the sort; they were right there riling people up.
You don't need to go anywhere else to find polarization. Right here on HN, conservative points are downvoted, flagged and shunned despite having excellent credibility, just a different perspective on core values. This was not always the case; note how few downvoted comments there were in this highly debated topic in 2016: https://news.ycombinator.com/item?id=12926678
It's one of those things that says a lot about the people on HN and their tolerance for a wide range of opinions. And yes, we all agree that rudeness, disrespect and violence have no place on HN. But disagreeing on tax policies? Property laws? Patents? Let people talk about it.
I really don't see it different than anti-liberal hostility on conservative forums. It's the same. Exactly the same.
Let's change this and exercise some restraint. Most people on either fence have good faith.
You need at least a third viable political party to fix this. A two-party system is polarized on issues: Democrats believe conservatives are homophobic, racist womanizers, and conservatives believe liberals are communist, devil-worshipping pedos. It doesn't matter if either of them has facts; the other party is so opposed they won't listen.
With the caveat that I don’t know much about Facebook’s research projects… the question “If we wanted to ignore research, why would we create an industry-leading research program to understand these important issues in the first place?” sets alarm bells ringing for me—because I can imagine a cigarette company saying the same thing about the scientists they hired to argue that smoking isn’t so bad after all, or an oil company saying the same thing about the scientists they hired to argue that it’s too soon to act on climate change (especially since they’re “leading the charge” on renewable energy research).
(I’m influenced here by Naomi Oreskes and Erik M. Conway’s work in “Merchants of Doubt.”)
I think that the point here is that all the controversy is because of a study from Facebook's own research. If a cigarette company came out with a study showing cigarettes were dangerous, perhaps it would be a different story.
Of course, I wouldn't say that being aware of the problems your product causes makes you less culpable; just the opposite.
At the same time, I think that when it comes to assessing whether the company is acting in good faith or not, it is important to consider not just what research they do or don’t do, but also which findings they choose to promote and which they choose to keep internal…
(Part of the fiasco with tobacco or climate is that those companies knew the risks associated with their products well enough, but didn’t publicize the findings that might cast them in a bad light in the same way that they publicized the findings that supported a “let’s wait and see” approach to policy. See for example https://www.scientificamerican.com/article/exxon-knew-about-...)
Sure, but if a cigarette company commissioned a whole load of research, sat on the "bad" results and publicized the "good" ones, and then a whistleblower came forward and showed the world the bad ones they had sat on...
Google handled this in the right way: fire researchers who are clearly pursuing anti-company agendas, always. It's worse to pretend to support research (or "research", which TBH is what most of the anti-AI shite at Google was, bad actors working with external groups to "take down" big tech, funnel docs to DSA, etc) that hurts you than it is to just stop pretending you care what a bunch of hard-left extremist assholes think.
Facebook has a tough problem, though, because a lot of their criticism is not just one wing of loonies, it's from every direction including normal sane people, and a lot of them agree with each other for once. I don't think they're breaking any laws, but I don't doubt someone will invent a shady case or charge them under a new law just because everyone hates Zuck. Not sure how you dodge that.
He can step down from active management; he doesn't really need to run it anyway. FB is the only big tech company that is still founder-run (largely because it's younger).
They could split, diversify, or dilute the brand and products the way Google has done, making it harder to associate them with the bad things directly.
> If we wanted to ignore research, why would we create an industry-leading research program to understand these important issues in the first place?
I mean...precisely to be able to control that research. Exactly like companies in other sectors (tobacco|oil|pharma|AI). It's asinine that this is presented as a rhetorical question [1].
For those (myself as well) comparing portions of this response to tobacco, oil, and pharma companies controlling (and manipulating/suppressing) research, an example to keep in mind that's much closer to home would be Google censoring negative AI research [2].
[1] See for example, The Influence of Industry Sponsorship on the Research Agenda: A Scoping Review
> If we attack organizations making an effort to study their impact on the world, we're effectively sending the message that it's safer not to look at all, in case you find something that could be held against you.
Someone on HN actually linked to the slides, and it does seem like there's a lot of FUD being created.
This reminds me of moxie talking about damaging research focusing on Signal:
> To me, this article reads as a better example of the problems with the security industry and the way security research is done today, because I think the lesson to anyone watching is clear: don't build security into your products, because that makes you a target for researchers, even if you make the right decisions, and regardless of whether their research is practically important or not. It's much more effective to be Telegram: just leave cryptography out of everything, except for your marketing.
Three minutes of my life I'm never going to get back. You can stop reading by paragraph two at "We care deeply about issues like safety, well-being and mental health." All Facebook cares about is engagement.
Yup. I once saw an accountant write, "Don't tell me your priorities. Show me your budget and I'll tell you your priorities."
Facebook's whole history is not really caring about those things. What they care about is not looking bad. Which is why they trot out those lines when things get too awful for the public, the press, and legislators to ignore.
"We are so, so sorry you caught us {doing, allowing} this thing. We are deeply embarrassed that we didn't hide it well enough from you. We promise to take the time to do the work so that in the future you won't discover us still doing it. We deeply value {word spew of the month} and look forward to you believing our apology well enough to continue using our service."
I have NO doubt that if you asked Mark to push a button in which all the engagement were healthy, he'd do it after being assured the engagement amount wouldn't change.
He probably does care about "safety, well-being and mental health" (though not about serial commas, apparently). He cares about them in the same way that the vaping companies care about lung health (they exist to reduce cigarette smoking, right?)
Same sentiment after reading this. He is basically saying "we have an industry-leading research team, so even if we don't make decisions aligned with their recommendations, we care deeply about their work."
I handled Mark's rambling post like many others similar to it, I reported it as 'false news - politics' (in this case^) and then sent a message to the author politely detailing why.
^ After niceties, I started with a quote from his post which was false, and then the explanation from reliable news sources of why it was 'false news - politics.'
Unfortunately most of us unknowingly are giving away our time, attention and data to the likes of Facebook, etc. and don’t know what it’s costing us. Join the club and delete, take away their power! You can socialize and share just as easily without these jokers.
I did think the fact that his "we're not actually pure amoral engagement maximisers" article was gated behind first signing up and logging in was just perfect under the circumstances.
Honestly, that seemed like quite a sufficient summary of whatever the text was.
I’m with you. Facebook brings way more good to the world than bad. And the bad it brings is just amplification of what already exists. And with the Internet that amplification will happen with or without facebook.
Also the fact that much of the research / “leaks” are showing most of the issues are only happening in the US is a sort of a glaring hole in most of the arguments being made against FB. The US is just borked at the moment. Everything is red vs blue, and no one even cares what the issues are at this point.
Most people completely have their head in the sand about how utterly awful the average person is. A massive chunk of what people complain about is simply attributable to the fact that people are monsters, and that global connectivity means we can now all talk to each other without distribution being tightly bottlenecked and controlled (eg publishers, broadcasters, radio, etc). There's little that Facebook is accused of that seems as salient as this fact, and little they could do that would avert much of what we're seeing.
Oddly enough, I've felt for a long time that Facebook is among the most unethical companies out there. It's very weird to see it facing a potential reckoning on the back of such incredibly weak claims.
Well, to start, Facebook is neither good nor bad - it is not a person and does not have morality. The human mind is not really capable of dealing with an abstract entity such as a corporation, so we anthropomorphize the large mess of people, software, and hardware presenting us a little app on our phones as imbued with a personality and morality. Then we debate whether this mess, which has Zuck's vague face over it, is a personification of good or evil. It's neither - it is a morality-free phenomenon.
Facebook's properties have benefits for certain slices of the population in certain instances, and negative effects for other parts of the population. Businesses rely on Facebook's properties for communication, people rely on Messenger and especially WhatsApp for communication, artists rely on Instagram for inspiration and social outreach, and teens use the properties to connect with each other and to feel sad or happy about themselves in social contexts (as teens tend to anyway). In some cases the teens are happier, in others sadder.
I got to know my gf over Messenger, I have wasted tons of time on Facebook, Instagram has been a medium influence on me. Life happens, people happen, and now social media happens.
Zuckerberg is the public face of the organization now and it is his life's work. He has an entire society's worth of social media interactions he is tasked with controlling, censoring, and maintaining. People tell him they want it free for some speech, closed for other speech; they want Facebook to hire tons of people to censor and curate the content; they want great decisionmaking; they want to be free to say anything they want to say (so long as it matches their specific moral values); they want their kids to only see good things online; they want their kids to find what they need but see only what they should see on all of Facebook's multifaceted properties. It is a huge task.
Why would Zuckerberg intentionally want to fail at this task and have people hate his company? To make more money? He is not an idiot and knows that the success of the company is dependent on its reputation. How exactly could he NOT do his utmost best to mitigate the negative effects of his properties, enhance the positive effects and thereby positively influence everyone - the users, the company, himself? I don't understand the cynicism which drives all these comments either
Your argument is the same as “Guns don’t kill people. People kill people”. It’s still something with potential to cause a great deal of harm and should be regulated.
> Why would Zuckerberg intentionally want to fail at this task and have people hate his company?
Nobody wants to fail at anything. Doesn’t stop people from failing all the time. Nobody wants to get fired but if you suck at your job you get fired all the same.
You’re approaching this way too logically without considering the social aspects. Stop trying to think like a robot.
I get why people hate Facebook, but to me Facebook is no worse (or better) than what a top social network service would be, in this format (a timeline that you can post anything) at least.
If FB shut down today, another one would soon occupy exactly the same space and have the same influence on society.
They do a lot of good bringing people together, but it’s a double edged sword with massive potential for abuse. It’s not up for debate that Facebook causes harm. The only question is how much is too much? At what point do we step in, tell them their mitigations are ineffective, and break up or regulate their systems?
The question is how many alternative systems and services that people could use to keep in touch with their loved ones were steamrolled by Facebook's network effects and questionable business practices.
Services that might not require you to expose your real identity to the whole internet, that might not exfiltrate your full contact list from your phone or would not apply questionable morals and censorship on the content you exchange with your loved ones.
A messenger app brings people together. Such an app doesn't need an algorithmic feed to hook people on junk/harmful content. That was added for one reason, money.
> In fact, in 11 of 12 areas on the slide referenced by the Journal -- including serious areas like loneliness, anxiety, sadness and eating issues -- more teenage girls who said they struggled with that issue also said Instagram made those difficult times better rather than worse.
All the reporting has been about how the research found that Instagram was so terrible for teenage girls, but that seems to be a total mischaracterization. Honestly, it seems like if you ask teenage girls about anything (clothing stores, schools, television) there's going to be a mix of positive and negative experiences. Is the bar we are holding facebook to that no matter how much good they do that any negative experiences outweigh that? Is that a bar we would hold anything else to?
> if social media were as responsible for polarizing society as some people claim, then why are we seeing polarization increase in the US while it stays flat or declines in many countries with just as heavy use of social media around the world?
I have a kind of unpopular belief that we'd still have similar polarization if misinfo were just distributed on Fox + MSNBC, websites, and Twitter.
I think Facebook is “that bad”, but one point I thought was believable was about major advertisers not wanting to be associated with hate mongering posts in the first place.
I’m prone to believe that, but also not sure that is has anything to do with the other issues at hand.
I have never signed up for Facebook but if my family members have ever "uploaded their phone contacts to find more Facebook Friends" or if my relatives have ever uploaded pictures with me in the picture ((Facebook has detected a face we don't recognize. Please tag the person in this picture)), or if I have a US Government Name and/or Social Security Number, there's a company that is building a DATABASE DOSSIER with information about me. I cannot ask them to please DELETE information they have about me because those Shadow Profiles are SECRET, I'm not even supposed to KNOW that these creepy data-scrapers have a file on me.
Facebook is just the front end for the NSA / surveillance state backend, and at this point it's frightening how Too Big to Fail they have become.
When you're designing a Big Brother dystopia in which everyone is tracked and surveilled, step one is to create something like Facebook. "All your friends are doing it! You should too!"
No, not the only one. I don't like the recent narrative that Facebook is an extremely evil corporation responsible for most issues in the world. I think people overestimate the influence of Facebook and just want to live in a world where everything they don't agree with is fake news, controlled indirectly by Russia and spread through "useful idiots" (usually defined as people from the opposite side of the political spectrum, because my side of the political spectrum only consists of highly intelligent, empathetic people who analyze every news article in detail).
I suspect you, like I, have either minimal FB use or have healthy guard rails for our use and expectations.
A non-trivial portion of the country doesn't have that mindset. Further, many in that group lack an understanding of FB's ability to influence widespread opinions.
Different groups get mad at Facebook for different reasons, so inevitably they get a lot of hate.
In the context of the whistleblower:
One of the popular methods of manipulation is to use "harmful to children" as a basis for making an argument. We have seen this countless times in the past on a variety of issues. This is no different. The "harms" that are being highlighted here are equally, if not more, applicable to adults. Children are at a stage in their lives where good parenting can easily offset any potential harm by consuming content on Instagram or Facebook.
One can recognize that this particular topic, like many contemporary topics, is a subset of the overarching libertarianism versus authoritarianism debate, and opinions often cleanly fall on political lines depending on the complaint. In this case the whistleblower has left-of-center politics, so they have a grievance with "disinformation" and "not enough control". There have been previous whistleblowers who have had right-of-center politics, who have cited "censorship" and "biased control" as their grievance. There is ample evidence for the company being guilty of both, with regard to specific instances.
As such there will always be complaints from opposing points of view as to whether the company is doing "enough" to police content, or whether the policing has become biased. Amusingly, you see the reverse of this debate when you look at actual policing in the USA, where the opposite side argues bias in policing and the other side argues for harsher control and punishment.
Those who fall on either side of the spectrum tend to paint with a broad brush some kind of systemic evil conspiratorial agenda at the company, as a consequence of voicing their respective frustrations.
Overall Facebook is a net positive for the world. There are likely activists within the company trying to push agendas, some of whom may be prevailing over others. This is evident by just taking a walk around campus and reading the political messaging that adorns the various shared spaces. These are also largely irrelevant in the long term because if and when it reaches any sort of extreme, eventually the pendulum will swing too far.
The best I can come up with is that Facebook is so big that the "evil" is an emergent property of all the different things that are happening. It's so big no one can comprehend the big picture of it all, so while the individuals involved have good intentions with what they are working on, the sum total of all employees' intentions ends up broken.
So maybe Zuck is telling the truth here, that they are trying to fix all this. But no one can see the forest for the trees.
I can't reconcile it any other way.
But there's more at play here. I briefly worked on Twitter's anti-abuse engineering. Many of the people on that team cared a lot about protecting people. I sure did. But we didn't have the necessary power to actually solve the problem.
The people who did have that power were senior execs. They might say that they cared. In their heart of hearts, perhaps they even did. But their behavior demonstrated that they cared about other things much more.
My boss's boss, for example, was an engineering leader who had a climber's resume: quickly advancing through positions of more and more power. In my view, he cared about that a great deal, and did not give a shit about the actual harm to users. As soon as he got the chance, he pushed out my boss, laid off the team's managers, me included, and scattered the people to the wind.
I presume the same was true about the senior execs. They were aware Twitter was causing harm to people. If they wanted to know the details, we had plenty of research and they could have ordered more. Did they care? Impossible to know. But what they focused on was growth and revenue. Abuse was a big deal internally only as long as it was a big deal in the press.
Could this just be an issue of too many problems to care about and not enough time to solve them all or do you think the indifference was intentional?
When an organization becomes too large or bureaucratic, these Dark Triad types hide and typically exert their influence and power and will behind the scenes. This is why these companies seem “evil”, but it’s usually not the founders’ fault, a lot of times they’re unaware of it, or one of the founders is also a sociopath and will protect the evil cabal. That’s my two cents about it anyway.
Isn't it just about that in the end? I think being good or not is about whether you give yourself the room to do the right thing even when other pressures exist – because they always exist.
Being good can be hard, because sometimes it means you have to abandon your usual priorities and stand up to the consequences which will emerge from that decision.
Haugen repeated over and over in her testimony today that Facebook is full of smart, thoughtful, kind, well-intentioned people, and that she has great empathy for them, even Mark Zuckerberg. Her point is that they have created a system of incentives that is inexorably leading to harmful outcomes. It is not about good and evil people, it is about the incentives. It's exactly as you are saying.
That's why she is not advocating to punish Facebook for being evil, but rather to force Facebook to reveal and permit research so we can understand the system and fix it, because Facebook is too deeply trapped in its own tangle of incentives to fix itself. In this I think she is absolutely correct.
Relevant testimony here: https://twitter.com/gillibrandny/status/1445451624005001217
Remember, this is the system of incentives that had us spend 20 disastrous years in Afghanistan, across both parties. And has failed to deal with climate change. And healthcare. And education. And wealth inequality. And housing. And... Siri, what's the definition of insane?
It creates a Hunger Games mentality within teams and makes doing anything that actually matters virtually impossible, generating an infinite sequence of half-assed six-month projects that get systematically abandoned as soon as the people responsible manage to get promoted or switch teams.
If the people inside were "smart, thoughtful, kind, well-intentioned people", they would have tried to work around the incentives, influence them, denounce them, or quit.
It rarely happens. Most of the time, they just take the money and go with the flow.
These days research comes with a set of politically charged assumptions; for example, the definitions of "hate speech" and "misinformation" differ depending on which political camp you ask.
So giving access to Cambridge Analytica is bad, but giving it to some other partisan "think tank" is fine? Who would make those decisions?
Imagine if you were trying to fix climate change, but under the condition that you weren't allowed to burn fewer fossil fuels. You may try very hard, and very sincerely, but it's a fool's errand.
> I don't think it's an emergent property, I think it's a by-product of the constraints.
> Imagine if you were trying to fix climate change, but under the condition that you weren't allowed to burn fewer fossil fuels.
There is one person who controls all the constraints: Zuckerberg. He even went so far as to enforce that through his stock classifications. It's entirely understandable and acceptable to have empathy for those working at FB who are attempting to solve the problems. But Zuckerberg made the decision to be the single source of the constraints that bind everyone below, and his constraints are: profit over all else. He should face consequences for setting those constraints, just as anyone should who set a constraint of "address climate change without adversely affecting GDP".
Separately: as the "revelations" of Zuckerberg's immoral behavior continue year after year, those who work for him while attempting to solve the problems should recognize, at some point, that the problems are insurmountable within the confines of the constraints. As that knowledge spreads, the question becomes whether those idealistically earnest individuals remain justifiably ignorant of the reality: that all their best intentions are moot in the face of the constraints Zuckerberg set. When or if they are no longer justifiably ignorant, they become culpable.
I like this. This helps me. Thanks.
Conflicting goals. And the one you want to fix is secondary to not losing money.
Ah, this is what I think of as Schrodinger's Accountability. Zuckerberg and Facebook's senior execs are simultaneously: A) so brilliant for running Facebook that they deserve to be incredibly rich, and B) so normal that they can't possibly be expected to understand the consequences of their actions, and so are morally blameless. Heads they win, tails we lose.
I say it's one or the other. If Facebook is too big to be understood, it should be broken up into small enough units that mere mortals can see the forest and tend it responsibly. And if not, the execs should be morally and legally culpable for the harm it does.
Those same people are protecting their children with $300k+ salaries and buying property in areas where they can send their children to Gunn HS. While I empathize with these people, the direct opportunity to protect your own kin should not be understated. Do they mean well? Sure. Are they putting their best effort into fixing things? Sure.
Here's the most important part:
Do they know deep down inside that the only way to fix these things is to hurt Facebook financially? Probably. But they also know that this means risking their ability to protect their own children (being forced to move, losing the job, less pay, etc.). What would you do? (I think I know the answer.)
This can't be overstated - in the end it doesn't matter what individual people at FB think, because no one person or group of people has any legal, economic, or logistical ability to control the company except Mark Zuckerberg. He is figuratively and literally impossible to fight. Well, unless everyone deleted their accounts.
The crazy thing is that FB has taken steps in the past to improve things that also hurt them financially (e.g., post-Cambridge Analytica). They just make so much money, and so fast, that it's like one or two bad quarters and it's over.
So (1) Mark being all-powerful means he alone can decide it's worth lower profits - he's done it before.
(2) The loss of profits probably wouldn't even matter.
Outrage attracts attention in all group interactions. I can't think of a single large scale group forum where this isn't true. It's integral to an absurd degree in our news cycle. Howard Stern exploited this property in his rise to fame. It's a core element in state propaganda, well documented throughout human history.
I'm old enough to remember when the internet was a lot more free - when there generally wasn't some parent corporation imposing content censorship on what you put on your homepage, or what you said on IRC. All of the complaints regarding Facebook were true of internet communications back then too (on the "sex trafficking" issue, compare to Craigslist of yore!)
The big difference seems to be there's an entity we can point a finger at now. Communications on Facebook aren't worse than what was on the internet two decades ago. In fact, they're far, far more clean and controlled.
What I look to is whether Facebook is more objectionable than alternative forms of communication, and I can't find any reason to believe that this is the case. Is twitter better? Is reddit? Is usenet? No.
So why does Facebook draw such ire?
Are people calling for controls on Facebook also calling for controls on self-published websites? On open communication systems like IRC or email? Where is the coherent moral philosophy regarding internet speech?
To be honest, my biggest concern when I read the news surrounding this issue is that most of the internet might not be old enough to remember what it means to have a truly free platform, unencumbered by moralizing. Why are people begging for more controls?
But I think there are a few key innovations that make FB worse for human psychology than previous iterations. Chief among them is the algorithmic newsfeed designed to drive engagement. Outrage certainly provokes responses, but in a chronological feed situation, eventually threads would become so large that the original outrageous situation would be pushed far back and the outrage would go away. Algorithmic newsfeeds bubble these to the top and continue to show them as they get more comments/retweets/shares/etc. They reward engagement in a visceral way that offers perverse incentives.
Secondly is the filter bubble. By showing you content hyper-relevant to your search interests, you can easily fall into echo chambers of outrage and extremism. Internet communities, like IRC channels, had huge discoverability issues. Each community also usually had disparate ways to join them adding another layer of friction. Even if you were an extremist it took dedicated searching to find a community that would tolerate your extremism. Now mainstream platforms will lump you into filter bubbles with other people that are willing to engage and amplify your extremist posts.
Combine horribly narrow echo chambers with engagement-oriented metrics and you'll have a simple tool for radicalization. That way when you're thinking of committing a violent act because of the disenfranchisement you feel in your life and your community, you'll be funneled to communicate with others who feel similarly and enter a game of broad brinkmanship that can quickly drive a group to the extreme. Balkanization and radicalization.
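The contrast between the two ranking regimes described above is easy to sketch as a toy model (purely illustrative; the post fields, numbers, and ranking keys here are invented for the example, and real feed-ranking systems are vastly more complex):

```python
from dataclasses import dataclass

@dataclass
class Post:
    text: str
    timestamp: int   # higher = newer (toy stand-in for a real timestamp)
    engagement: int  # comments + shares + reactions, rolled into one number

posts = [
    Post("vacation photos", timestamp=3, engagement=12),
    Post("outrage bait",    timestamp=1, engagement=950),
    Post("local news",      timestamp=2, engagement=40),
]

# Chronological feed: the old outrageous post sinks as newer posts arrive.
chronological = sorted(posts, key=lambda p: p.timestamp, reverse=True)

# Engagement-ranked feed: the outrageous post stays pinned to the top
# for as long as people keep reacting to it, which in turn attracts
# more reactions (the perverse incentive described above).
engagement_ranked = sorted(posts, key=lambda p: p.engagement, reverse=True)

print([p.text for p in chronological])
print([p.text for p in engagement_ranked])
```

The point of the sketch is that nothing about the content changes between the two feeds; only the sort key does, and that single choice determines whether outrage ages out or compounds.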
However, for the last 10 years or so, grievance culture has taken root, and its proponents have not only excused outrage but actively encouraged it.
It makes me think of that scene in Star Wars where Palpatine is like "good, good, let the hate flow through you", except we now have millions of people encouraging this.
How I wish we could rewind to a world where forgiveness was still a virtue and we were all taught that sticks and stones may break our bones but words will never hurt us. Without such virtues, a world of outrage is inevitable.
Let's not underestimate the degree to which 'likes' (social affirmation ersatz) are eliciting the worst in people.
It's unfortunate that when you build a product so close to the ground of human communication and human nature you're never going to be able to get everything right, and you're no longer solving technology problems alone but trying to basically combat basic human moral failing itself. We don't ask that of the telephone company.
^ That being said, we can only excuse some of their failures with the above line of thinking. Others we can blame on greed or recklessness, or ignoring the social costs of something like ML recommenders optimizing for engagement. Not sure if those things deserve to be called evil, but I'd still hold back personally. Misguided, overcome by greed, or reckless, perhaps.
The "angry facebook" experience to me seems like the moms against heavy metal / twisted sister case: People are seeing a reflection of what their peers share.
If their circles are angry and share disinformation, that's what they will see.
It is about the circles.
When will this “just luck” characterization of Zuck die? His entire company was certain they should sell for $1B, and most executives resigned when he didn’t. He maneuvered control of the majority of voting shares, how many other founders have done that? Instagram and WhatsApp were genius acquisitions everybody at the time clamored were too overpriced. Even Oculus has turned out to be the leading VR platform. All of the people close to him attest to his extreme intelligence.
Whether malicious or not, Zuck didn't just "aw shucks, I got lucky" his way into being the majority owner of a $1T company, c'mon…
Except they are just playing around with the outrage algorithms; the problem is created by Facebook, not some natural occurrence. If they wanted to "fix" anything, they would make their algorithmic timelines opt-in, or at least optional, for starters.
It is of course very much in the interest of the people working at Facebook to make this seem like a problem that is just there, that it is somehow "difficult to solve", that "moderation doesn't scale", etc. These are deflections to make everyone ignore that Facebook's tampering is where it starts.
As far as I know, YouTube just recommends based on what you've watched or searched (you can disable this aspect by clearing or disabling your histories), channels you have subscribed to, (I believe) videos you have liked or commented on, and videos you have marked as "Not interested -> I do not like this video".
Is Facebook's as "viewer driven"? Or does it recommend based on other criteria, e.g. what's generally popular?
However, no matter how capable and ethically sound they are, the incentives are forever misaligned with the profit models of both companies, and adtech overall, as it currently stands. Truly good people can chase money and hope to do good things in the process. It's as easy as that.
The writing was on the wall when Alex Stamos, by all measures the best example of the type of person you're referring to and FB's chief security officer, left. He started in 2015 and was out of there by 2018. Not many C-levels walk away from a job like that for the reasons he did, and when they do, that is the event to pay attention to (looking at you, Sheryl "Lean In" Sandberg). This was the marker if people were looking.
If by "established" you mean that it's well known, then yes, you're right. If instead you mean that it's agreed-upon or widely accepted, you'd be wrong. There's a lot of great debate / critique, both about how well the phrase actually applied to Adolf Eichmann himself (Arendt was famously only at the trial for like 5 days), and whether evil in general is ever, in fact, all that banal. Sadly the conversation around "the banality of evil" hasn't received a fraction of the attention that the phrase itself has.
Banality of evil is about how ordinary people can work on evil things while not being sociopaths and still being considered ordinary people. But it also presumes that there is some truly evil, sociopathic force driving this through authority, such as Hitler himself in the case of Eichmann.
On the other hand the parent poster is saying that Facebook is simply too big to not end up evil, that evil is an emergent property of the million different processes that is Facebook. That view absolves not only regular workers of Facebook who are helping the company achieve evil things, but also the people who are actually in control of the company – Mark Zuckerberg and his senior executive team.
Personally I'm not buying either of these absolutions, but especially not the grand universal absolution that the parent poster affords to the whole company.
Ultimately it is someone's decision to put profits above everything else. Engagement doesn't excessively optimize itself. Users' contact books aren't getting stolen by themselves. Shadow profiles don't fill up themselves. "Just doing my job" is a choice, not an excuse. Many people are complicit in making and implementing these decisions for their own benefit, and they are all responsible for the outcome.
There are some amazing people on their safety and moderation teams. They're also fighting marketing algorithms, I'm sure.
It's essentially Arendt, a Jewish exile from Berlin who fled the Holocaust, wrestling with her realization that Eichmann, who organized major portions of the Holocaust, wasn't a psychopath but a completely mundane, thoughtless, career-focused bureaucrat: someone trying to rise in government, who believed in doing what you are told, and who organized one of the most evil acts in human history without reflecting on what he was doing.
A lot of people that work at slaughterhouses do so because they have no other choice. It is the best opportunity that's afforded to them. It is a job that causes trauma for many, often has long, grueling hours, and doesn't pay well.
Working at Facebook couldn't be further from that situation. Set aside the obvious perks (the tech, the white-collar work, the gourmet food, the rumored wood shop where you can do woodworking on your break, the half-million-dollar salary, and so on). The overwhelming majority of these people have the whole world of job opportunities to choose from, if they're willing to take a pay cut from an INSANELY HIGH salary to a merely VERY HIGH salary.
So in that sense, they couldn't be further away from working at a slaughterhouse. The fact is, they could quite literally work anywhere else (any other company or any other city/country with remote work now), and they choose not to. It's not desperation but the textbook case of golden handcuffs.
It's very, very difficult to say no to 500k a year. I'm not even sure I could say no if I were in that position. I'd probably tell myself "Just coast for two more years and both my kids won't have to pay for college" or something like that, and keep going.
Don't fall for words!
Frances Haugen was able to see the big picture. The documents she presented had Facebook employees mentioning it. Facebook didn't act on what was known. It is not that it wasn't known.
To paraphrase John Roberts - the only way not to do a thing is not to do that thing.
And it sure seems an intentional part of their fig-leaf denial strategy - viz. the recent revelations about human trafficking on FB in Arabic [2], or armed groups in Ethiopia inciting violence on FB in ways that FB chooses not to monitor because of language issues [1].
A company with 21Q2 revenues of $28.5B can't hire moderators for languages spoken in countries with low costs of living? It reflects a thirst for growth with no thought given to the people affected by that growth.
[1] https://www.reuters.com/article/us-facebook-languages-insigh...
[2] https://www.wsj.com/articles/facebook-drug-cartels-human-tra...
There's also another concept: the reality that people do not actually care as much as we think they do. Nearly every public school in the U.S. has a program where kids run at each other at speed, knock each other to the ground with concussions, tear their muscles, break their bones, and behave terribly toward one another. Yet every school still has said program, and parents encourage their kids to join. We just don't care about what's right.
Agree, but I think we should look at guns first then. Invented to kill, and yet we let people mass buy them for fun
Internally at FB, everything looks good: you hit all your OKRs and believe users are better off. Maybe you don't, but your bonus is huge, so you put your head down and keep on. Externally, it's an entirely different picture. Connecting people is a comically small issue society needs FB to solve, relative to our need for them not to harm children, promote extremism, or hide research when testifying to Congress.
> so while the individuals involved have good intentions with what they are working on, the sum total of all employees' intentions ends up broken.
I think, honestly, that a huge factor is that when you put together basically the entire internet, and society, into a giant conversational feedback loop, you're bound to spin out the worst, especially if FB wasn't trying to filter it 100% of the time (which they weren't, because it's a business and the problems weren't always equally known).
Misinformation has been around forever. Ever play the telephone game in school? You tell one person a story, they tell the next, and the next, and soon that story is no longer factual. Stir all that in with bias and things get even murkier!
This is a generally hard problem but it's as significant now as it was in the aftermath of WWII. I'd say it speaks to the reality of human subjectivity, and it never goes away: I can only wonder if the same will be true of AI, and whether it's possible for a thinking being to really internalize the concept of hard limits to their perception, and build that into their model of the world.
You could say the God concept is a way of trying to internalize the limits to perception: 'something is vastly significant and it's not me, and my understanding does not and cannot encompass it'.
With OR without this concept we as humans are exactly as evil as each other. That's the secret. There isn't a qualitative difference between 'us' and history's great monsters. It's about the choices we've made and how we've acted on them: the rest is rationalization, which we are all subject to in one way or another.
Grappling with this is the Nuremberg moment: the question is 'never mind whether you feel you've been good, what have you done?'
So, what have they done?
I'm not saying they are blameless; I just always have a tough time laying all the blame on a couple of people.
It's so big and so organized that they can come up with an idea for a new service or policy and get all the channels to approve and move forward on it in roughly two weeks. Implementation is different; this is just getting the approvals from legal, finance, marketing, etc.
They are definitely in a position to make changes quickly should they need to.
Imagine if a company had invented methamphetamine, but the ill effects weren't as readily apparent. Then they built an empire on the belief that the societal benefits of millions of people running around in a seemingly ultra-productive manic state were a godsend to society, and that they had truly changed the world. Then realize that the effects of Facebook are worse than that: it has the opposite effect on productivity, has maybe worse mental health effects, and is nevertheless highly addictive. The reality would never sink in inside that bubble. Worse, the tens of thousands of people whose jobs and wealth depend on tuning said meth to be as addictive as possible are... what? Pawns? Believers? Accomplices? Delusional? Regular people. They are regular people.
https://en.wikipedia.org/wiki/The_Corporation_%282003_film%2...
"The Corporation attempts to compare the way corporations are systematically compelled to behave with what it claims are the DSM-IV's symptoms of psychopathy, e.g., the callous disregard for the feelings of other people, the incapacity to maintain human relationships, the reckless disregard for the safety of others, the deceitfulness (continual lying to deceive for profit), the incapacity to experience guilt, and the failure to conform to social norms and respect the law"
If you can't do what you do in a way that isn't this harmful to the world, then you always have the choice to just stop it.
These are all smart people. They could be working on anything else and be successful at it. But they are scared of change. The money is too good.
I just wish the decent people working at Facebook would leave and go work somewhere else.
And that we stop debating whether Zuckerberg is redeemable. He is not. He is a psycho. He is why it escalates this much. He is a monster. Beyond all the lies, he intends the damage he does to the world. Maybe someone bullied him as a child. Maybe he is just not well. I don't know.
I also think the people who make the biggest show of how much they care tend to be the same who don’t actually act in a caring way at all.
No, I’m not surprised at all that FB employees say they really care. And that they do so very convincingly.
My actions can show otherwise, i.e.: I'm going to go eat a slice of cake and sit down and watch TV tonight.
My priorities and actions don't appear to be in line, and the result I'm going for won't be met.
Who are these people, to be bold enough to speak for "everyone"? You are definitely not speaking for me. I personally get a lot of value from Facebook. I've never had any problem with it in any respect. I use it to communicate with my family around the world; I've used it to rent my apartment and sell things on Marketplace; I keep in touch with people I know. And I have very thoughtful and enlightening political discussions that help me make the right choice about whom to vote for and stay informed. (The only place with better discussions is Hacker News, though.)
https://www.facebook.com/?sk=h_chr
I had to go into my groups settings and unfollow them all individually though:
https://www.facebook.com/groups/feed/
Same for pages:
https://www.facebook.com/pages/?category=liked&ref=bookmarks
I half agree. I do in fact think it's been baked in from the get-go; it's just that there was a period where it was not an obvious pillar. You could in fact do all kinds of essentially innocuous things and accept some surveillance capitalism with awareness of your own liabilities.
It's now become so much larger and more problematic that the 'emergent property' is that every move adds weight to the need for the firm's dismemberment into smaller units, or punishing regulatory limits. And I mean truly brutal, snap-noise-making, bone-breaking regulations.
"So maybe Zuck is telling the truth here, that they are trying to fix all this."
Nope. He knows that if he wants to stay on top, he has to keep doing more of what he's done; his only other choice is to actually adapt, which he will not do.
For instance, contrast Zuckerberg's statement here:
> And if social media were as responsible for polarizing society as some people claim, then why are we seeing polarization increase in the US while it stays flat or declines in many countries with just as heavy use of social media around the world?
With such severe under-resourcing and deprioritization that one person had the literal weight of worldwide election integrity on her shoulders, as revealed more than a year ago: https://www.buzzfeednews.com/article/craigsilverman/facebook...
> The memo is a damning account of Facebook’s failures. It’s the story of Facebook abdicating responsibility for malign activities on its platform that could affect the political fate of nations outside the United States or Western Europe. It's also the story of a junior employee wielding extraordinary moderation powers that affected millions of people without any real institutional support, and the personal torment that followed.
Haugen echoes the same in https://archive.is/tQwE9 :
> She soon grew skeptical that her team could make an impact, she said. Her team had few resources, she said, and she felt the company put growth and user engagement ahead of what it knew through its own research about its platforms’ ill effects.
The fact of the matter is that if Zuckerberg were to say "I'm going to pour our profits into trust and safety and abuse avoidance in order to ensure that our position as a trusted brand is sustainable for generations to come," his high levels of voting control and clear defense to any allegations that this was against long-term shareholder interest would fully make that possible. The fact that quite the opposite has happened should be considered with much more weight than his words in a reactive press statement.
A company is not a democracy.
Indeed, we have probably just seen one of the (former) employees with good intentions struggle to stay true to them.
It is the structural incentives in the system that cause the problems.
So yes, evil and discrimination can be an emergent problem even though no individual intends harm.
Then you might also have bad actors who exploit those structural incentives/weaknesses.
Maybe they are trying, but also maybe they are trying to have their cake and eat it too.
What I mean is that very likely the proper way to fix things would financially hurt FB, which seems to be something they really don't want to do.
http://www.aaronsw.com/weblog/bizethics
Yea, maybe... they _are_ cashing in billions in the meantime.
I have no empathy for them. They bring out the worst in Humanity. They build walled silos of festering hate and anger, all driven by "user engagement", "hours on site" and money.
Just remember that rationality is bounded, i.e. there are problems that chimps with six-inch brains can't solve. It's the classic Jurassic Park story, where man says he can control anything, and then realizes he can't. By which time it's too late.
This is why the road to hell is paved by "good people who have kids" with their good intentions.
FB's issues did not appear yesterday.
Like the endless war, the issues were there right from the start. So why are we talking about them today? Because lots of good people didn't do anything - not because they aren't good or skilled, but because the problem is too complex for them.
This is where Bounded Rationality helps resolve issues. If the problem is too complex, pick a simpler problem.
This is hard for some chimps to do, for various reasons. So entertaining them is a recipe for disaster. Their narrative will always be: "People are good. People experienced World War 1. They know what's at stake. They lost family, friends, body parts. Many are great heroes. Trust them. They know what they are doing." And still we got World War 2.
Why? Because rationality, skill, and experience don't matter for some problems. All the "good Germans", from politicians to religious leaders to military and intelligence leaders, knew Hitler had to go long before any notion of war entered their minds. Every coup and assassination they plotted, they second-guessed themselves. All of them ended up dead.
https://www.buzzfeednews.com/article/ryanmac/growth-at-any-c...
(Note: "Connecting people" here does not mean providing communications services, it means using behind-the-scenes, unconsented, and sometimes deceptive tactics to figure out whether and how people are connected to each other IRL.)
Andrew Bosworth June 18, 2016
The Ugly
We talk about the good and the bad of our work often. I want to talk about the ugly.
We connect people.
That can be good if they make it positive. Maybe someone finds love. Maybe it even saves the life of someone on the brink of suicide.
So we connect more people.
That can be bad if they make it negative. Maybe it costs a life by exposing someone to bullies. Maybe someone dies in a terrorist attack coordinated on our tools.
And still we connect people.
The ugly truth is that we believe in connecting people so deeply that anything that allows us to connect more people more often is de facto good. It is perhaps the only area where the metrics do tell the true story as far as we are concerned.
That isn't something we are doing for ourselves. Or for our stock price (ha!). It is literally just what we do. We connect people. Period.
That's why all the work we do in growth is justified. All the questionable contact importing practices. All the subtle language that helps people stay searchable by friends. All of the work we do to bring more communication in. The work we will likely have to do in China some day. All of it.
The natural state of the world is not connected. It is not unified. It is fragmented by borders, languages, and increasingly by different products. The best products don't win. The ones everyone use win.
I know a lot of people don't want to hear this. Most of us have the luxury of working in the warm glow of building products consumers love. But make no mistake, growth tactics are how we got here. If you joined the company because it is doing great work, that's why we get to do that great work. We do have great products but we still wouldn't be half our size without pushing the envelope on growth. Nothing makes Facebook as valuable as having your friends on it, and no product decisions have gotten as many friends on as the ones made in growth. Not photo tagging. Not news feed. Not messenger. Nothing.
In almost all of our work, we have to answer hard questions about what we believe. We have to justify the metrics and make sure they aren't losing out on a bigger picture. But connecting people. That's our imperative. Because that's what we do. We connect people.
When this statement leaked, he made the bullshit claim that he was playing devil's advocate. He certainly wasn't. This post was made at the same time as another leaked one about messenger adding a deceptive interstitial to get people to agree to share their number and contacts with FB.
It seems that the author is operating under the assumption that if everyone is inside of their product, the world won't be fragmented anymore. People will be connected.
Yes. They will be connected. To the product.
We can do better than this dreary future. It is possible to connect people as peers, without the exploiting hands of intermediaries like the executive who wrote this statement.
Though the statement seems well-meaning, etc., it is weaselly and manipulative. It also conveniently doesn't address some of the deeper issues from Frances Haugen's testimony.
For example, Haugen focused on the fact that Zuckerberg has created a relatively flat organization, where if decisions help the core metric they must be good, and vice versa. Haugen testified that Zuckerberg was made aware that instituting a newsfeed tweak would entail a) a small ding to the core engagement metrics and b) decreased violence in Ethiopia... He chose the metric over the decreased violence.
There comes a point where blindly pursuing metrics -- be it money or engagement -- without regard to the effects on society becomes hard, if not impossible, to distinguish from sociopathic behavior.
Also, let's not forget that researchers and renowned statisticians employed by / sponsored by Big Tobacco (e.g. R.A. Fisher) convinced themselves that smoking didn't cause cancer. [0]
[0] https://pubmed.ncbi.nlm.nih.gov/2000852/
- Social media can't cause "polarization" because the measurements of that are going down in most of the world, except the USA. But social media is heavily used everywhere.
- It makes no sense to claim an organization doesn't care about X when it heavily funds research into X.
- If you react to a company researching the harms of its products by leaking everything and publicly accusing the company of being evil, other companies will simply not do research into the harms of their own products.
The second two are just logic. The first would benefit from a citation but I'll take his word for it.
Yeah, we know you don't know, because you're looking at it from on top of a mountain of 100 billion dollars, Mark. There isn't a single damned thing that can change your picture of it being the absolute greatest thing ever.
> We care deeply about issues like safety, well-being and mental health.
What you care about, and what you say you care about, are nothing compared to your actions.
> It's difficult to see coverage that misrepresents our work and our motives.
We don't know your motives, other than the obvious ones. We know your actions.
> At the most basic level, I think most of us just don't recognize the false picture of the company that is being painted.
That's because you live in a big dumb bubble where chat apps are somehow world-changing innovations and creepy stalker behavior is completely fine to you. You are out of touch and people are screaming it at you. You think you are entitled to encroach on everyone's private lives, intermediate on every interaction and mine it for vulnerabilities to auction off to advertisers. Your entire model of the world is broken, Mark. No wonder nothing makes sense.
Stopped reading after this point. I'm sick of billionaires with megaphones blaring their virtues.
As a society, the US has shifted its values from intellectually sound principles to whatever rich people shout out.
I vomit in my mouth when I see videos of people flaunting currency, of people talking to you about "doing the hustle", etc.
The US has fallen into an abyss of moral decline, where the value of your words is directly proportional to the amount of wealth you have managed to gather, no matter the means.
Nothing exemplifies this as much as the whole situation surrounding public transport in the US.
A topic that's generally scoffed at "Everybody has a car, why would the US need a high-speed rail network?!"
At least until some billionaire presents his newest "innovation" by putting people in some pipe or another and allegedly making them go 600+ mph with magic inertia dampeners, then everybody loses their collective poop over this amazing idea, by that amazing entrepreneur!
Then they end up with a bunch of cars being driven through a tunnel, still no high-speed rail, but can't wait to chase after the next billionaire promising to shoot people through tubes at deadly speeds.
Would be excellent satire if it wasn't actual reality.
When has this not been the case? Politicians in America are traditionally wealthy people.
+100
This has been the consistent picture of Mark from the very beginning of Facebook. ‘Stupid fs gave me their info’
As titzer said, the actions speak louder than the words and Mark’s actions over the years have never varied from their hyper focus.
For creating something from nothing. So if he has 100 billion, it's just a tiny fraction of the value society got from him in return. I wish there were more people like Zuck, Elon Musk, etc. These are the people who advance society.
I read so much hatred towards rich people here instead of praise for them; it somehow gives me the chills to know there are so many people around me who are full of baseless hatred to the point that they are "sick".
That's a point I've made myself before, and I think there is some truth to it. Social media can't explain the high and increasing degree of US political polarisation, because other countries (including other English-speaking countries) consume social media about as much and yet have less polarisation, and don't show the same degree of increase in it either. The real explanation must lie in other aspects of US culture, or the US political system.
The polarization is still there, but spread thin amongst various factions.
In the US, people are shoehorned into R or D.
Edit: I would also like to point out that the OP is a bit confused between cause and effect. In the US, the effect is deep polarization. However, the cause is the power of mass communication, especially misinformation and blatant lies, that FB enables and does not bother to control. The cause is common to all the countries in the world; the effect varies due to various other factors, one of them being the presence of a multi-party system.
Take the example of India. FB has a large and active user base. However, India being a chaos of various identities, cultures, regions, languages, etc, divisions in society are less pronounced as there are a large number of players (politically, regionally, locally, etc)
Other than that, the effect of FB in Europe is also less visible for the same reason. Every EU country has a mostly multi-party system, leading to the hate and focus being spread thin.
First-past-the-post voting leads to a two party system which leads to more polarization. If you have multiple political parties, polarization can only go so far because there will be centrist groups working against polarization. In the US, there is no force pushing for centrism.
There is absolutely no way either political party will allow a multi-party system to ever exist.
If you want a culprit, blame the lack of whipping that both US parties do as a matter of political culture. Since you have historically had all these cuckoo crazy subgroups within both parties, it has meant that other parties simply do not have the bandwidth to exist.
However, in the last four years, the Republicans have started to blindly follow their leader as a matter of pride. I see that as the one positive change the Trump era has brought; yes the man to bring it about has been a disgrace, and has used it for vile purposes, but if both parties stick to following their political platforms and leaders far more, the US can end up in a better place.
Regarding advertising, for companies like Facebook, they may have billions of DAU but still derive the majority of their revenue from rich countries like the US. In 2020 the US and Canada were 45% of Facebook's revenue according to this random website I found: https://statstic.com/facebook-revenue-by-geography/ That's way more than the other regions in terms of $/user, so Facebook is a lot more incentivized to over-optimize for engagement based on US users. Depending on how much the algorithmic feed changes from country to country, it's possible that other countries experience less polarization simply because their culture is weighted less in training the feed's engagement optimization algorithms.
Regarding disinformation and political destabilization, most countries simply aren't relevant enough on the world stage to be worth investing money in targeting in this area. The US, China, Russia, UK, Germany, France, Japan, India are all probably relevant enough. China and Russia effectively don't use Facebook and are the most obvious non-US-aligned bad actors. They would also get way more bang for their buck targeting the US than other countries. Note, I don't think this is as widespread a problem as many people think it is, but I bring it up because it's relevant in the context of political polarisation since there is strong evidence that it occurred at least in the 2016 election: https://www.theguardian.com/technology/2017/oct/30/facebook-....
Actually, the two combined can be scary. If you can use outrage (great for engagement) to drive engagement in your content which is designed to politically destabilize the US, you can get a huge reach. This is effectively what you see a lot of the time in highly-engaged US content on facebook anyway: politically inclined outrage.
Social media has downsides, but to lay the blame firmly at the feet of Facebook is to willfully ignore the culpability of CNN, Fox, the NYTimes, Washington Post, and many other legacy media outlets that are making tons of money otherizing the part of the country that is not their readership.
Doesn't entirely contradict your position, given that it was specifically measuring polarisation in terms of attitudes towards political parties, and so may not be good at measuring forms of polarisation that do not map straightforwardly to political parties; and looking at long-term trends over 40 years doesn't tell us much about how people have reacted to something which has only happened in the last 18-24 months.
The study failed to find any statistically significant correlation between political polarisation and proxies of social media use (Internet penetration and online news consumption; the study authors did not have data on social media use itself)
[0] https://www.nber.org/system/files/working_papers/w26669/w266...
But the fact is that this isn’t the defense Zuckerberg thinks it is. In fact, it may even suggest the absolute opposite.
Facebook has never been as popular outside the US as it has within it. The best indication of this fact is FB’s $19Bn purchase of WhatsApp which was largely driven by the fact that FB Messenger was basically only popular within the US, with the rest of the world preferring WhatsApp, which was also an indication of how FB’s network was simply not as entrenched outside the US as it was within it.
Why? My theory is that data mining and software are identifying and targeting seams of ideology that are most readily influenced, so in effect campaigning efforts are efficiently widening the divide between parties. Social media just happens to be the choice source of this data, as well as the medium of influence.
The core idea behind social media is that it amplifies various voices, instead of only those few whose job it is to participate in the media. In the US, this has effectively turned our right to freedom of speech into a right to freedom of broadcast.
The total size, wealth, and political influence of the US are much larger than the most culturally comparable nations. The value for those within and without the nation to spend effort using social media to influence discourse and election results is high, and is likely at least a partial factor.
>In Poland, the changes made political debate on the platform nastier, Polish political parties told the company, according to the documents. The documents don’t specify which parties.
>“One party’s social media management team estimates that they have shifted the proportion of their posts from 50/50 positive/negative to 80% negative, explicitly as a function of the change to the algorithm,” wrote two Facebook researchers in an April 2019 internal report.
>Nina Jankowicz, who studies social media and democracy in Central and Eastern Europe as a fellow at the Woodrow Wilson Center in Washington, said she has heard complaints from many political parties in that region that the algorithm change made direct communication with their supporters through Facebook pages more difficult. They now have an incentive, she said, to create posts that rack up comments and shares—often by tapping into anger—to get exposure in users’ feeds. The Facebook researchers wrote in their report that in Spain, political parties run sophisticated operations to make Facebook posts travel as far and fast as possible.
>“They have learnt that harsh attacks on their opponents net the highest engagement,” they wrote. “They claim that they ‘try not to,’ but ultimately ‘you use what works.’ ”
>In the 15 months following fall 2017 clashes in Spain over Catalan separatism, the percentage of insults and threats on public Facebook pages related to social and political debate in Spain increased by 43%, according to research conducted by Constella Intelligence, a digital risk protection firm. Facebook researchers wrote in their internal report that they heard similar complaints from parties in Taiwan and India.
^ https://www.wsj.com/articles/facebook-algorithm-change-zucke...
Add to that the revelations from yesterday's^^ hearing regarding FB's role in violence in Myanmar and Ethiopia plus repression in the PRC and Iran, and there is no other interpretation than that Mr Zuckerberg is lying.
^^ https://www.cnn.com/2021/10/05/world/meanwhile-in-america-oc...
Their main results are on the page numbered 20 (page 21 of the PDF) – their data shows political polarisation has grown (over the period 1980-2020) in six OECD countries and declined in six OECD countries. The six countries where it has grown (in order from greatest to smallest growth) are the US, Switzerland, France, Denmark, Canada, and New Zealand. The six countries where it has declined (in order from greatest to smallest decline) are (West) Germany, Sweden, Norway, Britain, Australia and Japan.
They don't have any direct measures of social media use in their source data; the closest things they have are Internet penetration and consumption of online news, but they found no statistically significant correlation between those and polarisation. The only clearly statistically significant correlation they could find was a positive correlation between polarisation of societal elites and that of the general population (p=0.011). They also found a positive correlation between increasing racial diversity and political polarisation which was of borderline statistical significance (p=0.052).
It requires considerable expertise and resources to spin up disinformation campaigns continuously. The US leads here because of the size of its market and the determination of its adversaries.
Mark tries to spin the responsibility away in the quoted sentence.
That said, I'm not a historian so take my shower thoughts with a teaspoon of salt.
A propagandist-for-hire choosing to astroturf, conduct false flag campaigns, form vote brigades, etc. is more likely to target the US political market because -- globally speaking -- there's more political power at stake.
Whether FB can claim that as an excuse is another question.
The semantic acrobatics in statements like this drive the discussion into pedantic details that successfully lose sight of the fact that, on net, this is a cancerous product, with a pathology unique to each culture it touches.
Like, there have been leaked FB PM discussions on this very topic, phrasing it as a product challenge versus the moral calamity that it is.
[0]https://gizmodo.com/facebook-still-working-on-the-whole-geno... [0.5] https://www.nytimes.com/2018/10/15/technology/myanmar-facebo...
Something that's quite observable in places like Germany: Election campaigns in Germany used to be quite boring, there used to be no such thing as "political attack ads", parties kept their campaigning to topics they stand for, instead of trying to attack other parties or candidates on their particular positions.
At least that's how it used to be for the longest time, but during the last decade the whole tone around German elections became noticeably more hostile, something that directly correlates with the rise in popularity of the AfD.
Said AfD hired the American firm Harris Media for their campaign strategy, the same company that won Trump his presidency [0]. They not only introduced the wonders of micro-targeting, which already contributed to the Obama presidencies, but also added their American flavor of running political campaigns with extremely hostile overtones.
The latest highlight of that escalation, during the recent election, has been far-right parties literally calling to "hang the Greens" [1]
Or to give another, often overlooked, example; The US wasn't the only country that saw their capitol stormed in recent times, an attempt was also made in Germany, but there police actually stood their ground [2]
And while most of the big established parties condemned what happened there, the rising far-right ones did nothing like that, they were right there riling people up.
[0] https://www.bloomberg.com/news/articles/2017-09-29/the-germa...
[1] https://www.swissinfo.ch/eng/german-court-orders-removal-of-...
[2] https://www.bbc.com/news/world-europe-53964147
It's one of those things that says a lot about the people on HN and their tolerance for a wide range of opinions. And yes, we all agree that rudeness, disrespect, and violence have no place on HN. But disagreeing on tax policies? Property laws? Patents? Let people talk about it.
I really don't see it different than anti-liberal hostility on conservative forums. It's the same. Exactly the same.
Let's change this and exercise some restraint. Most people on either side of the fence act in good faith.
(I’m influenced here by Naomi Oreskes and Erik M. Conway’s work in “Merchants of Doubt.”)
Of course, I wouldn't say that being aware of the problems your product causes makes you less culpable; just the opposite.
At the same time, I think that when it comes to assessing whether the company is acting in good faith or not, it is important to consider not just what research they do or don’t do, but also which findings they choose to promote and which they choose to keep internal…
(Part of the fiasco with tobacco or climate is that those companies knew the risks associated with their products well enough, but didn’t publicize the findings that might cast them in a bad light in the same way that they publicized the findings that supported a “let’s wait and see” approach to policy. See for example https://www.scientificamerican.com/article/exxon-knew-about-...)
Facebook has a tough problem, though, because a lot of their criticism is not just one wing of loonies, it's from every direction including normal sane people, and a lot of them agree with each other for once. I don't think they're breaking any laws, but I don't doubt someone will invent a shady case or charge them under a new law just because everyone hates Zuck. Not sure how you dodge that.
They could split/diversify/dilute the brand/products the way Google has done, making it harder to associate with the bad things directly.
I mean...precisely to be able to control that research. Exactly like companies in other sectors (tobacco|oil|pharma|AI). It's asinine that this is presented as a rhetorical question [1].
For those (myself, as well) comparing portions of this response to tobacco, oil, and pharma companies controlling (and manipulating/suppressing) research, an example to keep in mind that's much closer to home would be Google censoring negative AI research [2].
[1] See for example, The Influence of Industry Sponsorship on the Research Agenda: A Scoping Review
https://www.ncbi.nlm.nih.gov/pmc/articles/PMC6187765/.
[2] 'Google told its scientists to 'strike a positive tone' in AI research - documents'
https://www.reuters.com/article/us-alphabet-google-research-...
Someone on HN actually linked to the slides, and it does seem like there's a lot of FUD being created.
This reminds me of moxie talking about damaging research focusing on Signal:
> To me, this article reads as a better example of the problems with the security industry and the way security research is done today, because I think the lesson to anyone watching is clear: don't build security into your products, because that makes you a target for researchers, even if you make the right decisions, and regardless of whether their research is practically important or not. It's much more effective to be Telegram: just leave cryptography out of everything, except for your marketing.
source: https://news.ycombinator.com/threads?id=moxie&next=16117487
Facebook's whole history is not really caring about those things. What they care about is not looking bad. Which is why they trot out those lines when things get too awful for the public, the press, and legislators to ignore.
"We are so, so sorry you caught us {doing, allowing} this thing. We are deeply embarrassed that we didn't hide it well enough from you. We promise to take the time to do the work so that in the future you won't discover us still doing it. We deeply value {word spew of the month} and look forward to you believing our apology well enough to continue using our service."
He probably does care about "safety, well-being and mental health" (though not about serial commas, apparently). He cares about them in the same way that the vaping companies care about lung health (they exist to reduce cigarette smoking, right?).
This note is a rant against the whistle-blower, without even an empty apology.
^ After niceties, I started with a quote from his post which was false, and the explanation from reliable news sources of why it was 'false news - politics.'
Honestly, that seemed to be quite a sufficient summary of whatever the text was, to a great extent.
Also, the fact that much of the research / “leaks” shows most of the issues are only happening in the US is sort of a glaring hole in most of the arguments being made against FB. The US is just borked at the moment. Everything is red vs blue, and no one even cares what the issues are at this point.
Oddly enough, I've felt for a long time that Facebook is among the most unethical companies out there. It's very weird to see it facing a potential reckoning on the back of such incredibly weak claims.
Well, to start, Facebook is neither good nor bad - it is not a person and does not have morality. The human mind is not really capable of dealing with an abstract entity such as a corporation, so we anthropomorphize the large mess of people, software, and hardware presenting us a little app on our phones as imbued with a personality and morality. Then we debate whether this mess, which has Zuck's vague face over it, is a personification of good or evil. It's neither - it is a morality-free phenomenon.
Facebook's properties have benefits for certain slices of the population in certain instances, and negative effects for other parts of the population. Businesses rely on Facebook's properties for communication, people rely on Messenger and especially WhatsApp for communication, artists rely on Instagram for inspiration and social outreach, teens use the properties to connect with each other and to feel sad or happy about themselves in social contexts (as teens tend to anyway). In some cases the teens are happier, in others sadder.
I got to know my gf over Messenger, I have wasted tons of time on Facebook, Instagram has been a medium influence on me. Life happens, people happen, and now social media happens.
Zuckerberg is the public face of the organization now and it is his life's work. He has an entire society's worth of social media interactions he is tasked with controlling, censoring, and maintaining. People tell him they want it free for some speech, closed for other speech; they want Facebook to hire tons of people to censor and curate the content; they want great decision-making; they want to be free to say anything they want to say (so long as it matches their specific moral values); they want their kids to only see good things online; they want their kids to find what they need but see only what they should see on all of Facebook's multifaceted properties. It is a huge task.
Why would Zuckerberg intentionally want to fail at this task and have people hate his company? To make more money? He is not an idiot and knows that the success of the company is dependent on its reputation. How exactly could he NOT do his utmost best to mitigate the negative effects of his properties, enhance the positive effects and thereby positively influence everyone - the users, the company, himself? I don't understand the cynicism which drives all these comments either
> Why would Zuckerberg intentionally want to fail at this task and have people hate his company?
Nobody wants to fail at anything. Doesn’t stop people from failing all the time. Nobody wants to get fired but if you suck at your job you get fired all the same.
You’re approaching this way too logically without considering the social aspects. Stop trying to think like a robot.
Great. Zuckerberg also bullied the Instagram co-founders out of the company, ditto Whatsapp, and they've routinely lied about both
If FB shut down today, there would soon be another one occupying exactly the same space, having the same influence on society.
Services that might not require you to expose your real identity to the whole internet, that might not exfiltrate your full contact list from your phone or would not apply questionable morals and censorship on the content you exchange with your loved ones.
A messenger app brings people together. Such an app doesn't need an algorithmic feed to hook people on junk/harmful content. That was added for one reason, money.
> In fact, in 11 of 12 areas on the slide referenced by the Journal -- including serious areas like loneliness, anxiety, sadness and eating issues -- more teenage girls who said they struggled with that issue also said Instagram made those difficult times better rather than worse.
All the reporting has been about how the research found that Instagram was so terrible for teenage girls, but that seems to be a total mischaracterization. Honestly, it seems like if you ask teenage girls about anything (clothing stores, schools, television) there's going to be a mix of positive and negative experiences. Is the bar we are holding Facebook to that no matter how much good they do, any negative experiences outweigh it? Is that a bar we would hold anything else to?
I have a kind of unpopular belief that we'd still have similar polarization if misinfo were just distributed on Fox + MSNBC, websites, and Twitter.
I’m prone to believe that, but also not sure that it has anything to do with the other issues at hand.
I have never signed up for Facebook but if my family members have ever "uploaded their phone contacts to find more Facebook Friends" or if my relatives have ever uploaded pictures with me in the picture ((Facebook has detected a face we don't recognize. Please tag the person in this picture)), or if I have a US Government Name and/or Social Security Number, there's a company that is building a DATABASE DOSSIER with information about me. I cannot ask them to please DELETE information they have about me because those Shadow Profiles are SECRET, I'm not even supposed to KNOW that these creepy data-scrapers have a file on me.
Facebook is just the front end for the NSA / surveillance state backend, and at this point it's frightening how Too Big to Fail they have become.
When you're designing a Big Brother dystopia in which everyone is tracked and surveilled, step one is to create something like Facebook. "All your friends are doing it! You should too!"
The impact of Facebook on America is on a completely different playing field from us.
A non-trivial portion of the country doesn't have that mindset. Further, many in that group lack the understanding of FB's ability to influence wide-spread opinions.
In the context of the whistleblower:
One of the popular methods of manipulation is to use "harmful to children" as a basis for making an argument. We have seen this countless times in the past on a variety of issues. This is no different. The "harms" that are being highlighted here are equally, if not more, applicable to adults. Children are at a stage in their lives where good parenting can easily offset any potential harm by consuming content on Instagram or Facebook.
One can recognize that this particular topic, like many contemporary topics, is a subset of the overarching libertarianism versus authoritarianism debate, and opinions often cleanly fall on political lines depending on the complaint. In this case the whistleblower has left-of-center politics, so they have a grievance with "disinformation" and "not enough control". There have been previous whistleblowers who have had right-of-center politics, who have cited "censorship" and "biased control" as their grievance. There is ample evidence for the company being guilty of both, with regard to specific instances.
As such there will always be complaints from opposing points of view as to whether the company is doing "enough" to police content, or whether the policing has become biased. Amusingly, you see the reverse of this debate when you look at actual policing in the USA, where the opposite side argues bias in policing and the other side argues for harsher control and punishment.
Those who fall on either side of the spectrum tend to paint with a broad brush some kind of systemic evil conspiratorial agenda at the company, as a consequence of voicing their respective frustrations.
Overall Facebook is a net positive for the world. There are likely activists within the company trying to push agendas, some of whom may be prevailing over others. This is evident by just taking a walk around campus and reading the political messaging that adorns the various shared spaces. These are also largely irrelevant in the long term because if and when it reaches any sort of extreme, eventually the pendulum will swing too far.