> Zuckerberg vetoed a 2019 proposal that would have disabled Instagram’s so-called “beauty filters,” a technology that digitally alters a user’s on-screen appearance and allegedly harms teens’ mental health by promoting unrealistic body image expectations, according to the unredacted version of the complaint filed this week by Massachusetts officials.
> After sitting on the proposal for months, Zuckerberg wrote to his deputies in April 2020 asserting that there was “demand” for the filters and that he had seen “no data” suggesting the filters were harmful, according to the complaint.
So, why didn't the Instagram CEO provide that data? Seems like a huge oversight for someone in that position.
You’re going to use company resources to create a test for your position or pull the data to support it (using resources that could be pushing forward on the business) when the CEO hates your position? That’s not generally a wise strategy for corporate executives.
Even if you have the data, this is likely all about the CEO not really wanting to hear the argument or go in this direction, and not about his officer’s inability to provide supporting data.
I'm seeing a lot of responses about the morality or PR implications of trying to A/B test this, but this seems fundamentally impossible to A/B test to me, and it points at a bigger problem: the gap between what companies and their marketing departments believe they can know and the limits of what science can actually do.
The hypothesis here is that using professional-grade but automated editing tools to make people look more beautiful than they really are promotes unrealistic standards of comparison and makes people feel worse about how they look in a regular mirror, which shows them the truth.
How do you A/B test this? Giving the feature to some people and not others isn't good enough. The impact comes from seeing other people look more beautiful on the Internet than they actually look in person. How are you going to prevent a user from seeing the photos of other users who use filters? Even if the global social graph had strict partitions, which I imagine isn't the case, these photos leak into web search and news articles, and the same technology gets adopted by other platforms. You can't A/B test something that impacts the broader culture. The world is too connected.
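That interference problem can be sketched with a toy simulation. All the numbers here are made up for illustration (`DIRECT` and `SPILLOVER` are hypothetical effect sizes, not measurements): each user's well-being score drops both from using the filter themselves and from seeing filtered photos in a feed that mixes treatment and control users.

```python
import random

N = 10_000
DIRECT = -0.2     # hypothetical harm from using the filter yourself
SPILLOVER = -0.5  # hypothetical harm per unit of exposure to filtered photos

def ab_difference(treated_fraction: float) -> float:
    """Naive difference in mean scores between treatment and control arms."""
    random.seed(0)
    treated = [random.random() < treated_fraction for _ in range(N)]
    # Feeds sample from the whole population, so exposure to filtered
    # photos is identical in both arms: the overall treated fraction.
    exposure = sum(treated) / N
    scores = [(DIRECT if t else 0.0) + SPILLOVER * exposure for t in treated]
    t_mean = sum(s for s, t in zip(scores, treated) if t) / sum(treated)
    c_mean = sum(s for s, t in zip(scores, treated) if not t) / (N - sum(treated))
    return t_mean - c_mean

# The A/B test recovers only the direct effect, because the spillover
# hits both arms equally and cancels out of the comparison...
ab_estimate = ab_difference(0.5)
# ...while launching to everyone (vs. no one) also carries the spillover:
total_effect = DIRECT + SPILLOVER

print(round(ab_estimate, 2))   # -0.2
print(round(total_effect, 2))  # -0.7
```

The point is only that when the outcome runs through what *other* people post, both arms share the exposure and the experiment can't see it; measuring it for real would need cluster- or network-level randomization rather than per-user assignment.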
> “All the people that I’ve talked to internally about this were like… Mark’s level of proof, in order to be able to take the work seriously and act on it, is too high,” Bejar added. “I think it’s an impossible standard to meet.”
> Meta CEO Mark Zuckerberg allegedly halted proposals aimed at improving Facebook and Instagram’s impact on teen mental health, according to internal communications revealed as part of unsealed court documents.
> Zuckerberg allegedly vetoed plans to ban filters that simulate plastic surgery on Meta-owned platforms, according to the unredacted lawsuit filed by Massachusetts Attorney General Andrea Campbell (D), and ignored requests from top executives to boost investments in teens’ well-being.
> “I respect your call on this and I’ll support it,” Stewart wrote, according to a message cited in the complaint, “but want to just say for the record that I don’t think it’s the right call given the risks…. I just hope that years from now we will look back and feel good about the decision we made here.”
Were they saying "for the record" just as an idiom, or did they have a particular paper trail purpose in mind when putting something in writing to the CEO?
"If you're going to go down, take people with you."
If there's an issue, you partner with your superiors to fix it. This is what the Insta CEO is trying to do (because he believes there's an issue). Having done that, yes, you leave a paper trail to cover your ass.
Is there actually strong evidence that social media harms teens' mental health?
From what I can find, the correlation between social media use and mental health problems is small [1] or nonexistent [2]. There are few causal studies, and their results are even smaller, e.g. halting Facebook usage for a month decreased depression by 0.09 standard deviations [3].
[1] "We found a small but significant positive correlation (k=12 studies, r=.11, p<.01) between adolescent social media use and depressive symptoms." https://www.sciencedirect.com/science/article/abs/pii/S01650...
[2] "There was no association between frequency of social media use and SITBs [self-injurious thoughts and behaviors] however, studies on this topic were limited." https://www.sciencedirect.com/science/article/abs/pii/S02727...
[3] "A study using an experimental design measured the effects of abstaining from Facebook for four weeks in an adult population and found that there was a slight decrease (SD=0.09) in depression." https://www.sciencedirect.com/science/article/abs/pii/S01650...
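For a sense of scale on those numbers, here's a back-of-envelope conversion. This is standard effect-size arithmetic assuming normal distributions, not a claim about these particular studies:

```python
import math

# [1] r = .11 -> shared variance between social media use and symptoms.
r = 0.11
shared_variance = r ** 2  # about 1.2% of variance

# [3] d ~= 0.09 -> common-language effect size: the probability that a
# randomly chosen abstainer scores lower on depression than a randomly
# chosen non-abstainer, assuming normality.
d = 0.09
cles = 0.5 * (1 + math.erf(d / 2))  # Phi(d / sqrt(2))

print(f"shared variance: {shared_variance:.3f}")  # 0.012
print(f"CLES: {cles:.3f}")                        # 0.525
```

So r=.11 corresponds to roughly 1% of variance explained, and d=0.09 to odds barely better than a coin flip, which is what "small" means here in concrete terms.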
My first criticism would be that intervening only to restrict Facebook would likely just result in a substitution effect, with Instagram, Snapchat, YouTube, Reddit, etc. filling the void. Likewise, if I remove cake from my diet for 4 weeks but make no restrictions on all other forms of sugary baked goods, I'm not likely to see the same magnitude of effect as I would've otherwise.
Here's an associational study that found almost 3x odds of having depression between the most and least frequent users of social media sites. This was among US adults aged 19-32 and adjusted for age, sex, race, relationship status, living situation, household income, and education level
https://www.ncbi.nlm.nih.gov/pmc/articles/PMC4853817/
I know the media often cites Instagram's internal research saying "we make body image issues worse for 1 in 3 teen girls", but the actual stat is not too damning IMO: 'Among teen girls with body image issues, 32% said Instagram made it worse, 22% said Instagram made it better, and 46% said it had no impact' [1]
[1] page 14 of https://about.fb.com/wp-content/uploads/2021/09/Instagram-Te...
> Zuckerberg vetoed a 2019 proposal that would have disabled Instagram’s so-called “beauty filters,” a technology that digitally alters a user’s on-screen appearance and allegedly harms teens’ mental health by promoting unrealistic body image expectations, according to the unredacted version of the complaint filed this week by Massachusetts officials.
Focusing on beauty filters really undermines the narrative. If you are talking about ways that Meta harms children and the first thing you list is that they didn’t ban beauty filters, it actually makes Zuckerberg seem reasonable.
We don't know what wasn't proposed or investigated to lessen these issues, because the initiatives were all shut down. If even cursory work admits there's a problem, and your profit is in the billions while you pay engineers to mostly sit on their asses, the onus is on you to reduce harm.
Rejecting all proposals suggests that any harms of the platform are fundamentally built in.
But… they didn’t do that? Filters became ubiquitous when they were put into Instagram?
Obviously the bad version of anything is always available at some level of effort, but also obviously “level of effort” is in fact the only deterrent that exists.
It is amazing how much internet bandwidth is wasted by stories like this... Expecting anything less than big tech favoring profits over EVERYTHING else is like expecting the Sun not to rise tomorrow.
Yet somehow each of these stories always goes viral and everyone seems "stunned" like "how can this amazing human Zuck (the worst scum of the Earth as most other big tech CEOs are) do this!?? So surprising given his impeccable record as a decent human being who cares about the well-being of our children..."!!
It is amazing how you jumped to the "favoring profits" conclusion from this article, which doesn't even mention profits - the plans were rejected because there was no data supporting their effectiveness to any degree.
That seems super reasonable. However, I believe it has the opposite effect. Everyone with a shred of common sense gets numb to these stories, which effectively prepares us to be numb to the even more outrageous stories that come after.
There was a post on HN months back about testimony to Congress. I forget the details except for one graph. Clear and direct correlation between the rise in social media and teen mental health issues.
The number of teen girls that I’ve seen with cut marks is terrifying. The dramatic increase in mental health issues is shocking.
I know with my kids social media has made every one of their challenging teenage moments more difficult.
Before social media we had rap music glorifying criminal activity, 16 and pregnant and other assorted reality TV, before that we had old MTV and metal music that glorified hedonism, before that we had girl magazines with "impossible beauty standards", the pressure to wear make up, before that we had the sexual revolution... It seems that for every generation there has been something corrupting it. The older people get their panties in a wad and the younger people become more self destructive.
America just has a culture of pushing limits and often those limits were there for a reason. Chesterton's fence and all that. All this social upheaval and people aren't happy. So much progress and everyone seems to be more and more miserable. So what is the solution? Ban Instagram filters? Make self esteem our golden bull? Begin goose stepping? I don't think anyone knows, and we are all just looking for someone to blame.
There is solid research out there suggesting that this is different. Large n, several years, multiple countries, all pointing to an increase in line with the advent of visual social media (Instagram, TikTok - not Facebook or WhatsApp). The time-series analysis makes this not merely correlational.
That would be worth something in a debate, where truth is the goal.
But this isn't a debate, this is a decision. The agent is Meta. The field is their products. The goal is reduced teen mental health issues. And the buck stops with Zuckerberg.
"Correlation does not prove causation" alone is not an excuse to avoid an action. It is a principle for comparing actions by how likely they are to achieve the goal, so it must be combined with some other candidate action and evidence that makes that other action appear more likely to work. Without that, the action suggested by the correlation remains the one most likely to work, and so it must still be taken.
If, for whatever reason, Zuckerberg doesn't want these changes to be made, then he needs to either dole out resources to gather the evidence he demands and which may free him from them, or he needs to implement some other action which may turn out to work.
If he doles out the resources, and the evidence comes back and establishes causation, then he has to make the changes. It sucks for him, but the goal isn't to please Zuckerberg, it's to reduce teen mental health issues.
If he doles out the resources, and the evidence comes back failing to establish causation, then that's convenient for him. But he still has to try something else, because the goal is reduced teen mental health issues and it has not been achieved.
If he goes straight to implementing some other action, and it at least appears to work, then he doesn't have to make the changes he didn't want to make. That's great for him, and since teen mental health issues were reduced, he can put it behind him.
If he goes straight to implementing some other action, and it doesn't work, then he still has to try something else, because the goal is reduced teen mental health issues and it has not been achieved.
The social sciences are not advanced enough to measure the harm caused by advances in technology. Social science is in its infancy and lacks systemic tools and viewpoints.
The legal use of dark patterns needs to be eliminated. This is the core issue.
> After sitting on the proposal for months, Zuckerberg wrote to his deputies in April 2020 asserting that there was “demand” for the filters and that he had seen “no data” suggesting the filters were harmful, according to the complaint.
> So, why didn't the Instagram CEO provide that data? Seems like a huge oversight for someone in that position.
Because there is no data that show causation?
-Jim Barksdale
This article looks promising: https://en.wikipedia.org/wiki/Criticism_of_Facebook#Psycholo...
> Focusing on beauty filters really undermines the narrative. If you are talking about ways that Meta harms children and the first thing you list is that they didn’t ban beauty filters, it actually makes Zuckerberg seem reasonable.
> Rejecting all proposals indicates any harms of the platform are inbuilt fundamentally.
This article's a nothingburger (unfortunately).
Can ads on the platforms in question be opted out of?
I mean, Google lets people get out of YouTube ads. YouTube is practically unusable with ads once you’ve had ad-free.
> The number of teen girls that I’ve seen with cut marks is terrifying. The dramatic increase in mental health issues is shocking.
> I know with my kids social media has made every one of their challenging teenage moments more difficult.
https://pastafarians.org.au/pastafarianism/pirates-and-globa...
The number of women I know with cut marks is terrifying - but they grew up before Instagram and Facebook.
> America just has a culture of pushing limits and often those limits were there for a reason. Chesterton's fence and all that. All this social upheaval and people aren't happy. So much progress and everyone seems to be more and more miserable. So what is the solution? Ban Instagram filters? Make self esteem our golden bull? Begin goose stepping? I don't think anyone knows, and we are all just looking for someone to blame.
I point to my sibling post: https://news.ycombinator.com/item?id=38201807
Visual social media like TikTok and Instagram seem to be a different beast than TV, WhatsApp, and other media.
Correlation does not prove causation.
It’s claiming exactly that: that certain social media causes mental health deterioration among teens, especially girls.