Hey, Facebook VP of Integrity here (I work on this stuff).
This WSJ story cites old research and falsely suggests we aren’t invested in fighting polarization. The reality is we didn’t adopt some of the product suggestions cited because we pursued alternatives we believed would be more effective. What’s undeniable is we’ve made significant changes to the way FB works to improve the integrity of our products, such as fundamentally changing News Feed ranking to favor content from friends and family over public content (even if this meant people would use our products less). We reduce distribution of posts that use divisive and polarizing tactics like clickbait and engagement bait, and we’ve become more restrictive when it comes to the types of Groups we recommend to people.
We come to these decisions through rigorous debate where we look at all angles of how our decisions will affect people in different parts of the world - from those with millions of followers to regular people who might not otherwise have a place to be heard. There’s a baseline expectation of the amount of rigor and diligence we apply to new products, and it should be expected that we regularly evaluate our products to ensure they are as effective as they can be.
We get criticism from all sides of any decision, and it motivates us to look at research, our own and external, and to analyze and pressure-test our principles about where we do and don't draw lines on speech. We continue to do and fund research on misinformation and polarization to better understand the impact of our products; in February we announced an additional $2M in funding for independent research on this topic (e.g. https://research.fb.com/blog/2020/02/facebook-misinformation...).
Criticism and scrutiny are always welcome, but using cherry-picked examples to try to negatively portray our intentions is unfortunate.
Just to cherry-pick from your reply here: if $2M is the biggest-ticket item you have to show for independent research on this topic, then you're woefully short given Facebook's revenues and size.
10 short years ago, nobody could have imagined that huge swathes of the population could have been swayed to accept non-scientific statements as fact because of social media. Now we're struggling to deal with existential threats like climate change because a lot of people get their worldview from Facebook. Algorithms have decided that they fall on one side of the polarization divide and should receive a powerful dose of fake science and denialism ... all because of clicks and engagement.
10 short years ago, huge swaths of the population were swayed to accept non-scientific statements like "eating fat and cholesterol is unhealthy". I don't think Facebook is the problem here.
How much exactly do you think Facebook ought to donate to independent researchers? Most tech companies donate ~$0 to such efforts.
A counter-point to this is that studies show polarization has also fallen in some countries over the past years - including ones where social media (Facebook or otherwise) is popular. Studies also show some of the most polarized segments in the US to be the older population, which uses social media less. We definitely have work to do, but this suggests there are many factors at play.
> Now we're struggling to deal with existential threats like climate change because a lot of people get their worldview from Facebook
You state this as a matter of fact. How do you know this?
Even if it were true that Facebook polarized people's climate worldview, and pushed more of them to the wrong side than the right side, we all know that climate change is the result of our behaviour over the last few centuries, and that counter-efforts have been resisted for the last 50 years.
Something like 85% of the planet believes in non-science. There are 2.3 billion Christians, 1.9 billion Muslims, 1.1 billion Hindus and probably a billion followers of "other" religions. The fact that people believe non-science has got nothing to do with Facebook.
If you're not being given the benefit of the doubt, it's because your employer has 16 years of lying about this and related issues. Zuck long ago torched whatever shred of trust ever existed, so no, we are not going to be impressed by an extra 0.003% of annual revenue thrown at problems you've created.
On top of that, it seems their efforts to prioritize friends and family may not take into consideration that this is often where the divisiveness begins. How many of us have friends and family who share news articles, worldview opinions, and memes that fall into divisiveness, fake news, and/or borderline racism?
You can reshuffle the deck but the same cards are still inside.
The fact that FB has not banned political ads is pretty shocking and absolutely related to this topic.
Twitter managed to do it, but FB continues to allow political parties to spread misinformation via the algorithm, and FB profits from it.
So essentially VP of Integrity, your salary is paid for in part by the spread of misinformation. Until you at least ban political ads your integrity is non-existent.
Hi there. Have you ever considered making the decision-making open? I mean, it seems you are obsessed with criticism and scrutiny. Then here is an idea for you: invite journalists from major media outlets to your decision-making. Then you can avoid these "unfortunate" cherry-pickings, as you put it.
Why am I saying this: it seems you sit backwards on your high horse, criticising people who, for all intents and purposes, have very limited insight into the decision-making.
My close friends and I are fed up with Facebook and how obviously it is trying to polarize everyone in this world.
No sympathy for you on my side, and I can assure you I speak on behalf of my friends too.
Yep we actually do this! (invite journalists to decision-making meetings). One of our regular meetings is about the content policies, we publish the minutes here - https://about.fb.com/news/2018/11/content-standards-forum-mi... - and have also hosted journalists and outside academics from time to time.
We have entered an era in which non-state actors like Facebook have power that was once the exclusive domain of governments [1]. Facebook understands this, and justifiably views itself as a quasi-government [2].
I would really like to understand Facebook’s theory of governance. If I want to understand my own government, I can read the Federalist papers. These documents articulate an understanding of history and a positive view of the appropriate role of government in society. I can use these documents to help myself evaluate a particular government action in light of the purpose of government and the risks inherent in concentrated power.
Has Facebook published something like this? I struggle to understand Facebook’s internal view of its role in society and its concept of what “doing the right thing” means. Without some clear statement of governing principles, people will naturally gravitate to the view that Facebook is a cynical and sometimes petty [3] profit maximizer.
Without some statement of purpose and principles, it is hard to level criticism in a way that Facebook will find helpful or actionable. We are left to speculate about Facebook's intentions, instead of arguing that a certain outcome is inconsistent with its stated purpose.
This may come off as condescending, but I'm honestly just curious.
From the outside looking in, it seems as though you are paid to drink Kool-Aid and paint FB in a positive light. How does one get to be in your position? What are the qualifications for your job?
Remember that VPN app that Apple pulled from the App Store, the one that Facebook was using to spy on users' internet usage to gain intel about potential competitors? When Facebook acquired the VPN app, this guy came with the purchase. VP of Integrity. Oh, the irony.
Fair enough, but I also get to see first-hand how decisions are made and how rigorous debates take place, so I have more faith in the process. We've got lots to do to improve transparency of how this stuff happens, because I know people care about it. One way we started a while ago is publishing minutes to one of our meetings where decisions on content policies get made. https://about.fb.com/news/2018/11/content-standards-forum-mi... -- lots more to do!
Potentially useful context: the parent here was the Co-Founder and CEO of Onavo which he sold to Facebook for $120 million. If the name "Onavo" doesn't trigger any bells: https://en.wikipedia.org/wiki/Onavo . It was ostensibly a VPN but tracked its users' behavior and Facebook used the data from Onavo to judge how much traffic various startups had when deciding whether to acquire them.
I think that the fact that the founder/CEO of Onavo is now Facebook's VP of Integrity is entirely consistent with everything else we've read about Facebook over the years.
Anyone want to hazard a guess at the total comp of a VP @Facebook? Curious
I hope there are more people at the top with the integrity to stand up for injustice like Tim Bray. I have all the respect in the world for someone who puts their neck on the line for what they believe in. Thank you Tim @tbray
> We reduce distribution of posts that use divisive and polarizing tactics like clickbait and engagement bait, and we’ve become more restrictive when it comes to the types of Groups we recommend to people.
This seems inherently a political evaluation. What are the criteria, and is this done manually or automatically?
Would you care to post the same stats but updated for 2020, then? Also, I see that this statement is copy-pasted on your Twitter, so I have a hard time believing you actually do read this feedback and didn’t just post this comment as damage control.
If we're talking metrics, the 64% stat cited in the article turns out not to be a good way to measure impact of recommendations on an extremist group. We internally think about things like prevalence of bad recommendations. More on prevalence here - https://about.fb.com/news/2019/05/measuring-prevalence/. I don't have a metric I can share on recommendations specifically but you can see the areas we've shared it for so far here: https://transparency.facebook.com/community-standards-enforc...
Thanks for the message. Does your group have a centralized place where your team’s research, recommendations, and roadmap for changes are visible to the public?
Your MO over the years seems to be "we're working on it", from your Frontline interview in 2018 regarding Myanmar to a Business Insider article from last year regarding the New Zealand shooting.
$2M for independent research. Haha, that's really generous of Facebook. You all probably made that in a day's worth of misleading political ads you've sold.
One question I have for you, Mr. VP of INTEGRITY... I haven't used FB in over 8 years, but I would like the full set of every data point that you have on me, my wife and my kids. Where can I get that? And if I can't, please explain to me why.
I guess you've been drinking so much moneysaurus rex Kool-Aid that you don't seem to understand that people just don't believe anything you all say anymore. Your boss can't even give straight answers in front of the government.
Maybe you all should change your tagline to:
Facebook, we're working on it.
Maybe that's how Facebook builds their organization: VP of Integrity, VP of Honesty, VP of Sincerity. So that Zuckerberg doesn't have to take on any of those roles.
Yes - but in this case the CEO has none so he needs to outsource his integrity to someone else, and when you are a vacuum of integrity, everyone else looks like a saint. Therefore, top comment.
Respectfully, to a top-brass executive at Facebook: I respect the hard work and innovation that has gone into building Facebook. It's allowed billions of people to connect worldwide in ways that were never possible before. It is truly a billion-dollar platform.
The problem is that your entire executive leadership believes it is a five-hundred-billion-dollar platform. They have to, because the investors demand it.
Why are you asking third parties to conduct this research?
Why isn’t this an initiative driven by an internal team? Were applications for this program advertised outside of Facebook?
Is $2M realistic for this research? I know I wouldn’t be enthused considering my total compensation. Do you expect top quality researchers to apply?
Well, when your management has such a record regarding their own integrity, why on earth would you expect us "dumb fucks", as Zuckerberg put it, to trust you?
Facebook has lied again and again, taken every shady approach to get data, mishandled that data, and practically endorsed a genocide. What integrity are you talking about?
You may be serious about your job, but you work for people who have proved they have no moral compass. If I were you and actually believed in what you're trying to accomplish, I'd quit.
Edit: I just realized you are the founder of Onavo, spyware bought by Facebook. Were you also heading Project Atlas? Were you responsible for inviting teens to install spyware so you could collect all their data?
The mere fact that YOU were chosen as VP of Integrity just says it all. When the head of integrity is a spyware peddler... Well....
Yeah, I guess it's a cynical move I should've expected of Facebook.
> What’s undeniable is we’ve made significant changes to the way FB works to improve the integrity of our products, such as fundamentally changing News Feed ranking to favor content from friends and family over public content *(even if this meant people would use our products less)*.
Emphasis added.
What is that supposed to mean? Would you like a sticker for doing the right thing?
Meta question. How is this post from a user created 6 hours ago, with 66 karma, at the top of this comment section? This is particularly interesting since just this week, Triplebyte's CEO's top-level comments were buried. Is there some kind of change, or is this really the naturally occurring top comment?
Thanks for chiming in here! Honestly, the response to your comment reminds me of the YouTube comment section. I'm not sure why people aren't capable of civil discourse; I would have expected better from HN.
“We come to these decisions through rigorous debate where we look at all angles of how our decisions will affect people in different parts of the world”
If you had any integrity, you would delegate this governance back to the captured provinces.
We've got this idea stuck in our heads that only the website itself is allowed to curate content. Only Facebook gets to decide which Facebook posts to show us.
What if, instead, you had a personal AI that read every Facebook post and then decided what to show you. Trained on your own preferences, under your control, with whatever settings you like.
Instead of being tuned to line the pockets of Facebook, the AI is an agent of your own choosing. Maybe you want it to actually _reduce_ engagement after an hour of mindless browsing.
And not just for Facebook, but every website. Twitter, Instagram, etc. Even websites like Reddit, which are "user moderated", are still ultimately run by Reddit's algorithm and could instead be curated by _your_ agent.
I don't know. Maybe that will just make the echo chambers worse. But can it possibly make them worse than they already are? Are we really saying that an agent built by us, for us, will be worse than an agent built by Facebook for Facebook?
And isn't that how the internet used to be? Back when the scale of the internet wasn't so vast, people just ... skimmed everything themselves and decided what to engage with. So what I'm really driving at is some way to scale that up to what the internet has since become. Some way to build a tiny AI version of yourself that goes out and crawls the internet in ways that you personally can't, and return to you the things you would have wanted to engage with had it been possible for you to read all 1 trillion internet comments per minute.
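To make that concrete, here's a minimal sketch of what a user-owned ranking agent could look like. Everything in it is hypothetical - the post fields, the preference weights, the "outrage" score - the point is only that the filtering logic lives with the user, not the platform:

    # Sketch of a user-owned feed agent: the user defines the preferences
    # and the scoring logic, not the platform. All names and weights here
    # are made up for illustration.
    from dataclasses import dataclass

    @dataclass
    class Post:
        author: str
        topic: str
        is_ad: bool
        outrage_score: float  # 0..1, e.g. from a local classifier the user trusts

    # The user's own, editable preferences.
    MY_WEIGHTS = {"woodworking": 2.0, "family": 3.0, "politics": -1.5}
    MUTED_AUTHORS = {"that_one_uncle"}

    def score(post: Post) -> float:
        if post.is_ad or post.author in MUTED_AUTHORS:
            return float("-inf")                    # never show
        base = MY_WEIGHTS.get(post.topic, 0.0)
        return base - 2.0 * post.outrage_score      # downrank rage bait

    def curate(posts: list[Post], limit: int = 20) -> list[Post]:
        visible = [p for p in posts if score(p) > float("-inf")]
        return sorted(visible, key=score, reverse=True)[:limit]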
The primary content no user wants to see, and that every user agent would filter out, is ads. Since ads are the primary way sites stay in business, they are obligated to fight against user agents or other intermediary systems.
The ultimate problem is that Facebook doesn't want to show you good, enriching content from your friends and family. They want to show you ads. The good content is just a necessary evil to make you tolerate looking at ads. Every time you upload some adorable photo of your baby for your friends to ooh and aah over, you're giving Facebook free bait that they then use to trap your friends into looking at ads.
I sure am tired of hearing about "the fundamental flaw" in empowering people. What you describe is not a flaw in empowerment, it's a flaw in their business model, and it's one that can be fixed (i.e. "innovate a better business model"). Can we stop propagating the idea that people who do not want to use their limited bandwidth and processing power to rasterize someone else's advertising are somehow "flawed"?
The only thing more insane than blaming users for having self-interest are the people who pretend that Facebook et al. are somehow owed the business model they have, painting ad-blockers as some kind of dangerous society-destabilizing technology instead of the commonsense response to shitty business practices it clearly is.
"The ultimate problem is that Facebook doesn't want to show you good, enrishing content from your friends and family."
Well, it is someone else's website. What do you expect? Zuckerberg has his own interests in mind.
In 2020, it is still too difficult for everyone to set up their own website, so they settle for a page on someone else's.
If exchanging content with friends and family (not swaths of the public who visit Facebook - hello advertisers) is the ultimate goal, then there are more efficient ways to do that without using Zuckerberg's website.
The challenge is to make those easier to set up.
For example, if each group of friends and family were on the same small overlay network they set up themselves, connecting to each other peer-to-peer, it would be much more difficult for advertisers to reach them. Every group of friends and family would be on its own network, instead of every group using the same third-party, public website on the same network, the internet.
Naysayers will point to the difficulty of setting up such networks. No one outside of salaried programmers paid to do it wants to even attempt to write "user agents" today, because the "standard" - a ridiculously large set of "features", most of which benefit advertisers, not users - is far too complex. What happens when we simplify the "standard"? As an analogy, look at how much easier it is to set up WireGuard, software written more or less by one person, than it is to set up OpenVPN.
> The primary content no user wants to see, and that every user agent would filter out, is ads
"no user"? Nope. People buy magazines, that are 90% ads. Subscribe to newsletters. Hunt for coupons. Watch home shopping channels. Etc, etc.
There's large part of population that wants to see ads. Scammy and bad ads? No. Good and relevant ads? A LOT of people do want them. Even tech-folks, who claim that ads are worst thing for humanity. Don't you want to learn about sale for new tech gadgets? Discounts for AWS? Good rent deals?
Honestly I think the only way to make an ethical social network is to make a non-profit one. Fund it alongside other public goods like PBS, public education, highways, rail networks, healthcare, etc.
And yes, I know: good luck getting THAT to happen in the US given how badly funded everything else in my list is. If you’re in another country that actually funds public goods maybe this is a thing you could talk to some of your fellow techies about and make a proposal, especially if your country is getting increasingly tired of Facebook?
Alternatively, ground-up local funding of federated social networks might be workable; I run a Mastodon server for myself and a small group of my friends and acquaintances, with the costs pretty much evenly split between myself and the users who have money to spare. It is not without its flaws and problems but it is a thing I can generally keep going with a small investment of my spare time every few months.
> Since ads are the primary way sites stay in business
Flaw? It seems that the point would be to force FB to transact with currency rather than a bait-and-switch tactic. The site would also be more usable if they were forced to change business model.
That is how it is today. But does it have to be like that? What is the minimum revenue per user required for a service like FB to run?
While everyone is sceptical about whether such a service can reach the critical mass to make financial sense, and a brand-new FB replacement may not be able to do it, FB itself can certainly offer that as an option without hurting their revenues substantially.
I was sceptical of the value prop for YouTube Premium, and I am constantly surprised how many people pay for it. If Google can afford to lose ad money with YT Premium, I am sure FB can build a financial model around a freemium offering if they wanted to.
I think another tangential but related issue is with how these companies measure success. They measure success by engagement, and things that drive the most user engagement aren't usually the best for the user.
YouTube has been getting a lot of flak for this recently.
>Since ads are the primary way sites stay in business, they are obligated to fight against user agents or other intermediary systems.
Not all users hate ads in principle, just in practice. In theory, you'd be making the users select ads for relevance and not being annoying. But obviously, the site wants to show ads based on how much they're paying, and "not being annoying" only factors in if it pushes people off the site entirely.
How are the user agents funded? Probably through ads.
The problem is actually how to fund the timeline publication services. But systems like Medium etc seem to work OK.
I am now spending several hundred dollars a year on content subscriptions. Plus subscriptions for Gmail, Zoom and a few other things where I have outgrown the free service. A freemium model for the timeline publication services would probably work.
I don't know about that. I buy a lot of things. I wish something would help me buy what I need and didn't know I needed so I didn't have to spend time shopping and researching.
Maybe you're interested in hearing about X tech, or you can tell your "agent" that you want to buy Y thing, or travel to Z.
That's where ads and reviews get through.
I think transparency matters more. I liked Andrew Yang’s suggestion to require the recommendation algorithms of the largest social networks to be open-sourced, given how they can shape public discourse; advertising in all mass media is regulated to prevent outright lies from being spread by major institutions (although an individual certainly may do so).
Open sourcing the algorithms (however we define it) does absolutely nothing. What use is a neural network architecture? Or a trained NN with some weights? Or an explanation that says - we measure similar posts by this metric and after you click on something we start serving you similar posts? None of those things are secret. More transparency wouldn't change anything because even if completely different algorithms were used, the fundamental problems with the platform would be exactly the same.
Not the recommendation engines. The graph. All the social media companies (and indeed Google and others) profit by putting up a wall and then allowing people to look at individual leaves of a tree behind the wall, 50% of which is grown with the help of people's own requests. You go to the window, submit your query, and receive a small number of leaves.
These companies do provide some value by building the infrastructure and so on. But the graph itself is kept proprietary, most likely because it is not copyrightable.
>advertising in all mass media is regulated to prevent outright lies from being spread
Advertising in mass media is regulated. You are very much allowed to publish claims that the government would characterize as outright lies, you just can't do it to sell a product.
Does that actually work? If they create some complex AI and then show us the trained model, it doesn't really give much insight into the AI doing the recommendation. You could potentially test certain articles to see if they're recommended, but reverse engineering how the AI recommends them would be far more time consuming than updating the AI. As such, Facebook would just need to regularly update the AI faster than researchers can determine how it works in order to hide how their code works. Older versions of the AI would eventually be cracked open (as much as a large matrix of numbers representing a neural network can be), but between it being a trained model with a bunch of numbers and Facebook having a newer version, I think they'll be able to hide behind "oops, there was a problem, but don't worry, our training has made the model much better now".
Setting aside the concerns about the efficacy of the idea, it also seems like an arbitrary encroachment on business prerogatives. I think everyone agrees that social media companies need more regulation, but mandating technical business process directives based on active-user totals isn't workable, not least because the definition of "active user" is highly subjective (especially if there is an incentive to get creative about the numbers), but also because something like "open source the recommendation algorithm" isn't a simple request that can be satisfied on demand, especially with the inevitable enfilade of corporate lawyering to establish battle lines around which intellectual property companies would still be allowed to control versus which they would be forced to abdicate to the public domain.
The risk is that it behaves like a reinforcement learning algorithm which essentially rewards itself by making you more predictable, I'd argue that's what curated social networks do today.
If you're unpredictable you're a problem. Thus, it makes sense to slowly push you to a pole so you conform to a group's preferences and are easier to predict.
A hole in my own argument is that today's networks are incentivized to increase engagement, whereas a neutral agent is in most ways not.
So perhaps the problem isn't just the need for agents but for a proper business model where the reward isn't eyeball time as it is today.
But you are predictable, even if you think you are unpredictable; you are just a bit more adventurous. An algorithm can capture that as well. It will be easier for an algorithm that works on your behalf.
What you’re referring to is splitting the presentation from the content. The server (eg Facebook) provides you with the content, and your computer/software displays it to your liking (ie without ads and spam and algorithmically recommended crap).
There’s a lot of history around that split, and the motivation for HTML/CSS was about separating presentation from the content in many ways. For another example, once upon a time a lot of chat services ran over XMPP, and you could chat with a Facebook friend from your Google Hangouts account. Of course, both Google and Facebook stopped supporting it pretty quickly to focus on the “experience” of their own chat software.
The thing is that there is very little money to be made selling content, and a lot to be made controlling the presentation. So everyone focuses on the latter, and that’s why we live in a software world of walled gardens that work very hard to not let you see your own data.
There is some EU legislation proposal that may make things a bit better (social network interop), but given the outsized capital and power of internet companies, I'm not holding my breath.
> you could chat with a Facebook friend from your Google Hangouts account
This was never true. There was an XMPP-speaking endpoint into Facebook's proprietary chat system, but it wasn't a S2S XMPP implementation and never federated with anything. It was useful for using FBChat in Adium or Pidgin, but not for talking to GChat XMPP users.
Your friends provide you with the content, not Facebook. You only need Facebook now because you don’t have a 24/7 agent swapping content on your behalf and presenting it how you like it.
Separating presentation and content is one way to do it, but it's not the only way.
For example, Facebook could create some kind of plugin API that allows you to interpose your filtering/ranking code between their content and their presentation.
For example, maybe they give you a list of N possible main page feed items each with its own ID. Your code then returns an ordered list of M <= N IDs of the things that should go into your feed. That would allow you to filter out the ones you don't want and have the most interesting stuff displayed first. Facebook could display the M items you've chosen along with ads interspersed.
Something like that could run in the browser or Facebook could even allow you to host your algorithm in a sandbox on their servers if that helps performance. (Which means you trust them to actually run it, but you have to trust them on some basic things if you're going to use their service at all.)
In other words, changing the acoustics of the echo chamber doesn't mean you need to be the one implementing a big chunk of the system. You just need a way to exert control over the part you want to customize.
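As a rough sketch of the plugin hook described above (the callback name and item shape are invented for illustration; nothing like this exists in Facebook's actual API):

    # Hypothetical contract: the platform hands your code N candidate feed
    # items, your code returns an ordered list of M <= N item IDs to show.
    # The platform then renders those items and intersperses its ads.
    def rank_feed(candidates: list[dict]) -> list[str]:
        # candidates: [{"id": str, "author": str, "type": str, ...}, ...]
        def my_score(item: dict) -> float:
            s = 0.0
            if item.get("type") == "photo":
                s += 1.0            # I'd rather see friends' photos first
            if item.get("type") == "reshared_news":
                s -= 2.0            # downrank reshared news articles
            return s

        kept = [c for c in candidates if my_score(c) > -1.0]   # drop the worst
        kept.sort(key=my_score, reverse=True)                  # order the rest
        return [c["id"] for c in kept]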
ActivityPub and other federated networks are the answer. They do exactly that: if you aren't satisfied with the rules on existing servers, you host your own. The network itself is wide open, and its control is distributed across many server admins. The way the content is presented is of course completely up to the software the user is running. Having no financial incentive to make UX a dumpster fire visible from space also helps a lot.
They're not the answer as long as they don't have loads of people. The attraction of FB and the like is that almost everyone has a FB account, just like almost every public figure has a twitter account. The downside of things like Mastodon is how do you know what server you want to connect to? For a non-technical user it doesn't offer any more obvious utility than a FB group.
I like this, and so does my friend Confirmation Bias, who is pretty clear that the AI would select completely unbiased content relevant to me, not limited by any of the Bias family. It would be 100% better than the bias filters in place now, because my thoughts and selections are always unbiased, IMHO. (FYI: Obviously I'm not being serious. You clearly knew that, this notice is for the other person who didn't.)
> I don't know. Maybe that will just make the echo chambers worse.
This.
Also: what incentive does a walled garden even have to allow something like this? Put a different way, what incentive does a walled garden have to not just block this "user agent"? Because the UA would effectively be replacing the walled garden's own algo-curated news feed - and if the user builds their own AI bot, the walled garden can't make money the way they currently do.
I think the idea is very interesting. I personally believe digital UA's will have a place in the future. But in this scenario I couldn't see it working.
True, but we have ad blockers and they're effective. They're effective against the largest, richest companies in the world. There are various reasons for that, but at the end of the day it remains true that I can use YouTube without ads if I choose to. There's clearly a place in the world for pro-user curation, even if that's not in FAANG's best interests. I think it's antithetical to the Hacker ethos to not pursue an idea just because it's bad for mega-corps.
I was in agreement with you until I read that. People don’t need to have content dictated to them like mindless drones whether it is from social media, bloggers, AI, or whatever. Many people prefer that, though, out of laziness. It’s like the laugh track on sitcoms because people were too stupid or tuned out to catch the poorly written jokes even with pausing and other unnecessarily directed focus. It’s all because you are still thinking in terms of content and broadcast. Anybody can create content. Off loading that to AI is just more of the same but worse.
Instead imagine an online social application experience that is fully decentralized without a server in the middle, like a telephone conversation. Everybody is a content provider amongst their personal contacts. Provided complete decentralization and end-to-end encryption imagine how much more immersive your online experience can be without the most obvious concerns of security and privacy with the web as it is now. You could share access to the hardware, file system, copy/paste text/files, stream media, and of course original content.
> And isn't that how the internet used to be?
The web is not the internet. When you are so laser focused on web content I can see why they are indistinguishable.
I think your suggestion is a bit out of scope for what's actually being discussed/not really a solution.
I'm active on the somewhat (not fully) decentralized social medium Fediverse (more widely known as Mastodon, but it's more than that) and I think a lack of curation is a problem: Posts by people who post a lot while I'm active are very likely to be seen, those by infrequent posters active while I'm not very likely to go unnoticed.
How would your proposed system (that seems a bit utopic and vague from that comment, to be honest) deal with that?
> People don’t need to have content dictated to them like mindless drones whether it is from social media, bloggers, AI, or whatever.
If the AI is entirely under the user's control, why not? It's like having a buddy that's doing for me what I'd do for myself, if I had the time and energy (and eyebleach).
In response to it just creating more echo chambers:
- it can't be worse than now
- At minimum, it's an echo chamber of your own creation instead of being manipulated by FB. There's value in that, ethically.
- Giving people choice at scale means it will at least improve the situation for some people.
Isn't facebook (and reddit, and twitter) showing you posts by people companies etc. that you decided to follow? (And some ads)?
I am pretty sure things can be worse than right now, pretending like we are in some kind of hell state at the bottom of some well where it can't possibly be worse, seems unrealistic to me.
Neal Stephenson explores something like your "user agent" idea and comes up with a different solution in his novel "Fall; or, Dodge in Hell."
Spoilers ahead:
In Stephenson‘s world people can hire “editors” to curate what they see, and those editors effectively determine reality for people at a mass scale. This is just one of the many fascinating ideas Stephenson explores and I highly recommend reading the book.
This interview covers some of the details if you’re not willing to dive into an 800+ page novel:
Highly recommend reading Reamde first if you can. The story is entirely different, but is the same world and comes chronologically first; I felt the continuity added a lot when reading Fall.
Part of the concept was that the agents would actually roam onto servers on the internet on your behalf raising complicated questions around how to sandbox the agent code (came in useful for VPSs and AWS-style lambdas in the end).
At Baitblock (https://baitblock.app), we're working on something similar. It's called the Intelligent Blocker, and it has the same intended goal as your user agent 'AI' (not yet open to the general public, under development right now). With it, you will be able to block all Facebook posts that are, for example, not from your family, or not of a specific type, or from a specific person.
Or comments on different Internet forums that are blatantly spammy/SEO gaming etc.
Or block authors in search results or Twitter feed or any comment that you don't like. Basically the Zapier of content filtering.
This will be available to the user as a subscription service.
Some of these things are unfortunately not possible on mobile platforms (Android, iOS) because the OSes do not allow such access, but we hope that Android and iOS in the future open up to allow external curation systems, apart from the app platform itself, as it's in the interest of the user.
I think the overwhelming majority of users don’t want to deal with this kind of detail. IMO most people would end up using some kind of preset that matched their preferred bubble.
I haven't touched this in years, but one time I made a little project[1] to analyze the people I was following on Twitter and recommend who I might want to unfollow based on their attitudes. People who posted negative stuff very frequently were at the top of my list to ditch; I don't need extra input pushing me toward misery. The first few runs were very illuminating, but not surprising, like "wow, now that you mention it, Joe does say awful stuff approximately hourly".
I would love to have an agent that could apply those sorts of analyses to my data sources. In my case, I wouldn't want to filter out bad news, but unnecessarily nasty spins on it. I'd find that super valuable.
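For anyone curious, the core of that kind of analysis fits in a few lines. This is only a toy version with a crude word-list sentiment check, not the linked project's actual code:

    # Toy "who should I unfollow?" analysis: score each followed account by
    # the fraction of their recent posts that read as negative.
    NEGATIVE_WORDS = {"awful", "hate", "disaster", "worst", "idiot"}

    def is_negative(text: str) -> bool:
        return bool(set(text.lower().split()) & NEGATIVE_WORDS)

    def negativity_report(posts_by_author: dict[str, list[str]]) -> list[tuple[str, float]]:
        # Highest ratios of negative posts come first: unfollow candidates.
        scores = {a: sum(map(is_negative, posts)) / len(posts)
                  for a, posts in posts_by_author.items() if posts}
        return sorted(scores.items(), key=lambda kv: kv[1], reverse=True)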
We're a small team working in stealth on this exact challenge. Shoot me a note if you're interested in hearing more or getting involved. itshelikos@gmail.com
This type of thing is nothing new, but it's important to recognize that it doesn't take off because it's illegal.
As soon as Facebook realizes you're a risk, you'll get a C&D ordering you to stop accessing their servers. These typically have the force of law under the CFAA.
You won't access their servers, but just read the page that the user already downloaded? You'll still get nailed under the Copyright Act.
"User agents" in the sense used by the OP are as old as the internet itself. There's an active, serious, and quiet effort to abuse outdated legislation to ensure that they never become a problem.
I mean Facebook doesn't really decide what content I see, I do. I aggressively police my timeline and unfollow people who post garbage content. I don't really need an AI to do that for me...
Another early assumption about the internet and computers in general is that users were going to exert large amounts of control over the software and systems they use. This assumption has thus far been apparently invalidated, as people by far prefer to be mere consumers of software that are designed to make its designers money. Even OSS is largely driven by companies who need to run monetized infrastructure, though perhaps you don't pay for it directly.
Given that users are generally not interested in exerting a high level of sophisticated control over software they use, how then is the concept of a user agent AI/filter any different at a fundamental level? It probably won't be created and maintained as a public benefit in any meaningful way, and users will not be programming and tuning the AI as needed to deliver the needed accuracy. I don't think AI has yet reached a level of sophistication where content as broad a range as what's found on the internet (or even just Facebook) can be curated to engage the human intellect beyond measuring addictive engagement, without significant user intervention.
Hopefully I'm wrong, as I do wish I could engage with something like Facebook without having to deal with ads or with content curated to get my blood boiling. Sometimes I do wonder how much it is Facebook vs. human tendency under the guise of an online persona, as both are clearly involved here.
There are models for this that could probably work. Tim Berners-Lee has been working on a scheme called Solid for years now.
It is important to realize that Facebook is not the first, second or even tenth of its ilk. Facebook combines a bunch of ideas from previous systems, in particular MySpace and USENET. It is more or less the third generation of Web social media. There is no reason to believe there can't be a fourth.
My interest in these schemes is to provide a discussion space that is end-to-end encrypted so that the cloud service collecting the comments does not have access to the plaintext. This allows for 'Enterprise' type discussion of things such as RFPs and patent applications. I am not looking to provide a consumer service (at this stage).
The system you describe could be implemented in a reasonably straightforward fashion. Everyone posts to the timeline service of their choice and choose between a collection of user agents discovering interesting content for them to read. These aggregation services could be a paid service or advertising supported. Timeline publishing services might need a different funding model of course but bit shoveling isn't very expensive these days. Perhaps it could be bundled with video conferencing capabilities, password management or any of the systems people already pay for.
As for when the Internet/Web was not so vast: one of my claims to fame is being the last person to finish surfing the Web, which I did in October 1992, shortly after meeting Tim Berners-Lee. It took me an entire four days of night shifts to surf every page of every site in the CERN index.
In the context of this discussion Solid sounds amazing. I'd be super excited to tune the social web to my own preferences. Sadly however, I couldn't make heads or tails of this garbage jargon laden website. WTF?
"Time to reset the balance of power on the web and reignite its true potential.
When Sir Tim Berners-Lee invented the web, it was intended for everyone. The excitement and creativity of its early days were driven from the notion that we can all participate — and the impact was world-changing.
But the web has shifted from its original promise — and it’s time to make a change.
We can still unlock the true promise of the web by decentralizing the power that’s currently centralized in the hands of a few. How? By using the power of Solid.
Solid is the technically potent, open-source platform built to decentralize the web. Inrupt is the company that’s helping to fuel Solid’s success."
Why would a personal AI which curates your content be any “better” than FB’s AI which curates your content? Isn’t the current AI based on what you end up engaging with anyway? If you naturally engage with a variety of content across all ideological spectrums, then that’s what the FB AI is going to predict for you. Unfortunately, the vast majority of us engage with content which reinforces our existing worldview - which is exactly what would happen with a personal AI.
Because an algorithm under your control can be tweaked by you. Could be as simple as reordering topics on a list of preferences. Facebook's algorithm can't be controlled like that. Also, an algorithm you own won't change itself unbeknownst to you.
I tried building this 10 years ago as a startup. Maybe time to revisit, the zeitgeist is turning more and more towards this and computing power has gotten cheap enough ...
This misses the point. Facebook refuses to look inwardly or mess with their core moneymaker, regardless of how it affects people. No one is ever going to sip from the firehose, just like we'll never again get a simple view of friends' posts sorted by creation date.
I think the real problem is Facebook's need to be such a large company. They brought this on themselves trying to take over the world. Maybe they need a Bell-style breakup
Zuck doesn't care about anything healthy if that healthy content reduces ad revenue and/or user activity (MAU/DAU) metrics. Basically, he wants to extract enough time/money from each user while keeping the experience just bearable enough that they do not leave the site in disgust. Once you realize this cardinal truth about FB, all the reprehensible actions from Zuck and senior leaders make perfect sense.
I like the line of thinking, but who actually provides the agent, and what are their incentives?
This is far from a perfect analogy, but compare it to the problem of email spam. People first tried to fight it with client-side Bayes keyword filters. It turns out it wasn't nearly as simple as that, and to solve a problem that complicated, you basically need people working on it full time to keep pace.
Ranking and filtering a Facebook feed would have different challenges, of course. It's not all about adversaries (though there are some); it's also about modeling what you find interesting or important. But that's pretty complicated too. Your one friend shared a woodworking project and your other friend shared travel photos. Which one(s) of those are you interested in? And when someone posts political stuff, is that something you find interesting, or is it something you prefer to keep separate from Facebook? There are a lot of different types of things people post, so the scope of figuring out what's important is pretty big.
"Holochain apps are versatile, resilient, scalable, and thousands of times more efficient than blockchain (no token or mining required). The purpose of Holochain is to enable humans to interact with each other by mutual-consent to a shared set of rules, without relying on any authority to dictate or unilaterally change those rules. Peer-to-peer interaction means you own and control your data, with no intermediary (e.g., Google, Facebook, Uber) collecting, selling, or losing it.
Data ownership also enables new frontiers of user agency, letting you do more with your data (imagine truly personal A.I., whose purpose is to serve you, rather than the corporation that created it). With the user at the center, composable and customizable applications become possible."
I am thinking about the concept of “the last mile to user’s attention”.
Currently, software clients of Mastodon or Twitter hold that mile. Mastodon gives all content unfiltered, which could be too much at times, while Twitter does some oft-annoying opaque black magic in its timeline algorithms.
A better solution would be a protocol for a capability that filters content with logic under your control: a universal middleware standard that is GUI-agnostic and can fit different content types (a rough sketch is below).
By adopting this, open/federated social could start catching up to for-profit social on content-filtering features (in a no-dark-patterns way, benefitting user experience), hopefully stealing users.
Ideally it could be used by the likes of Twitter and Facebook - of course, given the size of for-profit social, such an integration would take some unimaginably big player to motivate them to adopt it (the state of their APIs is telling), but if it's there, there's a chance.
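That rough sketch, assuming a minimal shape for the contract (the names and interface are invented here, not an existing standard):

    # A GUI-agnostic filtering middleware: any client (Mastodon app, Twitter
    # frontend, RSS reader) feeds its timeline through a chain of filters the
    # user controls, before anything is displayed.
    from typing import Iterable, Optional, Protocol

    class ContentItem(Protocol):
        id: str
        author: str
        kind: str    # "post", "boost", "reply", ...
        text: str

    class Filter(Protocol):
        def apply(self, item: ContentItem) -> Optional[ContentItem]:
            """Return the item (possibly annotated) to keep it, or None to drop it."""

    def run_pipeline(items: Iterable[ContentItem], filters: list[Filter]) -> list[ContentItem]:
        kept = []
        for item in items:
            for f in filters:
                item = f.apply(item)
                if item is None:
                    break
            if item is not None:
                kept.append(item)
        return kept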
Excellent idea, soon this will be a requirement for using the web in any productive way, considering the ratio of good information to junk info is getting worse rapidly. We already do this in a way; only visiting certain sites that we like and following certain users. A personal AI would make this process much more efficient.
I do see a content filtering AI as very difficult to achieve, and I don't think it will be possible for quite some time. There are so many small problems, even getting AI to recognize targeted content is difficult, given that websites can have infinitely different layouts. And what about video or audio? The most practical way to achieve a content AI would be to persuade websites to voluntarily add standardized tags so that the only problem becomes predicting and filtering. Although I could see some issues with that like people trying to game the system.
I agree - wasn't the browser intended to be the user agent? And, as a counterpoint to some of the replies to you, surely people can just pay instead of sites being ad-based; what other industries operate in this absurd way? The public must think there's no cost to creating software if everything's always free.
> What if, instead, you had a personal AI that read every Facebook post and then decided what to show you. Trained on your own preferences, under your control, with whatever settings you like.
That would be great. Having an artificial intelligence as a user agent would be perfect. That'd be the ideal browser. So many science fiction worlds have the concept of an intelligent navigator who acts on behalf of its operator in the virtual world, greatly reducing its complexity.
Today's artificial intelligences cannot be trusted to act on our best interests. They belong to companies and run on their computers. Even if the software's open source, the data needed to make it useful remains proprietary.
It’s really not as sophisticated, but these guys[1] created an extension that in addition to their main objective of analyzing Facebook’s algorithm also offers a way to create your own Facebook feed. If I got it right, they analyze posts their users see, categorize them by topic and then let you create your own RSS feed with only the topics you want to see.
It’s not clear to me whether you may see posts collected by other users or only ones from your own feed and it seems highly experimental.
> What if, instead, you had a personal AI that read every Facebook post and then decided what to show you. Trained on your own preferences, under your control, with whatever settings you like.
There is a feedback problem, though, which is that your preferences are modified by what you see. So the AI problem devolves to showing you the kind of content that makes you want to see more of it, i.e. maximize engagement. I think a lot of people are addicted to controversy, "rage porn," anger-inducing content, and these agents are not going to help with this issue.
If we could train AI agents to analyze the preferences of people, I think the best use for them wouldn't be to curate your own content, but to use them to see the world from other people's perspective. If you know in what "opinion cluster" someone lies and can predict their emotional reaction to some content, you may be able to identify the articles from cluster A that people from cluster B react the least negatively to, and vice versa. And this could be leveraged to break echo chambers, I think: imagine that article X is rated +10 by cluster A and -10 by cluster B, and article Y is rated +10 by cluster A but only -2 by cluster B. It might be a good idea to promote Y over X, because unlike X, Y represents the views of cluster A in a way that cluster B can understand, whereas X is probably some inflammatory rag.
The key is that you can't simply choose content according to a user's current preferences, they also have to be shown adversarial content so that they have all the information they need about what others think. This is how they can retain their agency. Show them stuff they disagree with, but that they can respect.
I expect that a system like the one I'm describing would naturally penalize content that paint people with opposing points of view as evil or idiots, because such content is the most likely to be very highly rated by the "smart" side and profoundly hated by the "stupid" side. Again, note that I'm not saying content everyone likes should be promoted, it's more like, we should promote the +10/-2 kind of polarization (well thought out opinion pieces that focus on ideas which might be unpopular or uncomfortable) over the +10/-10 kind of polarization (people who disagree with me are evil cretins).
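A toy version of that promotion rule, with completely made-up numbers, just to show the scoring idea:

    # Promote the +10/-2 kind of polarization over the +10/-10 kind: reward
    # strong appeal to one cluster, but weight the other cluster's rejection
    # heavily, so "the other side is evil" content sinks.
    def bridge_score(rating_a: float, rating_b: float) -> float:
        best, worst = max(rating_a, rating_b), min(rating_a, rating_b)
        return best + 2.0 * worst    # -10 from the other side hurts far more than -2

    articles = {"X": (10, -10), "Y": (10, -2)}
    ranked = sorted(articles, key=lambda k: bridge_score(*articles[k]), reverse=True)
    print(ranked)    # ['Y', 'X'] -- Y is promoted over X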
In the right medium, perhaps the user agent would also decide when my posts are shown to people versus an ad being shown in place of my post such that I make money. Then a site like Facebook would only make a small portion of my ad revenue in exchange for hosting it.
Sure, you can't read every facebook post, but if your browser extension is scanning your feed and suppressing posts for you, how can they even stop you?
This already exists — most social media is already curated. You only see tweets and posts from those you follow or friend. You can already block or ignore any undesirables. This works fine for self-curation.
There is no need for holier-than-thou censorship short of legal breaches. Good to see FB take this change of direction.
I really didn't realize until perhaps the last 2 years that Facebook fundamentally tapped some hidden human need/instinct to argue with people who they believe are incorrect. Specifically, and more importantly, combined with the human inability to actively decide to not pay attention when things are inconsequential or not yet worth arguing about.
Sometimes, just shutting up about an issue and not discussing it is the best thing for a group to do. Not more advocacy or argument. Time heals many things. No app is going to help you take that approach -- and that's not what technology is going to help solve (or is incentivized to solve). Just like telling a TV station that's on 24 hours to not cover a small house fire when there's no other news.
People are not good at disengaging from something when that's the right thing to calm the situation. And Facebook somehow tapped into that human behavior and (inadvertently or purposefully) fueled so many things that have caused our country (and others) to get derailed from actual progress.
There is no vaccine yet for this.
And not to dump on the Facebook train, since others would have come to do it instead. But they sure made a science and business of it.
In general and not necessarily related to just facebook, but one of the best things I've come to learn about myself and the world around me is that sometimes the absolute _best_ thing you can do for yourself is to just shut up and walk away, even if you know in your heart of hearts that you are correct.
I think this is generally helpful to keep in mind.
I also think there’s an art to deescalation and discussing ideas or persuading someone you disagree with to see an alternative view (and then giving them space to change their mind).
Productive discussion isn’t possible with everyone or even one individual depending on where they are in their life, but I’ve generally found it works better than expected when you can remove your own identity and feelings from it.
It’s rarely in the spotlight though because it doesn’t get retweeted or shared as much as combative arguing that’s more a performance from each side (with likes and cheering on the sidelines).
I learned this lesson recently, and I am sure I will continue to learn this lesson in the future as I've already learned it in the past. Just in many different contexts and ways!
I agree, as long as you can find time to research and understand if you’re really correct. This way you avoid conflicts but you still learn if you were wrong.
I call this the "outrage economy". There are several companies (facebook, twitter, reddit, youtube, etc) that grew based on user activity of varying types. The more bickering and polarization, the bigger X Company gets and need to hire more employees and get more funding, and that feeds into more growth. There is also a secondary economy built on or used by these original companies (software tooling, ad software, legal, clickbait, etc). We now have a big chunk of the economy feeding pointless bickering.
> some hidden human need/instinct to argue with people who they believe are incorrect
This is perhaps a form of "folk activism" [1]:
> In early human tribes, there were few enough people in each social structure such that anyone could change policy. If you didn’t like how the buffalo meat got divvied up, you could propose an alternative, build a coalition around it, and actually make it happen. Success required the agreement of tens of allies — yet those same instincts now drive our actions when success requires the agreement of tens of millions. When we read in the evening paper that we’re footing the bill for another bailout, we react by complaining to our friends, suggesting alternatives, and trying to build coalitions for reform. This primal behavior is as good a guide for how to effectively reform modern political systems as our instinctive taste for sugar and fat is for how to eat nutritiously.
Facebook is a collection of your friends or your "tribe", so repeated arguments with your tribe members is what our unconscious brain pushes us towards. That coupled with the dopamine hit of validation via likes (which is common to other online discussion platforms).
I really don't like the "it can't be helped" attitude about what Facebook has become.
They made a choice to throw gasoline on the flames of these aspects of human behavior. Few people seem to realize that Facebook could have been a force for good, if they had made different choices or had more integrity when it comes to the design and vision of their platform.
The way that things happened is not the only possible way they could have happened, and resigning ourselves to the current state as "inevitable", to me, reeks of an incredible lack of imagination.
I am not sure I can agree. Facebook did not change in any significant way. It still serves as a platform to boost your message. It is, at best, simply a reflection of the human condition. The previous example was the internet and some of the revelations it brought about us as a species. FB just focused it as much as it could.
A force for good. I do not want to sound dismissive, but what, in your vision, would that look like? This is a real question.
> I really didn't realize until perhaps the last 2 years that Facebook fundamentally tapped some hidden human need/instinct to argue with people who they believe are incorrect.
I think everyone has a natural human need to feel that they have agency in their community. The need to feel that they participate in the culture that surrounds them and that they can have some effect on the groups that they are members of. The alternative is being a powerless pawn subject to the whims of the herd.
In the US, I think most people lost this feeling with the rise of suburbia, broadcast television, and consumer culture. There are almost no public spheres in the US, no real commons where people come together and participate. The only groups many people are "part" of are really just shows and products that they consume.
Social media tapped into that void. It gave them a place to not just hear but to speak. Or, at least, it gave them the illusion of it. But, really, since everyone wants to feel they have more agency, everyone is trying to change everyone else but no one wants to be changed. And all of this is mostly decoupled from any real mechanism for actual societal change, so it's become just angry shouting into the void.
I think it’s important to note that Facebook didn’t invent any of this. They just built the biggest mainstream distribution channel to do so. Nothing they ever did in terms of facilitating pointless arguments has been all that original either.
People have been doing this forever, and even on the Web much, much longer than Facebook has existed.
Now that said, they know what they have on their hands and how it makes them the money. They aren’t going to fix it. It is a big feature of their product.
> I think it’s important to note that Facebook didn’t invent any of this
I think that’s literally true. They told their algorithm “maximise the time people spend on Facebook” and it discovered for itself that sowing strife and discord did that.
Facebook’s crime is that when this became obvious they doubled down on it, because ads.
Facebook, and others, absolutely innovated with their recommendation engines. Enabled by implementing the most detailed user profiling to date coupled with machine learning.
Part of this is that Facebook makes the opinions of people you know but don't really care about highly visible, which I think leads to some of the animosity you see on the platform. When the person you're confronting is the uncle of someone you talked to once back in high school, there's little incentive to be kind.
> I think it’s important to note that Facebook didn’t invent any of this.
I don't agree with that. I very strongly think that Facebook did invent a lot of this.
> They just built the biggest mainstream distribution channel to do so
Scale does matter though. There is a lot in life that is legal or moral at small scale but illegal or immoral at large scale. Doing things at scale does change the nature of what you are doing. There's no 'just' to be had there.
> Nothing they ever did in terms of facilitating pointless arguments has been all that original either.
I don't agree with that either. They have even published scientific papers, peer-reviewed, to explain their new and novel methods of creating emotionally manipulative content and algorithms.
> People have been doing this forever, and even on the Web much, much longer than Facebook has existed.
I also don't agree with this. Facebook has spent 10+ years inventing new ways to rile people up. This stuff is new. Yes I know newspapers publish things that are twisted up etc, but that's different, clearly. The readers of the paper are not shouting at each other as they read it.
I think it's super dangerous to take this new kind of mass-surveillance and mass-scale manipulation and say, welp, nothing new here, who cares? I think that's extremely dangerous. It opens populations to apathy and lets corporations do illegal and immoral things to gain unfair and illegal power.
Facebook should not be legally allowed to do all the things they are doing. It's invasive, immoral, and novel, the way they deceive and manipulate society at large.
That's an interesting thought, for sure. I should point out that this doesn't only apply to facebook, but other large discussion forums as well: reddit, 4chan, tumblr, twitter etc.
> some hidden human need/instinct to argue with people who they believe are incorrect
I've said it before, I'll probably say it again: this place is chock full of people just itching to tell you you're wrong and why. Don't get me wrong: obviously there's also a hell of a lot of great discussion and insightful technical knowhow being shared by real experts — but in my experience I also do have to wade through quite a lot of what feels like knee-jerk pedantry and point-scoring.
Extremely true, also relevant for work disagreements between people who have existing positive relationships. A surprising number of disagreements disappear if left on their own for a time.
I find that many people with engineering backgrounds (myself included) can struggle letting conflicts sit unresolved. I suspect that instincts learned debugging code get ported over to interpersonal issues, as code bugs almost never disappear if simply left to rest.
Close your eyes, hold your breath and hope the situation resolves itself, that's your solution? I don't believe in a: "hidden human need/instinct to argue with people". There is nothing hidden about human conflict. It is as natural as any conflict; as natural as space and time. In fact, without conflict evolution can not exist. Obviously, a good portion of the arguments being had have the potential of bearing no fruit, but I would argue that just as many of them not only should but NEED to be had, and are quite productive on the whole.
I actually really enjoy having a good argument with random people online, but I don't as much enjoy arguing with my friends and family. 1) I don't like being mad at or contemptuous of people I'm close to and 2) they're usually not worth the effort of arguing with because they're just cutting and pasting stupid shit they found elsewhere and it's _exhausting_ to continuously correct the record when they put zero effort into copy and pasting it to begin with.
I first purged everyone that posted that stuff from my feed, and then eventually quit facebook altogether.
> I really didn't realize until perhaps the last 2 years that Facebook fundamentally tapped some hidden human need/instinct to argue with people who they believe are incorrect.
Hidden human need/instinct to argue, period. These arguments aren't intellectual debates, it's people getting pissed off at something, and venting their rage towards the other side.
It's odd how addictive rage can be. But that's not a new phenomenon. Tabloids have been exploiting this for decades before Facebook.
Most of my facebook feed is just memes and selfies.
(I'm venezuelan)
When facebook was new/trending up years ago there were some political discussions but people quickly figured out it was worthless, how come USAians haven't?
In my line of work, we need to dig deep and find the root cause of anything we 'touch'. I have noticed (since day 1 in this line of work) that elaborate, complex truths tire the audience; they want something snappy and 'sexy'. I remember a French C-suite exec telling me "make it sexy, you will lose them".
Facebook managed to get this just right: lightweight, sexy (in the sense of attractive), easy to believe, easy to understand, easy to spread. The word "true" is completely absent from the above statement. That generates clicks. That keeps users logged in more. That increases "engagement". That increases ad revenue. Game over.
The masterminds/brilliant communicators could never before get so many eyeballs and ears tuned in at such a low cost.
I've mentioned before that FB = cancer
It gives 1 (ability to communicate) and it takes 100.
Sometimes, just shutting up about an issue and not discussing it is the best thing for a group to do.
Then the terrorists win.
That used to be the conventional wisdom on trolls, but there are now so many of them. Worse, about half are bots.[1] (Both NPR and Fox News have that story, so it's probably correct.)
> And Facebook somehow tapped into that human behavior and (inadvertently or purposefully)
It's not just that they tapped into it, it's the entire mission statement in a sense. 'To connect the world', if you want to treat it like a sort of network science, basically means to lower the distance between individuals so much that you've reduced the whole world to a small-world network. There's no inhibition in this system; it's like an organism on every stimulant you can imagine.
Everything spreads too fast and there's no authority to shut anything down that breaks, so the result is pretty much unmitigated chaos.
The vaccine is the thing people complain about all the time, the much maligned 'filter bubbles', which is really just to say splitting these networks into groups that can actually work productively together and keeping them away from others that make them want to bash their heads in.
People do go on Facebook and argue with others, but that's not the core of the divisiveness. Rather, people sort themselves into opposing groups and spend most of their time talking amongst themselves about how good they are and how horrible the other group is.
You are on to something. I interpret it as: it is fantastically fun/addictive/a dopamine short-term win to argue or discuss with someone. Especially if you can afford the hangover that outrage might lead to.
Face to face with people I know or at least recognize as human, not a bot, educated or at least not cartoon-hick-personality - Arguments can be great, because of the ability to see when to pull back and stop something from escalating. We are all human after all.
In internet-powered discussion, where numbers of people observing can be huge, and every username can feel inhuman or maybe even just trolling in an attempt to create a stupid argument - that Argument gets painful. But the dopamine hit is still there...
Given our current (social) media ecosystem, converting outrage into profit (per Chomsky, McLuhan, Postman, and many, many others), what does a non-outrage maximizing strategy look like?
I currently favor a slower, calmer discourse. A la both Kahneman's thinking fast vs slow, and McLuhan's hot vs cold metaphors.
That means breaking or slowing the feedback loops, removing some of the urgency and heat of convos.
Some possible implementation details:
- emphasis on manual moderation, like metafilter, and dang here on HN
- waiting periods for replies. or continue allowing submissions but delay their publication. or treat all posts as drafts with a hold period. or HN style throttling. or...?
- only friends can reply publicly.
- hide "likes"
- do something about bots. allow aliases, but all accounts need verified real names or ownership.
Sorry, these are just some of the misc proposals I remember. I should probably have been cataloguing them.
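To make the hold-period idea a bit more concrete, here's a rough Python sketch of one way it could work; the class names, delay constants, and "heat" signal are all invented for illustration, not taken from any real platform:

```python
import time
from dataclasses import dataclass, field

# Hypothetical sketch: replies are accepted immediately but only published
# after a cooling-off delay that grows with how "heated" the thread is.

@dataclass
class PendingReply:
    author: str
    text: str
    submitted_at: float
    publish_after: float

@dataclass
class Thread:
    heat: float = 0.0              # e.g. rate of recent angry reactions/reports
    pending: list = field(default_factory=list)
    published: list = field(default_factory=list)

BASE_DELAY = 60          # seconds every reply waits (illustrative)
HEAT_MULTIPLIER = 600    # extra seconds per unit of thread "heat" (illustrative)

def submit_reply(thread: Thread, author: str, text: str, now: float) -> None:
    delay = BASE_DELAY + HEAT_MULTIPLIER * thread.heat
    thread.pending.append(PendingReply(author, text, now, now + delay))

def publish_due_replies(thread: Thread, now: float) -> None:
    still_waiting = []
    for reply in thread.pending:
        if now >= reply.publish_after:
            thread.published.append(reply)
        else:
            still_waiting.append(reply)
    thread.pending = still_waiting

# Usage: a calm thread publishes after about a minute; a heated one holds longer.
t = Thread(heat=0.5)
submit_reply(t, "alice", "I disagree, and here's why...", now=time.time())
publish_due_replies(t, now=time.time() + BASE_DELAY + 1)  # still held in pending
```

The point of the sketch is only that the feedback loop can be slowed with very little machinery; whether a platform would ever accept the engagement hit is the open question.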
I can't see that working for anything other than niche networks. A social network will make less money doing this, so what's their incentive? The bulk of people will stick with a network that gives them constant and instant feeding of their addiction. I think the "vaccine" would need to be broader to be effective or major networks would need to grow a serious conscience.
I had the same realization recently, and deleted my Twitter account in favor of a new one where I only follow people I know in real life.
That worked great for a couple of weeks, but now I log on Twitter and half of my feed is tweets of people I don't know or follow, with the worst, most infuriatingly stupid hot takes. No wonder they have literally hundreds of thousands of likes. The platform is built around this "content".
> I really didn't realize until perhaps the last 2 years that Facebook fundamentally tapped some hidden human need/instinct to argue with people who they believe are incorrect.
Funny, years ago, around the Aurora shooting in Colorado, it was Facebook that made me recognize this behaviour in myself.
A lot of people here are saying they will write responses then wait before posting.
Could this be part of the solution? If a discussion is getting particularly heated, put responses on a time delay. Maybe even put the account on a general delay for engaging with heated subjects, so the outrage doesn't crop up elsewhere.
Of course this would decrease engagement. It might even push users to more permissive platforms.
Yeah. There's a lot of relief in letting go, accepting that other people are outside of your power to control, and just practicing acceptance no matter how wrong or annoying or stupid you think people are being.
If you realize it’s a dumpster fire then delete your account and move on with life. If that line of thinking is a challenge in absolutely any way the problem is addiction.
It's been 5 years now. Facebook ads used to be creepy, but polite: "Come back to Facebook"
But not anymore, black text on a white background:
"Go To Facebook"
next ad:
"See What You've Missed From Friends And Family"
Kill it with fire! The only advice I have about the company and its products.
I am a progressive, so liberal I verge on socialist, and I think one of the "Left's" great flaws in the US is its inability to walk away, to just ignore. Engaging vociferously is seen almost as a moral imperative; "we must fight evil wherever we find it" sort of thing. But all that does is bring attention and a form of validation to the more lunatic attempts to enrage them. They aren't accomplishing a single thing by getting so righteously angry over statements the speakers probably aren't even making in good faith.
You can even see it in the memes. It's the right that loves "trolling libs," and the left that's taking the bait. I think it's telling that the stereotype hardly ever seems to go the other way; you almost never see people talk about liberals "trolling reps."
You really don't need to engage with everything in order to be a good activist. In fact, I believe taking time and emotional energy to do so is actually being a bad activist. You're just wasting effort, nothing you say or do will change anyone's minds because mostly, the whole reason they're saying whatever it is is specifically to make you upset. To trap you into unwinnable arguments just to laugh at how heated you get.
Really we all need to be better at just walking away from crazy, whatever side of whatever spectrum we find it. By regularly surrounding yourself with such conflicts and by regularly basting yourself in such a soup of intense negativity, you are quite literally doing nothing more than causing physical harm to your body and mind via the morass of cortisol, etc. you are unleashing. You are accomplishing nothing.
I agree that Facebook makes this painfully easy, although Twitter and Reddit are right there as well.
Disclaimer: I don't agree with your conclusions regarding general human needs and instincts, or that a human possesses absolute/built-in/DNA-engrained inabilities (therefore, I do believe a human can fly). I also don't agree that Facebook is to share much of the blame for the chaotic human zeitgeist present today.
I do believe a human is highly malleable and impressionable, and that these qualities have been exploited historically at various scales for various reasons.
"There is no vaccine yet for this."
There may not be any vaccine, but there may be a cure: changing the language used to communicate within a setting/platform such as Facebook, possibly by using a subset of the language previously used or by adopting a more formal construct.
But Facebook is a virtual neighborhood, with greatly increased bandwidth and range. It is difficult or impossible to achieve that in such a setting.
I don't personally think it's productive for me to engage with these kind of people but I will definitely support and cheer on others doing so: https://www.youtube.com/watch?v=Q65aYK0AoMc (NSFW content)
(Personally I get too wound up in internet arguments and it's just not a healthy space for my head to be in)
They tapped into that human behaviour "somehow" in the same way that HN "somehow" doesn't have an orangered envelope when somebody replies to your comments. It's by design and not by coincidence.
There are plenty of vaccines for this, but not in the sense that you can apply it to people by force, like you can apply a vaccine to babies.
Meditation, yoga, religions, sports - there are many ways to calm the mind.
Here's the paragraph I found most damning. It would make me want to assign liability to Facebook.
> The high number of extremist groups was concerning, the presentation says. Worse was Facebook’s realization that its algorithms were responsible for their growth. The 2016 presentation states that “64% of all extremist group joins are due to our recommendation tools” and that most of the activity came from the platform’s “Groups You Should Join” and “Discover” algorithms: “Our recommendation systems grow the problem.”
> Facebook's mission is to give people the power to build community and bring the world closer together. People use Facebook to stay connected with friends and family, to discover what's going on in the world, and to share and express what matters to them.
Encouraging group communication is the primary goal, regardless of the consequences.
It’s one thing to enable people to seek out extremist communities on their own. It’s quite another to build recommendation systems that push people towards these communities. That’s putting a thumb on the scale and that’s entirely Facebook’s doing.
This is one example, and it’s quite possibly a poor example as it is a partisan example, but Reddit allows The_Donald subreddit to remain open, but it has been delisted from search, the front page, and Reddit’s recommendation systems.
It sounds like an honorable goal, doesn't it? But when you build a community that becomes simply a place for shared anger, you allow that anger to be amplified and seem more legitimate.
I thought the most interesting part was Mark asking not to be bothered with these types of issues in the future. By saying do it, but cut it 80%, he sounds like he wants to be able to say he made the decision to "reduce" extremism, but without really making a change.
Hey, Facebook VP here (I work on this). We’ve made some meaningful changes to address this since 2016. We’ve strengthened our enforcement in groups and have been actively working on our recommendation tools as well, for example removing groups with extremist content that violates our policies, from recommendations.
Of course, it's hard to assign blame without looking at how "extremist groups" are defined and at whether the recommendation tools do good as well as harm.
The problem really is platforms that give people content to please them. An algorithm selects content that you are likely to agree with or that you have shown previous interest in. This only causes people to get reinforced in their beliefs, and that leads to polarization.
For example, when I browse videos on Youtube I will only get democratic content (even though I am from Poland). It seems as soon as you click on a couple of entries you get classified, and from then on you will only be shown videos that are agreeable to you. That means lots of Stephen Colbert and no Fox News.
My friend is deeply republican and she will not see any democratic content when she gets suggestions.
The problem runs so deep that it is difficult to find new things even if I want to. I maintain another browser where I am logged off to get a more varied selection, and not just the couple of topics I have been interested in recently.
My point of view on this: this is a disaster of gigantic proportions. People need to be exposed to conflicting views to be able to make their own decisions.
Sorry for the self-reference outside of a moderation context, but I wrote what turned into an entire essay about this last night: https://news.ycombinator.com/item?id=23308098. It's about how this plays out specifically on HN.
Short version: it's because this place is less divisive that it feels more divisive. HN is probably the least divisive community of its size and scope on the internet (if there are others, I'd like to know which they are), and precisely because of this, many people feel that it's among the most divisive. The solution to the paradox is that HN is the rare case of a large(ish) community that keeps itself in one piece instead of breaking into shards or silos. If that's true, then although we haven't yet realized it, the HN community is on the leading edge of the opportunity to learn to be different with one another, at least on the internet.
The thing is that HN is essentially run like singapore - a benign-seeming authoritarian dictatorship that shuts down conflicts early and is also relatively small and self-contained. One thing that doesn't get measured in this analysis is the number of people who leave because they find that this gives rise to a somewhat toxic environment, as malign actors can make hurtful remarks but complaints about them are often suppressed. Of course, it tends to average out over time and people of opposite political persuasions may both feel their views are somewhat suppressed, but this largely reactive approach is easily gamed as long as it's done patiently.
This is why I like HN. I am always challenged with different points of view on here, and in a non-argumentative way. It's just a rational discussion. Often I will see something on FB or Twitter that is outrageous to me (by design), but when I look it up on HN and find some discussion on the details, truth is often more sane than it seems...
One of my theories about the success of HN is that we are grouped together based on one set of topics (on which we largely agree), but we discuss other topics over which we are just as divided as the general public.
I believe there is an anchoring effect -- if you are just in a discussion where someone helps you understand the RISC-V memory model, it feels wrong to go into another thread on the same site and unload a string of epithets on someone who feels differently than you do about how doctors should get paid.
First of all, a less divisive environment means you interact with people of different opinions, which means that few interactions will be with exactly like-minded people.
Environments where all people tend to think exactly the same are typically extremist in some way, resulting from some kind of polarization process that eliminates people that don't express opinion at the extreme of spectrum. They are either removed forcibly or remove themselves when they get dissatisfied.
One way HN stays away from this polarization process is because of the discussion topics and the kind of person that typically enjoys these discussions. Staying away from mainstream politics, religion, etc. and focusing mainly on technological trivia means people of very different opinions can stay civilized discussing non-divisive topics.
Also it helps that extremist and uncivilized opinions tend to be quickly suppressed by the community thanks to vote-supported tradition. I have been reading HN from very close to the start (even though I created the account much later). I think the first users were much more VC/development oriented, and as new users came they tended to observe and conform to the tradition.
(I read your piece. I think I figured it out. The users actually select themselves on HN, though in a different way. The people who can't cope with a diverse community can't find a place for themselves, because there is no way to block diverse opinion, and in effect they remove themselves from here, and this is what allows HN to survive. The initial conditions were people who actually invited diverse opinion, which allowed this equilibrium.)
I agree with you but this is an incredibly hard problem to solve. How are you going to get your friend to engage with videos that are in direct opposition to her world views? Recommendations are based on what she actually clicks on, how long she actually watches the videos, etc.
And from the business perspective, they're trying to reduce the likelihood that your friend abandons their platform and goes to another one that she feels is more "built for her".
A start would be to recognize that businesses are not allowed to exploit this aspect of human nature because the harm is too great to justify business opportunity.
It's easy to solve. FB gets to either be a platform for content or a curator for content. They can't be both because that would be a conflict of interest.
I think that is not quite right, but the distinction is subtle. The algorithm selects the content that you are most likely to be engaged with. For most people likely that is the filter bubble, and seeing only what they agree with. But for some folks, they actively like to have debates (or troll one another) and see more content they will not agree with, because what they don't agree with gets more engagement. The intent is to keep you engaged and active as long as possible on the site, and feed whatever drives that behavior.
This isn't necessarily bad all the time. But when content is used to form opinions on real world things that actually Matter, it definitely becomes a problem.
In other words, Steam, please filter games by my engagement in previous games I've played. News organizations, please don't filter news by my engagement in previous news.
Facebook's problem is it acts in two worlds: keeping up with your friends, and learning important information. If all you did was keep up with your friends' lives, filtering content by engagement is kind of meh.
Same with youtube. I mostly spend all my time on there watching technical talks and video game related stuff. It's pure entertainment. So filtering content is fine. But if I also used it to get my news, you start to run into problems.
That is a really annoying issue I have with YouTube.
I occasionally watch some of the Joe Rogan podcast videos when he has a guest I'm interested in. I swear, as soon as I watch one JRE video, I am suddenly inundated with suggestions for videos with really click-baity and highly politicized topics.
I've actually gotten to the point where I actively avoid videos that I want to watch because I know what kind of a response YouTube will have. Either that or I open them in incognito mode. It's a shame. I wish I could just explicitly define my interests rather than YT trying to guess what I want to watch.
This is the exact same behavior I have noticed from YouTube as well. I miss the "old" YouTube around 2011, when it was a terrific place to discover new and interesting videos. If I watched a video on mountain biking, let's say, then the list of suggested videos all revolved around that topic. But in today's YouTube, the suggested content for the same mountain biking video is all unrelated, often extremely polarizing, political content. I actually can NO LONGER discover new interesting content on YouTube. Like you say, it automatically categorizes you based on the very first few videos and that's all you see from there on out. That is why I have now configured my browser to block all cookies from YouTube. I'm annoyed that I can no longer enjoy YouTube logged in, but at least now I feel like I've gotten back that "old" YouTube of what it once was. It's a whole lot less polarizing now, I feel much better as a result of it, and the suggestions are significantly improved.
Exactly. I remember clicking on the homepage to get a selection of new, interesting videos. Now I just get exactly the same thing every time I click. Useless. I would like to discover new topics, not get a rehash of the same ones.
In the case of Facebook they absolutely do not try to please me. They quite literally try to do the exact opposite of everything I would like from my feed.
Chronological ordering, with the ability to easily filter who I see and who I post to. On each point the capability has either been removed, hidden, or made worse in some other creative way.
Adding insult to injury, having to periodically figure out where they've now hidden the save button for events, or some other feature they don't want me to use is always a 'fun' exercise.
It doesn't address all of those, but if you visit https://www.youtube.com/feed/subscriptions it looks like it's still just a reverse chronological list of videos from your subscriptions.
What really scares me is how many people I know who acknowledge that platforms like Facebook and YouTube are designed to create echo chambers which tend to distort people's opinions and perceptions towards extremes... but still actively engage with them without taking any precautions. They know it's bad for them, but they keep going back for more.
Having awareness probably means they can engage in a meaningful way. Some degree of maturity and critical thought are required to dam up invaluable media. It's something akin to junk food; junk media.
Same goes for non-political content. I often have to log out of youtube to find something new and interesting (even though I have hundreds of subscriptions).
Interesting. The diff appears to be (a) they changed the headline from "Facebook Knows It Encourages Division. Top Executives Nixed Solutions." to "Facebook Executives Shut Down Efforts to Make the Site Less Divisive", and (b) they inserted a video most of the way down the article, captioned "In a speech at Georgetown University, Mark Zuckerberg discussed the ways Facebook has tightened controls on who can run political ads while still preserving his commitment to freedom of speech."
Wow, Cloudflare's 1.1.1.1 DNS server sets up a man-in-the-middle (broken cert gives it away) and serves a 403 Forbidden page when clicking on this link. Verified that 8.8.8.8 works fine.
I will make a parenthetical point that the WSJ, while expensive to subscribe to, is a very high quality news source and worth paying for if it's in your budget. There are discounts to be found on various sites. And god knows their newsroom needs all the subscribers it can get (just like NYT, etc) to stay independent of their not-so-objective, opinion-page-leaning business model (the two are highly separated). Luckily they have a lot of business subscribers who keep them afloat, but I decided to subscribe years ago and never regretted it.
Every platform ultimately makes choices in how users engage with it, whether that goal is to drive up engagement, ad revenues or whatever metric is relevant to them. My general read is that Facebook tries to message that they're "neutral" arbiters and passive observers of whatever happens on their platform. But they aren't, certainly not in effect, and possibly in intent either. To preserve existing algorithms is not by definition fair and neutral!
And in this instance, choosing not to respond to what its internal researchers found is, ultimately, a choice they've made. In theory, it's on us as users and consumers to vote with our attention and time spent. But given the society-wide effects of a platform that a large chunk of humanity uses, it's not clear to me that these are merely private choices; these private choices by FB executives affect the commonweal.
It's pretty laughable for Facebook to claim they're neutral when they performed and published[1] research about how tweaking their algorithm can affect the mood of their users.
Even if they hadn't done that, it would still be a laughable claim prima facie.
There's something of an analogue to the observer effect: that the mere observation of a phenomenon changes the phenomenon.
Facebook can be viewed as an instrument for observing the world around us. But it is one that, through being used by millions of people and personalizing/ranking/filtering/aggregating, effects change on the world.
Or to be a little more precise, it structures the way that its users affect the world. Which is something of a distinction without much difference, consequentially.
If the private platform is de facto the primary source of news for the majority of the population, this affects the public in incredible ways. I don’t understand how the US Congress does not recognize and regulate this.
“It is difficult to get a man to understand something, when his [campaign fundraising] depends on his not understanding it.” - Upton Sinclair (lightly adapted)
Consider the following model scenario. You are a PM at a discussion board startup in Elbonia. There are too many discussions at any given time, so you personalize the list for each user, showing only discussions she is more likely to interact with (it's a crude indication of user interest, but it's tough to measure it accurately).
One day, your brilliant data scientist trained a model that predicts which of the two Elbonian parties a user most likely support, as well as whether a comment/article discusses a political topic or not. Then a user researcher made a striking discovery: supporters of party A interact more strongly with posts about party B, and vice versa. A proposal is made to artificially reduce the prevalence of opposing party posts in someone's feed.
Would you support this proposal as a PM? Why or why not?
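Just to make the trade-off concrete (not to endorse either answer), here's a toy Python version of the proposal; the field names, the engagement score, and the 0.5 down-weight are arbitrary stand-ins for the hypothetical models in the scenario:

```python
# Sketch of "artificially reduce the prevalence of opposing party posts":
# rank by predicted interaction, but penalize political posts from the
# party the user doesn't support. All values are illustrative.

def rank_feed(posts, user_party, downweight_opposing=True):
    def score(post):
        s = post["predicted_interaction"]          # hypothetical engagement model output
        if (downweight_opposing
                and post["is_political"]
                and post["party"] != user_party):
            s *= 0.5                               # arbitrary down-weight
        return s
    return sorted(posts, key=score, reverse=True)

posts = [
    {"id": 1, "predicted_interaction": 0.9, "is_political": True,  "party": "B"},
    {"id": 2, "predicted_interaction": 0.6, "is_political": True,  "party": "A"},
    {"id": 3, "predicted_interaction": 0.5, "is_political": False, "party": None},
]
print([p["id"] for p in rank_feed(posts, user_party="A")])  # [2, 3, 1]
```

The one-line multiplier is the whole intervention, which is exactly why the question is uncomfortable: the mechanism is trivial, the judgment call is not.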
That's beside the point, though. The point here is that Facebook executives were told by their own employees that the algorithms they designed were recommending more and more partisan content and de-prioritizing less partisan content because it wasn't as engaging. They were also told that this was potentially causing social issues. In response, Kaplan/FB executives said that changing the algorithm would be too paternalistic (ignoring, apparently, that an algorithm that silently filters without user knowledge or consent is already fundamentally "paternalistic"). Given that Facebook's objective is to "bring the world closer together", choosing to support an algorithm that drives engagement that actually causes division seems a betrayal of its stated goals.
Same. I miss the days of the chronological feed. Facebook's algorithms seem to choose a handful of people and groups I'm connected to and constantly show me their content and nothing else. It's always illuminating when I look someone up after wondering what happened to them only to see that they've been keeping up with Facebook, but I just don't see any of their posts.
I agree with this. I have a mildly addictive personality and found I had to block my newsfeed to keep myself (mostly) off facebook. I follow a couple of groups which are useful to me and basically nothing else.
I deleted all of my old posts to reduce the amount of content FB has to lure my friends into looking at ads. But because of the covid-19 pandemic I was using facebook again to keep in contact with people. Now that restrictions are eased in my country I can see people again, and have deleted my facebook posts.
No. Why should the only desirable metric be user engagement?
Is the goal of FB engagement/virality/time-on-site/revenue above all else? What does society have to gain, long term, by ranking a news feed by items most likely to provoke the strongest reaction? How does Facebook's long-term health look, 10 years from now, if it hastens the polarization and anti-intellectualism of society?
> Is the goal of FB engagement/virality/time-on-site/revenue above all else?
Strictly speaking, Facebook is a public company that exists only to serve its shareholders' interests. The goal of Facebook (as a public company) is to increase the stock price. That often, if not always, means prioritizing revenue over all else.
That's the dilemma.
Then again, I believe Mark has control of the board, right? (And therefore couldn't be ousted for prioritizing ethical business practices over revenue - I could be wrong about this)
This is a false choice. The real problem stems from the fact that the model rewards engagement at the cost of everything else.
Just tweaking one knob doesn't solve the problem. A real solution is required, that would likely change the core business model, and so no single PM would have the authority to actually fix it.
Fake news and polarization are two sides of the same coin.
I'd just suggest the data scientist was optimizing the wrong metrics. People might behave that way, but having frequent political arguments is a reason people stop using Facebook entirely. It's definitely one of the more common reasons people unfollow friends.
Very high levels of engagement seems to be a negative indicator for social sites. You don't want your users staying up to 2AM having arguments on your platform.
This is why the liberal arts are important, because you need someone in the room with enough knowledge of the world's history to be able to look at this and suggest that maybe given the terrible history of pseudo-scientifically sorting people into political categories, you should not pursue this tactic simply in order to make a buck off of it.
Agreed. Engineers have an ethical duty to the public. When working on software systems that touch on so many facets of people's lives, a thorough education in history, philosophy, and culture is necessary to make ethical engineering decisions. Or, failing that, the willingness to defer to those who do have that breadth of knowledge and expertise.
"The term is probably a shortening of “software engineer,” but its use betrays a secret: “Engineer” is an aspirational title in software development. Traditional engineers are regulated, certified, and subject to apprenticeship and continuing education. Engineering claims an explicit responsibility to public safety and reliability, even if it doesn’t always deliver.
The title “engineer” is cheapened by the tech industry."
"Engineers bear a burden to the public, and their specific expertise as designers and builders of bridges or buildings—or software—emanates from that responsibility. Only after answering this calling does an engineer build anything, whether bridges or buildings or software."
You don't need liberal arts majors in the boardroom, you need a military general in charge at the FTC and FCC.
Can we dispense with the idea that someone employed by facebook regardless of their number of history degrees has any damn influence on the structural issue here, which is that Facebook is a private company whose purpose is to mindlessly make as much money for their owners as they can?
The solution here isn't grabbing Mark and sitting him down in counselling, it's to have the sovereign, which is the US government, exercise the authority it has apparently forgotten how to use and rein these companies in.
You voluntarily put yourself in this position with no good way of fixing it. No one's forcing Facebook to do what they (and now you) do, eh?
My perception of reality is that you and your brilliant data scientist are (at best naive and unsuspecting) patronizing arrogant jerks who have no business making these decisions for your users.
You captured these peasants' minds, now you've got a tiger by the tail. The obvious thing to do is let go of the tiger and run like hell.
- User-configurable and interpretable: Enable tuning or re-ranking of results, ideally based on the ability to reweight model internals in a “fuzzy” way. As an example, see the last comment in my history about using convolutional filters on song spectrograms to distill hundreds of latent auditory features (e.g. Chinese, vocal triads, deep-housey). Imagine being able to directly recombine these features, generating a new set of recommendations dynamically. Almost all recommendation engines fail in this regard—the model feeds the user exactly what the model (designer) wants, no more and no less.
- Encourage serendipity: i.e. purposefully select and recommend items that the model “thinks” is outside the user’s wheelhouse (wheelhouse = whatever naturally emerging cluster(s) in the data that the user hangs out in, so pluck out examples from both nearby and distant clusters). This not only helps users break out of local minima, but is healthy for the data feedback loop.
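A rough Python sketch of those two ideas together, with invented feature names and random vectors standing in for real learned latent features:

```python
import numpy as np

# Illustrative only: items and users share a latent feature space, the user
# can nudge individual feature weights ("user-configurable"), and a fraction
# of slots is reserved for items outside the user's usual cluster ("serendipity").

rng = np.random.default_rng(0)
FEATURES = ["vocal", "deep-house", "acoustic", "spoken-word"]  # made-up names

item_vectors = rng.random((50, len(FEATURES)))   # latent features per item
user_vector = rng.random(len(FEATURES))          # learned user taste

def recommend(user_vec, items, feature_boosts=None, n=10, serendipity=0.2):
    weights = np.ones(len(FEATURES))
    if feature_boosts:                           # user-controlled reweighting
        for name, boost in feature_boosts.items():
            weights[FEATURES.index(name)] = boost
    scores = items @ (user_vec * weights)
    ranked = np.argsort(-scores)

    n_serendipity = int(n * serendipity)
    picks = list(ranked[: n - n_serendipity])
    # deliberately sample a few items the model would normally rank low
    tail = ranked[len(ranked) // 2:]
    picks += list(rng.choice(tail, size=n_serendipity, replace=False))
    return picks

print(recommend(user_vector, item_vectors, feature_boosts={"acoustic": 2.0}))
```

The key design choice is that both the boosts and the serendipity fraction are exposed to the user rather than fixed by the model designer.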
If you restrict yourself to 2 bad choices, then you can only make bad choices. It doesn't help to label one of them "artificial" and imply the other choice isn't artificial.
It is, in fact, not just crude but actually quite artificial to measure likelihood to interact as a single number, and personalize the list of discussions solely or primarily based on that single number.
Since your chosen crude and artificial indication turned out to be harmful, why double-down on it? Why not seek something better? Off the top of my head, potential avenues of exploration:
• different kinds of interaction are weighted differently. Some could be weighted negatively (e.g. angry reacts)
• [More Like This] / [Fewer Like This] buttons that aren't hidden in the ⋮ menu
• instead of emoji reactions, reactions with explicit editorial meaning, e.g. [Agree] [Heartwearming] [Funny] [Adds to discussion] [Disagree] [Abusive] [Inaccurate] [Doesn't contribute] (this is actually pretty much what Ars Technica's comment system does, but it's an optional second step after up- or down-voting. What if one of these were the only way to up- or down-vote?)
• instead of trying to auto-detect party affiliation, use sentiment analysis to try to detect the tone and toxicity of the conversation. These could be used to adjust the weights on different kinds of interactions; maybe some people share divisive things privately but share pleasant things publicly. (This seems a little paternalistic, but no more so than "artificially" penalizing opposing party affiliation)
• certain kinds of shares could require or encourage editorializing reactions ([Funny] [Thoughtful] [Look at this idiot])
• Facebook conducted surveys that determined that Upworthy-style clickbait sucked, in spite of high engagement, right? Surveys like that could be a regular mechanism to determine weights on interaction types and content classifiers and sentiment analysis. This wouldn't be paternalistic, you wouldn't be deciding for people, they'd be deciding for themselves
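A minimal Python sketch of the first and third bullets above, assuming made-up reaction weights (nothing here reflects any real platform's values):

```python
# Rank by a weighted sum of reaction types instead of raw interaction count,
# with some reactions counting against a post. Weights are arbitrary
# illustrations of the idea, not anything a real platform publishes.

REACTION_WEIGHTS = {
    "adds_to_discussion": 2.0,
    "heartwarming": 1.5,
    "funny": 1.0,
    "agree": 0.5,
    "disagree": 0.5,     # disagreement alone isn't penalized
    "angry": -1.0,
    "inaccurate": -2.0,
    "abusive": -3.0,
}

def post_score(reaction_counts: dict) -> float:
    return sum(REACTION_WEIGHTS.get(name, 0.0) * count
               for name, count in reaction_counts.items())

# A post with lots of angry reactions ranks below a calmer, smaller one.
print(post_score({"angry": 200, "agree": 150}))                    # -125.0
print(post_score({"adds_to_discussion": 40, "heartwarming": 10}))  # 95.0
```

Survey results like the clickbait example in the last bullet could then be used to tune these weights, rather than leaving them to the designer's intuition.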
I feel like this is a false presentation of the PM choice. If I was the PM there, I would question the first assumption that the users want to see more of the stuff they interact with. That's an assumption, it's not founded in any user or social research (in the way you've presented it).
And even if it was supported by research, I would think about the long tail. What does this mean for my user engagement in the long run. This list might satisfy them now, but it necessarily leads to a narrowing down of the content pool in the long run. I would ask my marketing sciences unit or my data science unit, whatever I have, to try to forecast or simulate a model that tells us what would the dynamic of user engagement be with intervention A and intervention B.
I feel this is one of the biggest problems of program management today. Too much reliance on short-term A/B testing, which, in most cases, can only solve very tactical problems, not strategic problems with the platform. Some of the best products out there rely much less on user testing, and much more on user research and strategic thinking about primary drivers in people.
If you were to use this approach, you might see that the product you get by choosing to optimise for short-term engagement actually brings less user growth and less opportunity for diverse marketing - which, it is important to note, is one of the main purposes of reach-building marketing campaigns.
I would say the way this whole problem is phrased shows that the PM, or the company indeed, is only concerned with optimising the frequency of marketing campaigns, rather than the quality, reach and engagement of marketing campaigns.
Obviously, hindsight 20/20 and generals after battle and all that. I'm still pretty sure I would've thought more strategically than "how do I increase frequency of showing ads".
As a PM, I'd support it as an A/B test. Show some percentage of your users an increased level of posts from the opposite party, some others an increased level of posts from their own party, and leave the remaining 90% alone. After running that for a month or two, see which of those groups is doing better.
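For what it's worth, a tiny Python sketch of how that split might be assigned deterministically; the arm names, percentages, and experiment label are illustrative only:

```python
import hashlib

# A deterministic hash of the user id assigns ~5% of users to each treatment
# arm and leaves the remaining 90% as a control group (all numbers invented).

ARMS = [
    ("more_opposing_party", 0.05),
    ("more_own_party", 0.05),
    ("control", 0.90),
]

def assign_arm(user_id: str, experiment: str = "feed_mix_test") -> str:
    digest = hashlib.sha256(f"{experiment}:{user_id}".encode()).hexdigest()
    bucket = int(digest, 16) % 10_000 / 10_000   # roughly uniform in [0, 1)
    cumulative = 0.0
    for arm, share in ARMS:
        cumulative += share
        if bucket < cumulative:
            return arm
    return "control"

print(assign_arm("user_12345"))   # stable across sessions for the same user
```

Hashing rather than random assignment keeps each user in the same arm for the whole experiment, which matters if you want to measure effects over a month or two.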
They've clearly got something interesting and possibly important, but 'interaction strength' is not intrinsically good or bad. I would instead ask the researcher to pivot from a metric of "interaction strength" to something more closely aligned with the value the user derives from their use of your product. (Side note: Hopefully, use of your product adds value for your users. If your users are better off the less they use your platform, that's a serious problem.)
Do people interacting with posts from the opposite party come away more empathetic and enlightened? If they are predominantly shown posts from their own party, does an echo chamber develop where they become increasingly radicalized? Does frequent exposure to viewpoints they disagree with make people depressed? They'll eventually become aware outside of the discussion board of what the opposite party is doing, does early exposure to those posts make them more accepting, or does it make them angry and surprised? Perhaps people become fatigued after writing a couple angry diatribes (or the original poster becomes depressed after reading that angry diatribe) and people quit your platform.
Unfortunately, checking interaction strength through comment word counts is easy, while sentiment analysis is really hard. Whether doing in-person psych evals or broadly analyzing the users' activity feed for life successes or for depression, you'll have tons of noise, because very little of those effects will come from your discussion board. Fortunately, your brilliant data scientist is brilliant, and after your A/B test, has tons of data to work with.
They did as you say (you are a PM, after all!), and the next week they rolled out the "likelihood of engagement" model. An independent analysis by another team member, familiar with the old model, confirmed that it was still mostly driven by politics (there is nothing much going on in Elbonia, besides politics), but politics was neither the direct objective nor an explicit factor in the model.
The observed behavior is the same: using the new model, most people are still shown highly polarized posts, as indicated by subjective assessment of user research professionals.
Agreed, as a general rule I shy away from predicting things I wouldn't claim expertise in otherwise. This is why consulting with subject matter experts is important. Things as innocuous as traffic crashes and speeding tickets are a huge world unbeknownst to the casual analyst (the field of "Traffic Records")
I would take a step back and question the criteria we are using to make decisions. “Engagement” in this context is euphemistic. This startup is talking about applying engineering to influence human behavior in order to make people use their product more, presumably because their monetization strategy sells that attention or the data generated by it.
If I were the PM I’d suggest a change in business model to something that aligns the best interests of users with the best interests of the company.
I’d stop measuring “engagement” or algorithmically favoring posts that people interact with more. I’d have a conversation with my users about what they want to get out of the platform that lasts longer than the split second decision to click one thing and not another. And I’d prepare to spend massive resources on moderation to ensure that my users aren’t being manipulated by others now that my company has stopped manipulating them.
I think the issues of showing content from one side of a political divide or the other is much less important than showing material from trustworthy sources. The deeper issue, which is a very hard problem to solve, is dealing with the fundamental asymmetries that come up in political discourse. In the US, if you were to block misinformation and propaganda you’d disproportionately be blocking right wing material. How do you convince users to value truth and integrity even if their political leaders don’t, and how do you as a platform value them even if that means some audiences will reject you?
I don’t know how to answer those questions but they do start to imply that maybe “news + commenting as a place to spend lots of time” isn’t the best place to expend energy if you’re trying to make things better?
I would think engagement would be a core metric you would be measured against in this example. And if that’s the case, this certainly isn’t a side effect.
You can reshuffle the deck but the same cards are still inside.
Twitter managed to do it, but FB continues to allow political parties to spread misinformation via the algorithm, and FB profits from it.
So essentially VP of Integrity, your salary is paid for in part by the spread of misinformation. Until you at least ban political ads your integrity is non-existent.
Why I am saying this: it seems you sit backwards on your high horse, criticising those people who for all intents and purposes have very limited insight into the decision making.
My close friends and I are fed up with Facebook and how obviously it is trying to polarize everyone in this world.
No sympathy for you on my side, and I can assure you I speak on behalf of my friends too.
Genuine question here.
We have entered an era in which non-state actors like Facebook have power that was once the exclusive domain of governments [1]. Facebook understands this, and justifiably views itself as a quasi-government [2].
I would really like to understand Facebook’s theory of governance. If I want to understand my own government, I can read the Federalist papers. These documents articulate an understanding of history and a positive view of the appropriate role of government in society. I can use these documents to help myself evaluate a particular government action in light of the purpose of government and the risks inherent in concentrated power.
Has Facebook published something like this? I struggle to understand Facebook’s internal view of its role in society and its concept of what “doing the right thing” means. Without some clear statement of governing principles, people will naturally gravitate to the view that Facebook is a cynical and sometimes petty [3] profit maximizer.
Without some statement of purpose and principles, it is hard to level criticism in a way that Facebook will find helpful or actionable. We are left to speculate about Facebook's intentions, instead of arguing that a certain outcome is inconsistent with its stated purpose.
[1] https://www.cfr.org/blog/blurred-lines-between-state-and-non...
[2] https://www.vox.com/the-big-idea/2018/4/9/17214752/zuckerber...
[3] https://www.cnbc.com/2019/02/14/facebooks-security-team-trac...
https://www.vulture.com/2019/11/silicon-valley-recap-season-...
From the outside looking in, it seems as though you are paid to drink cool-aid and paint FB in a positive light. How does one get to be in your position? What are the qualifications for your job?
https://en.wikipedia.org/wiki/Onavo
Another commenter noticed this first but the comment is buried at the bottom of the thread. You have to enable showdead to see it. https://news.ycombinator.com/item?id=23319381
“Let me make some calls.”
I think that the fact that the founder/CEO of Onavo is now Facebook's VP of Integrity is entirely consistent with everything else we've read about Facebook over the years.
Amazing..
I hope there are more people at the top with the integrity to stand up for injustice like Tim Bray. I have all the respect in the world for someone who puts their neck on the line for what they believe in. Thank you Tim @tbray
https://www.nytimes.com/2020/05/04/business/amazon-tim-bray-...
This seems inherently a political evaluation. What are the criteria, and is this driven manually or automatically?
If we're talking metrics, the 64% stat cited in the article turns out not to be a good way to measure impact of recommendations on an extremist group. We internally think about things like prevalence of bad recommendations. More on prevalence here - https://about.fb.com/news/2019/05/measuring-prevalence/. I don't have a metric I can share on recommendations specifically but you can see the areas we've shared it for so far here: https://transparency.facebook.com/community-standards-enforc...
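For readers unfamiliar with the term, a prevalence-style estimate is roughly "of sampled content views, what share were of violating content". A hedged Python sketch of that general idea follows; the sampling and labeling steps are stand-ins, not Facebook's actual methodology:

```python
import random

# Sample content views, label the sampled items (e.g. by human review), and
# report the share of views that were of violating content. Illustrative only.

def estimate_prevalence(view_log, label_fn, sample_size=1000, seed=42):
    random.seed(seed)
    sample = random.sample(view_log, min(sample_size, len(view_log)))
    violating = sum(1 for view in sample if label_fn(view))
    return violating / len(sample)

# Toy usage with synthetic data: about 2% of views are of violating content.
views = [{"post_id": i, "violating": (i % 50 == 0)} for i in range(10_000)]
print(estimate_prevalence(views, lambda v: v["violating"]))  # ~0.02
```

Note that this measures exposure rather than recommendation behavior, which is part of why it answers a different question than the 64% figure in the article.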
$2M for independent research. haha, that's really generous of Facebook. You all probably made that in a day's worth of misleading political ads you've sold.
One question I have for you mr VP of INTEGRITY... I haven't used FB in over 8 years but I would like the full set of every data point that you have on me, my wife and my kids. Where can I get that? And if I can't, please explain to me why.
I guess you've been drinking so much moneysaurus rex kool aid that you seem to not understand that people just don't believe anything you all say anymore. Your boss can't even give straight answers in front of the government.
Maybe you all should change your tagline to: Facebook, we're working on it.
Business Insider article: https://www.businessinsider.com/facebook-removes-mention-of-...
Frontline Interview: https://www.pbs.org/wgbh/frontline/interview/guy-rosen/
Isn’t that the CEO’s job?
The problem is that your entire executive leadership believes it is a five hundred billion dollar platform. They have to, because the investors demand it.
This is the source of all your failings.
You may be serious about your job, but you work for people who have proved they have no moral compass. I'd quit if I were you and actually believed in what you're trying to accomplish.
Edit: just realized you are the founder of Onavo, spyware bought by Facebook. Were you also heading Project Atlas? Were you responsible for inviting teens to install spyware so you could collect all their data?
The mere fact that YOU were chosen as VP of Integrity says it all. When the head of integrity is a spyware peddler... well....
Yeah, I guess it's a cynical move I should've expected of Facebook.
Emphasis added.
What is that supposed to mean? Would you like a sticker for doing the right thing?
hahahhahahhahahhahahah
My email address is in my hn bio if you don't want to make that public :)
You guys are a bunch of clowns.
If you had any integrity, you would delegate this governance back to the captured provinces.
We've got this idea stuck in our heads that only the website itself is allowed to curate content. Only Facebook gets to decide which Facebook posts to show us.
What if, instead, you had a personal AI that read every Facebook post and then decided what to show you. Trained on your own preferences, under your control, with whatever settings you like.
Instead of being tuned to line the pockets of Facebook, the AI is an agent of your own choosing. Maybe you want it to actually _reduce_ engagement after an hour of mindless browsing.
And not just for Facebook, but every website. Twitter, Instagram, etc. Even websites like Reddit, which are "user moderated", are still ultimately run by Reddit's algorithm and could instead be curated by _your_ agent.
I don't know. Maybe that will just make the echo chambers worse. But can it possibly make them worse than they already are? Are we really saying that an agent built by us, for us, will be worse than an agent built by Facebook for Facebook?
And isn't that how the internet used to be? Back when the scale of the internet wasn't so vast, people just ... skimmed everything themselves and decided what to engage with. So what I'm really driving at is some way to scale that up to what the internet has since become. Some way to build a tiny AI version of yourself that goes out and crawls the internet in ways that you personally can't, and returns to you the things you would have wanted to engage with had it been possible for you to read all 1 trillion internet comments per minute.
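To make that concrete, here's a minimal sketch of what such an agent could look like, assuming the platform exposed raw posts to client-side code (it doesn't today); the class names, topics, and thresholds are all invented for illustration:

    from dataclasses import dataclass, field
    import time

    @dataclass
    class Post:
        author: str
        text: str
        topic: str

    @dataclass
    class PersonalAgent:
        liked_topics: set
        muted_authors: set
        session_start: float = field(default_factory=time.time)
        max_session_seconds: int = 3600  # after an hour, reduce engagement on purpose

        def score(self, post: Post) -> float:
            if post.author in self.muted_authors:
                return 0.0
            return 1.0 if post.topic in self.liked_topics else 0.2

        def curate(self, posts: list) -> list:
            # deliberately return nothing once the session has run too long
            if time.time() - self.session_start > self.max_session_seconds:
                return []
            return sorted((p for p in posts if self.score(p) > 0),
                          key=self.score, reverse=True)

    agent = PersonalAgent(liked_topics={"woodworking", "family"},
                          muted_authors={"spammy_page"})
    feed = [Post("alice", "New bookshelf build", "woodworking"),
            Post("spammy_page", "You won't believe this", "clickbait")]
    print([p.text for p in agent.curate(feed)])  # -> ['New bookshelf build']

The point isn't the specific rules, it's that the rules live on your side and answer only to you.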
The primary content that no user wants to see and that every user agent would filter out is ads. Since ads are the primary way sites stay in business, they are obligated to fight against user agents or other intermediary systems.
The ultimate problem is that Facebook doesn't want to show you good, enriching content from your friends and family. They want to show you ads. The good content is just a necessary evil to make you tolerate looking at ads. Every time you upload some adorable photo of your baby for your friends to ooh and aah over, you're giving Facebook free bait that they then use to trap your friends into looking at ads.
The only thing more insane than blaming users for having self-interest are the people who pretend that Facebook et al. are somehow owed the business model they have, painting ad-blockers as some kind of dangerous society-destabilizing technology instead of the commonsense response to shitty business practices it clearly is.
Well, it is someone else's website. What do you expect? Zuckerberg has his own interests in mind.
In 2020, it is still too difficult for everyone to set up their own website, so they settle for a page on someone else's.
If exchanging content with friends and family (not swaths of the public who visit Facebook - hello, advertisers) is the ultimate goal, then there are more efficient ways to do that without using Zuckerberg's website.
The challenge is to make those easier to set up.
For example, if each group of friends and family were on its own small overlay network that they set up themselves, connecting to each other peer-to-peer, it would be much more difficult for advertisers to reach them. Every group of friends and family on a different network, instead of every group all using the same third-party, public website on the same network, the internet.
Naysayers will point to the difficulty of setting up such networks. No one outside of salaried programmers paid to do it wants to even attempt to write "user agents" today because the "standard", a ridiculously large set of "features", most of which benefit advertisers rather than users, is far too complex. What happens when we simplify the "standard"? As an analogy, look at how much easier it is to set up WireGuard, software written more or less by one person, than it is to set up OpenVPN.
"no user"? Nope. People buy magazines, that are 90% ads. Subscribe to newsletters. Hunt for coupons. Watch home shopping channels. Etc, etc.
There's large part of population that wants to see ads. Scammy and bad ads? No. Good and relevant ads? A LOT of people do want them. Even tech-folks, who claim that ads are worst thing for humanity. Don't you want to learn about sale for new tech gadgets? Discounts for AWS? Good rent deals?
And yes, I know: good luck getting THAT to happen in the US given how badly funded everything else in my list is. If you’re in another country that actually funds public goods maybe this is a thing you could talk to some of your fellow techies about and make a proposal, especially if your country is getting increasingly tired of Facebook?
Alternatively, ground-up local funding of federated social networks might be workable; I run a Mastodon server for myself and a small group of my friends and acquaintances, with the costs pretty much evenly split between myself and the users who have money to spare. It is not without its flaws and problems but it is a thing I can generally keep going with a small investment of my spare time every few months.
Not that you're wrong, but: that's the fucking point!
Advertising delenda est.
Flaw? It seems that the point would be to force FB to transact with currency rather than a bait-and-switch tactic. The site would also be more usable if they were forced to change business model.
While everyone is sceptical about whether such a service could reach critical mass to make financial sense, and a brand-new FB replacement may not be able to do it, FB itself could certainly offer that as an option without hurting its revenue substantially.
I was sceptical about the value prop for YouTube Premium, and I am constantly surprised by how many people pay for it. If Google can afford to lose ad money with YT Premium, I am sure FB could build a financial model around a freemium offering if they wanted to.
YouTube has been getting a lot of flak for this recently.
I am already getting such a service in the form of Android's newsfeed feature on Pixel. It's Google, but it's pretty good.
You could make the same argument for Google or other online web sites relying on ads as the primary source of revenue.
Not all users hate ads in principle, just in practice. In theory, you'd have users select ads for relevance and for not being annoying. But obviously, the site wants to show ads based on how much the advertisers are paying, and "not being annoying" only factors in if it pushes people off the site entirely.
The problem is actually how to fund the timeline publication services. But systems like Medium etc seem to work OK.
I am now spending several hundred dollars a year on content subscriptions. Plus subscriptions for Gmail, Zoom and a few other things where I have outgrown the free service. A freemium model for the timeline publication services would probably work.
These companies do provide some value by building the infrastructure and so on. But the graph itself is kept proprietary, most likely because it is not copyrightable.
Advertising in mass media is regulated. You are very much allowed to publish claims that the government would characterize as outright lies, you just can't do it to sell a product.
If you're unpredictable you're a problem. Thus, it makes sense to slowly push you to a pole so you conform to a group's preferences and are easier to predict.
A hole in my own argument is that today's networks are incentivized to increase engagement, whereas a neutral agent in most ways is not.
So perhaps the problem isn't just the need for agents but for a proper business model where the reward isn't eyeball time as it is today.
But you are predictable; even if you think you are unpredictable, you are just a bit more adventurous. An algorithm can capture that as well. It will be even easier for an algorithm that works on your behalf.
I've been on this for years. Free is a lie, and the idea that everything has to be "free as in beer" is a huge reason so many things suck.
There’s a lot of history around that split, and the motivation for HTML/CSS was about separating presentation from the content in many ways. For another example, once upon a time a lot of chat services ran over XMPP, and you could chat with a Facebook friend from your Google Hangouts account. Of course, both Google and Facebook stopped supporting it pretty quickly to focus on the “experience” of their own chat software.
The thing is that there is very little money to be made selling content, and a lot to be made controlling the presentation. So everyone focuses on the latter, and that’s why we live in a software world of walled gardens that work very hard to not let you see your own data.
There is some EU legislation proposal that may make things a bit better (social network interop), but given the outsized capital and power of internet companies i’m not holding my breath.
This was never true. There was an XMPP-speaking endpoint into Facebook's proprietary chat system, but it wasn't a S2S XMPP implementation and never federated with anything. It was useful for using FBChat in Adium or Pidgin, but not for talking to GChat XMPP users.
For example, Facebook could create some kind of plugin API that allows you to interpose your filtering/ranking code between their content and their presentation.
For example, maybe they give you a list of N possible main page feed items each with its own ID. Your code then returns an ordered list of M <= N IDs of the things that should go into your feed. That would allow you to filter out the ones you don't want and have the most interesting stuff displayed first. Facebook could display the M items you've chosen along with ads interspersed.
Something like that could run in the browser or Facebook could even allow you to host your algorithm in a sandbox on their servers if that helps performance. (Which means you trust them to actually run it, but you have to trust them on some basic things if you're going to use their service at all.)
In other words, changing the acoustics of the echo chamber doesn't mean you need to be the one implementing a big chunk of the system. You just need a way to exert control over the part you want to customize.
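As a rough sketch of that interposition point, assume Facebook handed client code a list of candidate items with IDs and expected back an ordered subset to display. The FeedItem fields and the ranking rule below are hypothetical, purely for illustration:

    from dataclasses import dataclass

    @dataclass
    class FeedItem:
        item_id: str
        author: str
        kind: str        # e.g. "photo", "link", "status"
        is_public: bool

    def rank_feed(candidates: list[FeedItem]) -> list[str]:
        """Return an ordered list of M <= N item IDs to display."""
        # drop viral public link posts; keep friends' content
        keep = [c for c in candidates if not (c.is_public and c.kind == "link")]
        # friends' photos first, then everything else, alphabetically by author
        keep.sort(key=lambda c: (c.kind != "photo", c.author))
        return [c.item_id for c in keep]

    candidates = [
        FeedItem("1", "aunt_mary", "photo", False),
        FeedItem("2", "some_page", "link", True),
        FeedItem("3", "old_friend", "status", False),
    ]
    print(rank_feed(candidates))  # -> ['1', '3']; the platform intersperses ads around these

The contract is deliberately narrow: the user's code sees only IDs and metadata and decides ordering, while Facebook keeps rendering and ad placement.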
This.
Also. What incentive does a walled garden even have to allow something like this? Put a different way, what incentive does a walled garden have to not just block this "user agent"? Because the UA would effectively be replacing the walled garden's own "algo curated new feed" - except if the user builds their own AI bot -- the walled garden can't make money the way they currently do.
I think the idea is very interesting. I personally believe digital UA's will have a place in the future. But in this scenario I couldn't see it working.
I was in agreement with you until I read that. People don’t need to have content dictated to them like mindless drones whether it is from social media, bloggers, AI, or whatever. Many people prefer that, though, out of laziness. It’s like the laugh track on sitcoms because people were too stupid or tuned out to catch the poorly written jokes even with pausing and other unnecessarily directed focus. It’s all because you are still thinking in terms of content and broadcast. Anybody can create content. Off loading that to AI is just more of the same but worse.
Instead imagine an online social application experience that is fully decentralized without a server in the middle, like a telephone conversation. Everybody is a content provider amongst their personal contacts. Provided complete decentralization and end-to-end encryption imagine how much more immersive your online experience can be without the most obvious concerns of security and privacy with the web as it is now. You could share access to the hardware, file system, copy/paste text/files, stream media, and of course original content.
> And isn't that how the internet used to be?
The web is not the internet. When you are so laser focused on web content I can see why they are indistinguishable.
I'm active on the somewhat (not fully) decentralized social medium Fediverse (more widely known as Mastodon, but it's more than that) and I think a lack of curation is a problem: Posts by people who post a lot while I'm active are very likely to be seen, those by infrequent posters active while I'm not very likely to go unnoticed.
How would your proposed system (that seems a bit utopic and vague from that comment, to be honest) deal with that?
If the AI is entirely under the user's control, why not? It's like having a buddy that's doing for me what I'd do for myself, if I had the time and energy (and eyebleach).
In response to it just creating more echo chambers:
- It can't be worse than now.
- At minimum, it's an echo chamber of your own creation instead of one manipulated by FB. There's value in that, ethically.
- Giving people choice at scale means it will at least improve the situation for some people.
I am pretty sure things can be worse than right now; pretending we are in some kind of hell state at the bottom of a well where it can't possibly get worse seems unrealistic to me.
Spoilers ahead:
In Stephenson's world people can hire “editors” to curate what they see, and those editors effectively determine reality for people at a mass scale. This is just one of the many fascinating ideas Stephenson explores and I highly recommend reading the book.
This interview covers some of the details if you’re not willing to dive into an 800+ page novel:
https://www.pcmag.com/news/neal-stephenson-explains-his-visi...
Part of the concept was that the agents would actually roam onto servers on the internet on your behalf raising complicated questions around how to sandbox the agent code (came in useful for VPSs and AWS-style lambdas in the end).
Or comments on different Internet forums that are blatantly spammy/SEO gaming etc.
Or block authors in search results or Twitter feed or any comment that you don't like. Basically the Zapier of content filtering.
This will be available to the user as a subscription service.
Some of these things are unfortunately not possible on mobile platforms (Android, iOS) because the OSes do not allow such access, but we hope that Android and iOS will open up in the future to allow external curation systems, apart from the app platform itself, as it's in the interest of the user.
I would love to have an agent that could apply those sorts of analyses to my data sources. In my case, I wouldn't want to filter out bad news, but unnecessarily nasty spins on it. I'd find that super valuable.
[1] https://github.com/kstrauser/judgish
As soon as Facebook realizes you're a risk, you'll get a C&D ordering you to stop accessing their servers. These typically have the force of law under the CFAA.
You won't access their servers, but just read the page that the user already downloaded? You'll still get nailed under the Copyright Act.
"User agents" in the sense used by the OP are as old as the internet itself. There's an active, serious, and quiet effort to abuse outdated legislation to ensure that they never become a problem.
Given that users are generally not interested in exerting a high level of sophisticated control over the software they use, how is the concept of a user-agent AI/filter any different at a fundamental level? It probably won't be created and maintained as a public benefit in any meaningful way, and users will not be programming and tuning the AI as needed to deliver the needed accuracy. I don't think AI has yet reached a level of sophistication where content as broad as what's found on the internet (or even just Facebook) can be curated to engage the human intellect beyond measuring addictive engagement, at least not without significant user intervention.
Hopefully I'm wrong, as I do wish I could engage with something like Facebook without having to deal with ads or with content curated to get my blood boiling. Sometimes I do wonder how much it is Facebook vs. human tendency under the guise of an online persona, as both are clearly involved here.
It always appears to me that the companies are making it difficult and users have no voice.
It is important to realize that Facebook is not the first, second, or even tenth of its ilk. Facebook combines a bunch of ideas from previous systems, in particular MySpace and USENET. It is more or less the third generation of web social media. There is no reason to believe there can't be a fourth.
My interest in these schemes is to provide a discussion space that is end-to-end encrypted so that the cloud service collecting the comments does not have access to the plaintext. This allows for 'Enterprise' type discussion of things such as RFPs and patent applications. I am not looking to provide a consumer service (at this stage).
The system you describe could be implemented in a reasonably straightforward fashion. Everyone posts to the timeline service of their choice and choose between a collection of user agents discovering interesting content for them to read. These aggregation services could be a paid service or advertising supported. Timeline publishing services might need a different funding model of course but bit shoveling isn't very expensive these days. Perhaps it could be bundled with video conferencing capabilities, password management or any of the systems people already pay for.
As for when the Internet/Web was not so vast: one of my claims to fame is being the last person to finish surfing the Web, which I did in October 1992 shortly after meeting Tim Berners-Lee. It took me an entire four days of night shifts to surf every page of every site in the CERN index.
https://inrupt.com
"Time to reset the balance of power on the web and reignite its true potential.
When Sir Tim Berners-Lee invented the web, it was intended for everyone. The excitement and creativity of its early days were driven from the notion that we can all participate — and the impact was world-changing.
But the web has shifted from its original promise — and it’s time to make a change.
We can still unlock the true promise of the web by decentralizing the power that’s currently centralized in the hands of a few. How? By using the power of Solid.
Solid is the technically potent, open-source platform built to decentralize the web. Inrupt is the company that’s helping to fuel Solid’s success."
What? How will that help me or my grandma?
I think the real problem is Facebook's need to be such a large company. They brought this on themselves trying to take over the world. Maybe they need a Bell-style breakup
This is far from a perfect analogy, but compare it to the problem of email spam. People first tried to fight it with client-side Bayes keyword filters. It turns out it wasn't nearly as simple as that, and to solve a problem that complicated, you basically need people working on it full time to keep pace.
Ranking and filtering a Facebook feed would have different challenges, of course. It's not all about adversaries (though there are some); it's also about modeling what you find interesting or important. But that's pretty complicated too. Your one friend shared a woodworking project and your other friend shared travel photos. Which one(s) of those are you interested in? And when someone posts political stuff, is that something you find interesting, or is it something you prefer to keep separate from Facebook? There are a lot of different types of things people post, so the scope of figuring out what's important is pretty big.
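For what it's worth, the "client-side Bayes keyword filter" baseline that the spam analogy starts from is easy to sketch; this is not Facebook's ranking, just an illustration of the simple approach, with invented labels and posts:

    import math
    from collections import Counter

    def train(labeled_posts):
        counts = {"interesting": Counter(), "boring": Counter()}
        for text, label in labeled_posts:
            counts[label].update(text.lower().split())
        return counts

    def score(counts, text):
        # crude add-one-smoothed log-likelihood ratio: positive means "probably interesting"
        total = {label: sum(c.values()) for label, c in counts.items()}
        llr = 0.0
        for word in text.lower().split():
            p_int = (counts["interesting"][word] + 1) / (total["interesting"] + 1)
            p_bor = (counts["boring"][word] + 1) / (total["boring"] + 1)
            llr += math.log(p_int / p_bor)
        return llr

    model = train([
        ("finished my walnut bookshelf project", "interesting"),
        ("ten photos from our hiking trip", "interesting"),
        ("you won't believe this one weird trick", "boring"),
    ])
    print(score(model, "new bookshelf project"))  # positive -> probably show it
    print(score(model, "one weird trick"))        # negative -> probably hide it

The analogy's point stands: this kind of filter is easy to write and also nowhere near enough on its own, which is why both spam filtering and feed ranking ended up needing full-time attention.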
"Holochain is an open source framework for building fully distributed, peer-to-peer applications.
Holochain is BitTorrent + Git + Cryptographic Signatures + Peer Validation + Gossip.
Holochain apps are versatile, resilient, scalable, and thousands of times more efficient than blockchain (no token or mining required). The purpose of Holochain is to enable humans to interact with each other by mutual-consent to a shared set of rules, without relying on any authority to dictate or unilaterally change those rules. Peer-to-peer interaction means you own and control your data, with no intermediary (e.g., Google, Facebook, Uber) collecting, selling, or losing it.
Data ownership also enables new frontiers of user agency, letting you do more with your data (imagine truly personal A.I., whose purpose is to serve you, rather than the corporation that created it). With the user at the center, composable and customizable applications become possible."
http://developer.holochain.org and http://holo.host. Apps built using the Holochain framework/pattern: http://junto.love
Also great are http://scuttlebutt.co.nz and https://docs.datproject.org/. Fritter is Twitter built on the DAT protocol (Paul Frazee) https://twitter.com/taravancil/status/949310662760632320
Currently, software clients of Mastodon or Twitter hold that mile. Mastodon gives all content unfiltered, which could be too much at times, while Twitter does some oft-annoying opaque black magic in its timeline algorithms.
A better solution would be to have a protocol for capability that filters content with logic under your control. A universal middleware standard that is GUI-agnostic, can fit different content types.
By adopting this, open/federated social could start catching up with for-profit social on content-filtering features (in a no-dark-patterns way, benefiting user experience), hopefully stealing users.
Ideally it could be used by the likes of Twitter and Facebook—of course, given the size of for-profit social, such an integration would take some unimaginably big player to motivate them to adopt (the state of their APIs is telling), but if it’s there there’s a chance.
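A sketch of what such a GUI-agnostic middleware contract might look like, assuming items arrive as plain dictionaries and filters are just user-supplied predicates (every name here is hypothetical, not part of any existing standard):

    from typing import Callable, Iterable

    ContentItem = dict                         # e.g. {"type": "toot", "author": ..., "text": ...}
    Filter = Callable[[ContentItem], bool]     # True = keep the item

    def apply_filters(items: Iterable[ContentItem], filters: list[Filter]) -> list[ContentItem]:
        # the client (any client) renders only what survives the user's own rules
        return [item for item in items if all(f(item) for f in filters)]

    # User-controlled rules, independent of which client or server produced the timeline.
    no_boosts = lambda item: item.get("type") != "boost"
    mute_word = lambda item: "crypto" not in item.get("text", "").lower()

    timeline = [{"type": "toot", "text": "garden update"},
                {"type": "toot", "text": "buy crypto now"},
                {"type": "boost", "text": "reblogged thing"}]
    print(apply_filters(timeline, [no_boosts, mute_word]))  # -> only the garden update

The value of standardizing something this small is that the same user-owned filter chain could sit in front of Mastodon, Twitter, or anything else that emits items of a known shape.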
I do see a content filtering AI as very difficult to achieve, and I don't think it will be possible for quite some time. There are so many small problems, even getting AI to recognize targeted content is difficult, given that websites can have infinitely different layouts. And what about video or audio? The most practical way to achieve a content AI would be to persuade websites to voluntarily add standardized tags so that the only problem becomes predicting and filtering. Although I could see some issues with that like people trying to game the system.
Deleted Comment
That would be great. Having an artificial intelligence as a user agent would be perfect. That'd be the ideal browser. So many science fiction worlds have the concept of an intelligent navigator who acts on behalf of its operator in the virtual world, greatly reducing its complexity.
Today's artificial intelligences cannot be trusted to act in our best interests. They belong to companies and run on their computers. Even if the software's open source, the data needed to make it useful remains proprietary.
It’s not clear to me whether you may see posts collected by other users or only ones from your own feed and it seems highly experimental.
[1] https://facebook.tracking.exposed/
There is a feedback problem, though, which is that your preferences are modified by what you see. So the AI problem devolves to showing you the kind of content that makes you want to see more of it, i.e. maximize engagement. I think a lot of people are addicted to controversy, "rage porn," anger-inducing content, and these agents are not going to help with this issue.
If we could train AI agents to analyze the preferences of people, I think the best use for them wouldn't be to curate your own content, but to use them to see the world from other people's perspective. If you know in what "opinion cluster" someone lies and can predict their emotional reaction to some content, you may be able to identify the articles from cluster A that people from cluster B react the least negatively to, and vice versa. And this could be leveraged to break echo chambers, I think: imagine that article X is rated +10 by cluster A and -10 by cluster B, and article Y is rated +10 by cluster A but only -2 by cluster B. It might be a good idea to promote Y over X, because unlike X, Y represents the views of cluster A in a way that cluster B can understand, whereas X is probably some inflammatory rag.
The key is that you can't simply choose content according to a user's current preferences, they also have to be shown adversarial content so that they have all the information they need about what others think. This is how they can retain their agency. Show them stuff they disagree with, but that they can respect.
I expect that a system like the one I'm describing would naturally penalize content that paint people with opposing points of view as evil or idiots, because such content is the most likely to be very highly rated by the "smart" side and profoundly hated by the "stupid" side. Again, note that I'm not saying content everyone likes should be promoted, it's more like, we should promote the +10/-2 kind of polarization (well thought out opinion pieces that focus on ideas which might be unpopular or uncomfortable) over the +10/-10 kind of polarization (people who disagree with me are evil cretins).
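A toy version of that promotion rule, assuming the per-cluster ratings already exist; the ratings and the weight are invented for illustration:

    def bridge_score(ratings: dict[str, float]) -> float:
        """Higher is better: strong appeal somewhere, only mild rejection everywhere."""
        best = max(ratings.values())    # how much its own cluster likes it
        worst = min(ratings.values())   # how strongly the other cluster rejects it
        return best + 2.0 * worst       # weight rejection more heavily (arbitrary weight)

    x = {"cluster_a": 10, "cluster_b": -10}   # inflammatory rag
    y = {"cluster_a": 10, "cluster_b": -2}    # well-argued but unpopular piece
    print(bridge_score(x), bridge_score(y))   # -10.0 vs 6.0 -> promote Y over X

Any rule of this shape rewards the +10/-2 kind of polarization over the +10/-10 kind, which is the whole idea.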
https://thinkerapp.com/
So you can read more of what you already agree with? That's called living in a bubble. The mind cannot grow in a bubble.
https://github.com/huginn/huginn
We need access to all the data so we can decide which algorithms to apply. There’s really no escaping that
then I choose what I read in my reader
It obeys what you say. No cats? No problem. No babies? No problem.
And yes, it also does filter out ads.
FBP
I doubt FB would let you do that. It's "their" content.
There is no need for holier-than-thou censorship short of legal breaches. Good to see FB take this change of direction.
Sometimes, just shutting up about an issue and not discussing it is the best thing for a group to do. Not more advocacy or argument. Time heals many things. No app is going to help you take that approach -- and that's not what technology is going to help solve (or is incentivized to solve). Just like telling a TV station that's on 24 hours to not cover a small house fire when there's no other news.
People are not good at disengaging from something when that's the right thing to calm the situation. And Facebook somehow tapped into that human behavior and (inadvertently or purposefully) fueled so many things that have caused our country (and others) to get derailed from actual progress.
There is no vaccine yet for this.
And I'm not trying to pile onto the dump-on-Facebook train, since others would have come along to do it instead. But they sure made a science and a business of it.
I also think there’s an art to deescalation and discussing ideas or persuading someone you disagree with to see an alternative view (and then giving them space to change their mind).
Productive discussion isn’t possible with everyone or even one individual depending on where they are in their life, but I’ve generally found it works better than expected when you can remove your own identity and feelings from it.
It’s rarely in the spotlight though because it doesn’t get retweeted or shared as much as combative arguing that’s more a performance from each side (with likes and cheering on the sidelines).
Sometimes I do hit the Reply button though. I still have room for self-improvement. :-)
This is perhaps a form of "folk activism" [1]:
> In early human tribes, there were few enough people in each social structure such that anyone could change policy. If you didn’t like how the buffalo meat got divvied up, you could propose an alternative, build a coalition around it, and actually make it happen. Success required the agreement of tens of allies — yet those same instincts now drive our actions when success requires the agreement of tens of millions. When we read in the evening paper that we’re footing the bill for another bailout, we react by complaining to our friends, suggesting alternatives, and trying to build coalitions for reform. This primal behavior is as good a guide for how to effectively reform modern political systems as our instinctive taste for sugar and fat is for how to eat nutritiously.
Facebook is a collection of your friends or your "tribe", so repeated arguments with your tribe members is what our unconscious brain pushes us towards. That coupled with the dopamine hit of validation via likes (which is common to other online discussion platforms).
[1] https://www.cato-unbound.org/2009/04/06/patri-friedman/beyon... I don't agree with a lot of what is said there; only linking it for the definition of folk activism.
They made a choice to throw gasoline on the flames of these aspects of human behavior. Few people seem to realize that Facebook could have been a force for good, if they had made different choices or had more integrity when it comes to the design and vision of their platform.
The way that things happened is not the only possible way they could have happened, and resigning ourselves to the current state as "inevitable", to me, reeks of an incredible lack of imagination.
A force for good. I do not want to sound like this, but what, in your vision, would that look like? This is a real question.
I think everyone has a natural human need to feel that they have agency in their community. The need to feel that they participate in the culture that surrounds them and that they can have some effect on the groups they are members of. The alternative is being a powerless pawn subject to the whims of the herd.
In the US, I think most people lost this feeling with the rise of suburbia, broadcast television, and consumer culture. There are almost no public spheres in the US, no real commons where people come together and participate. The only groups many people are "part" of are really just shows and products that they consume.
Social media tapped into that void. It gave them a place to not just hear but to speak. Or, at least, it gave them the illusion of it. But, really, since everyone wants to feel they have more agency, everyone is trying to change everyone else but no one wants to be changed. And all of this is mostly decoupled from any real mechanism for actual societal change, so it's become just angry shouting into the void.
People have been doing this forever, and even on the Web much, much longer than Facebook has existed.
Now that said, they know what they have on their hands and how it makes them the money. They aren’t going to fix it. It is a big feature of their product.
I think that’s literally true. They told their algorithm “maximise the time people spend on Facebook” and it discovered for itself that sowing strife and discord did that.
Facebook’s crime is that when this became obvious they doubled down on it, because ads.
I don't agree with that. I very strongly think that Facebook did invent a lot of this.
> They just built the biggest mainstream distribution channel to do so
Scale does matter though. There is a lot in life that is legal or moral at small scale but illegal or immoral at large scale. Doing things at scale does change the nature of what you are doing. There's no 'just' to be had there.
> Nothing they ever did in terms of facilitating pointless arguments has been all that original either.
I don't agree with that either. They have even published scientific papers, peer-reviewed, to explain their new and novel methods of creating emotionally manipulative content and algorithms.
> People have been doing this forever, and even on the Web much, much longer than Facebook has existed.
I also don't agree with this. Facebook has spent 10+ years inventing new ways to rile people up. This stuff is new. Yes I know newspapers publish things that are twisted up etc, but that's different, clearly. The readers of the paper are not shouting at each other as they read it.
I think it's super dangerous to take this new kind of mass-surveillance and mass-scale manipulation and say, welp, nothing new here, who cares? I think that's extremely dangerous. It opens populations to apathy and lets corporations do illegal and immoral things to gain unfair and illegal power.
Facebook should not be legally allowed to do all the things they are doing. It's invasive, immoral, and novel, the way they deceive and manipulate society at large.
> some hidden human need/instinct to argue with people who they believe are incorrect
I've said it before, I'll probably say it again: this place is chock full of people just itching to tell you you're wrong and why. Don't get me wrong: obviously there's also a hell of a lot of great discussion and insightful technical knowhow being shared by real experts — but in my experience I also do have to wade through quite a lot of what feels like knee-jerk pedantry and point-scoring.
Extremely true, also relevant for work disagreements between people who have existing positive relationships. A surprising number of disagreements disappear if left on their own for a time.
I find that many people with engineering backgrounds (myself included) struggle to let conflicts sit unresolved. I suspect that instincts learned debugging code get ported over to interpersonal issues, as code bugs almost never disappear if simply left to rest.
I first purged everyone who posted that stuff from my feed, and then eventually quit Facebook altogether.
Hidden human need/instinct to argue, period. These arguments aren't intellectual debates, it's people getting pissed off at something, and venting their rage towards the other side.
It's odd how addictive rage can be. But that's not a new phenomenon. Tabloids have been exploiting this for decades before Facebook.
Most of my facebook feed is just memes and selfies.
(I'm venezuelan)
When Facebook was new/trending up years ago there were some political discussions, but people quickly figured out it was worthless. How come USAians haven't?
Maybe FB does it better, but it is the same in every online "forum" where you get notifications about comments.
Facebook influences what you see to a far greater extent than a traditional forum did.
Facebook managed to get this just right: lightweight, sexy (in the sense of attractive), easy to believe, easy to understand, easy to spread. The word "true" is completely absent from the above statement. That generates clicks. That keeps users logged in more. That increases "engagement". That increases ad revenue. Game over.
Masterminds and brilliant communicators could never before get so many eyeballs and ears tuned in at such a low cost.
I've mentioned before that FB = cancer. It gives 1 (the ability to communicate) and it takes 100.
We didn't evolve as a species to process this much information, or as Yuval Noah Harari calls it in Sapiens, gossip.
Then the terrorists win.
That used to be the conventional wisdom on trolls, but there are now so many of them. Worse, about half are bots.[1] (Both NPR and Fox News have that story, so it's probably correct.)
[1] https://www.huffpost.com/entry/carnegie-mellon-covid-19-twit...
It's not just that they tapped into it, it's the entire mission statement in a sense. 'to connect the world' if you want to treat it like a sort of network science basically means to lower the distance between individuals so much that you've reduced the whole world to a small world network. There's no inhibition in this system, it's like an organism on every stimulant you can imagine.
Everything spreads too fast and there's no authority to shut anything down that breaks, so the result is pretty much unmitigated chaos.
The vaccine is the thing people complain about all the time, the much maligned 'filter bubbles', which is really just to say splitting these networks into groups that can actually work productively together and keeping them away from others that make them want to bash their heads in.
Face to face with people I know, or at least recognize as human (not a bot) and educated (or at least not a cartoon hick personality), arguments can be great, because you can see when to pull back and stop something from escalating. We are all human after all.
In internet-powered discussion, where the number of people observing can be huge, and every username can feel inhuman or maybe even just a troll trying to create a stupid argument, that argument gets painful. But the dopamine hit is still there...
What would that look like?
Given our current (social) media ecosystem, converting outrage into profit (per Chomsky, McLuhan, Postman, and many, many others), what does a non-outrage maximizing strategy look like?
I currently favor a slower, calmer discourse. A la both Kahneman's thinking fast vs. slow, and McLuhan's hot vs. cold metaphors.
That means breaking or slowing the feedback loops, removing some of the urgency and heat of convos.
Some possible implementation details:
- emphasis on manual moderation, like metafilter, and dang here on HN
- waiting periods for replies. Or continue allowing submissions but delay their publication. Or treat all posts as drafts with a hold period. Or HN-style throttling. Or...?
- only friends can reply publicly.
- hide "likes"
- do something about bots. allow aliases, but all accounts need verified real names or ownership.
Sorry, these are just some of the misc proposals I remember. I should probably have been cataloguing them.
That worked great for a couple of weeks, but now I log on Twitter and half of my feed is tweets of people I don't know or follow, with the worst, most infuriatingly stupid hot takes. No wonder they have literally hundreds of thousands of likes. The platform is built around this "content".
Funny, years ago, around the Aurora shooting in Colorado, it was Facebook that made me recognize this behaviour in myself.
It's why I left the platform.
Also, obligatory XKCD: https://xkcd.com/386/
Could this be part of the solution? If a discussion is getting particularly heated, put responses on a time delay. Maybe even put the account on a general delay for engaging with heated subjects, so the outrage doesn't crop up elsewhere.
Of course this would decrease engagement. It might even push users to more permissive platforms.
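One toy way to sketch that cooling-off mechanism; the thresholds and the "replies per hour" heat proxy are invented, and a real system would presumably use better signals than raw reply rate:

    import time

    def reply_delay_seconds(replies_last_hour: int) -> int:
        if replies_last_hour < 20:
            return 0            # normal thread: publish immediately
        if replies_last_hour < 100:
            return 10 * 60      # busy thread: 10-minute hold
        return 60 * 60          # pile-on in progress: hold replies for an hour

    def submit_reply(thread_state: dict, text: str) -> dict:
        delay = reply_delay_seconds(thread_state["replies_last_hour"])
        return {"text": text, "publish_at": time.time() + delay}

    print(submit_reply({"replies_last_hour": 150}, "well actually..."))

Even a crude delay like this changes the dynamic: by the time the reply publishes, the urge to escalate has often passed.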
If you realize it’s a dumpster fire then delete your account and move on with life. If that line of thinking is a challenge in absolutely any way the problem is addiction.
https://en.m.wikipedia.org/wiki/Addiction
Kill it with fire! The only advice I have about the company and its products.
You can even see in the memes. It's the right that loves "trolling libs," and the left that's taking the bait. I think it's telling that the stereotype hardly ever seems to go the other way; you almost never see people talk about liberals "trolling reps."
You really don't need to engage with everything in order to be a good activist. In fact, I believe taking time and emotional energy to do so is actually being a bad activist. You're just wasting effort, nothing you say or do will change anyone's minds because mostly, the whole reason they're saying whatever it is is specifically to make you upset. To trap you into unwinnable arguments just to laugh at how heated you get.
Really we all need to be better at just walking away from crazy, whatever side of whatever spectrum we find it. By regularly surrounding yourself with such conflicts and by regularly basting yourself in such a soup of intense negativity, you are quite literally doing nothing more than causing physical harm to your body and mind via the morass of cortisol, etc. you are unleashing. You are accomplishing nothing.
I agree that Facebook makes this painfully easy, although Twitter and Reddit are right there as well.
"There is no vaccine yet for this."
There may not be any vaccine, but there may be a cure: changing the language used to communicate within a setting/platform such as Facebook, possibly by using a subset of the language previously used or by adopting a more formal construct.
But Facebook is a virtual neighborhood, with greatly increased bandwidth and range. It is difficult or impossible to achieve this in such a setting.
(Personally I get too wound up in internet arguments and it's just not a healthy space for my head to be in)
There are plenty of vaccines for this, but not in the sense that you can apply it to people by force, like you can apply a vaccine to babies. Meditation, yoga, religions, sports - there are many ways to calm the mind.
There is a fair amount of anecdotal evidence suggesting that psychedelics can have a significant impact when used correctly.
> The high number of extremist groups was concerning, the presentation says. Worse was Facebook’s realization that its algorithms were responsible for their growth. The 2016 presentation states that “64% of all extremist group joins are due to our recommendation tools” and that most of the activity came from the platform’s “Groups You Should Join” and “Discover” algorithms: “Our recommendation systems grow the problem.”
> Facebook's mission is to give people the power to build community and bring the world closer together. People use Facebook to stay connected with friends and family, to discover what's going on in the world, and to share and express what matters to them.
Encouraging group communication is the primary goal, regardless of the consequences.
This is one example, and quite possibly a poor one since it's partisan, but Reddit allows The_Donald subreddit to remain open while delisting it from search, the front page, and Reddit's recommendation systems.
> “64% of all extremist group joins are due to our recommendation tools”
How does Facebook currently define extremist group and how does that contrast with the definition of extremist group used in the presentation?
For example, when I browse videos on YouTube I will only get Democratic content (even though I am from Poland). It seems that as soon as you click on a couple of entries you get classified, and from then on you will only be shown videos that are agreeable to you. That means lots of Stephen Colbert and no Fox News.
My friend is deeply Republican, and she will not see any Democratic content when she gets suggestions.
The problem runs so deep that it is difficult to find new things even if I want to. I maintain another browser where I am logged off, to get a more varied selection and not just the couple of topics I have been interested in recently.
My point of view on this: it is a disaster of gigantic proportions. People need to be exposed to conflicting views to be able to make their own decisions.
Short version: it's because this place is less divisive that it feels more divisive. HN is probably the least divisive community of its size and scope on the internet (if there are others, I'd like to know which they are), and precisely because of this, many people feel that it's among the most divisive. The solution to the paradox is that HN is the rare case of a large(ish) community that keeps itself in one piece instead of breaking into shards or silos. If that's true, then although we haven't yet realized it, the HN community is on the leading edge of the opportunity to learn to be different with one another, at least on the internet.
I believe there is an anchoring effect -- if you are just in a discussion where someone helps you understand the RISC-V memory model, it feels wrong to go into another thread on the same site and unload a string of epithets on someone who feels differently than you do about how doctors should get paid.
Environments where all people tend to think exactly the same are typically extremist in some way, resulting from some kind of polarization process that eliminates people that don't express opinion at the extreme of spectrum. They are either removed forcibly or remove themselves when they get dissatisfied.
One way HN stays away from this polarization process is because of the discussion topics and the kind of person that typically enjoys these discussions. Staying away from mainstream politics, religion, etc. and focusing mainly on technological trivia means people of very different opinions can stay civilized discussing non-divisive topics.
Also, it helps that extremist and uncivilized opinions tend to be quickly suppressed by the community thanks to a vote-supported tradition. I have been reading HN from very close to the start (even though I created the account much later). I think the first users were much more VC/development-oriented, and as new users came they tended to observe and conform to the tradition.
(I read your piece. I think I figured it out. Users actually select themselves on HN, though in a different way. The people who can't cope with a diverse community can't find a place for themselves, because there is no way to block diverse opinion, and in effect remove themselves from here; this is what allows HN to survive. The initial conditions were people who actually invited diverse opinion, which allowed this equilibrium.)
And from the business perspective, they're trying to reduce the likelihood that your friend abandons their platform and goes to another one that she feels is more "built for her".
In other words, Steam, please filter games by my engagement in previous games I've played. News organizations, please don't filter news by my engagement in previous news.
Facebook's problem is it acts in two worlds: keeping up with your friends, and learning important information. If all you did was keep up with your friends' lives, filtering content by engagement is kind of meh.
Same with youtube. I mostly spend all my time on there watching technical talks and video game related stuff. It's pure entertainment. So filtering content is fine. But if I also used it to get my news, you start to run into problems.
I occasionally watch some of the Joe Rogan podcast videos when he has a guest I'm interested in. I swear, as soon as I watch one JRE video, I am suddenly inundated with suggestions for videos with really click-baity and highly politicized topics.
I've actually gotten to the point where I actively avoid videos that I want to watch because I know what kind of a response YouTube will have. Either that or I open them in incognito mode. It's a shame. I wish I could just explicitly define my interests rather than YT trying to guess what I want to watch.
Chronological, with the ability to easily filter who I see and who I post to. On each point, capabilities have either been removed, hidden, or made worse in some other creative way.
Adding insult to injury, having to periodically figure out where they've now hidden the save button for events, or some other feature they don't want me to use is always a 'fun' exercise.
But it doesn't please them -- study after study shows a high correlation between depression and anxiety and social media use.
https://en.wikipedia.org/wiki/archive.is
http://web.archive.org/web/20200526163314/https://www.wsj.co...
http://web.archive.org/web/20200526201849/https://www.wsj.co...
And in this instance, choosing not to respond to what its internal researchers found is, ultimately, a choice they've made. In theory, it's on us as users and consumers to vote with our attention and time spent. But given the society-wide effects of a platform that a large chunk of humanity uses, it's not clear to me that these are merely private choices; these private choices by FB executives affect the commonweal.
[1] https://www.theatlantic.com/technology/archive/2014/06/every...
There's something of an analogue to the observer effect: that the mere observation of a phenomenon changes the phenomenon.
Facebook can be viewed as an instrument for observing the world around us. But it is one that, through being used by millions of people and personalizing/ranking/filtering/aggregating, effects change in the world.
Or to be a little more precise, it structures the way that its users affect the world. Which is something of a distinction without much difference, consequentially.
Consider the following model scenario. You are a PM at a discussion board startup in Elbonia. There are too many discussions at every single time, so you personalize the list for each user, showing only discussions she is more likely to interact with (it's a crude indication of user interest, but it's tough to measure it accurately).
One day, your brilliant data scientist trained a model that predicts which of the two Elbonian parties a user most likely support, as well as whether a comment/article discusses a political topic or not. Then a user researcher made a striking discovery: supporters of party A interact more strongly with posts about party B, and vice versa. A proposal is made to artificially reduce the prevalence of opposing party posts in someone's feed.
Would you support this proposal as a PM? Why or why not?
The whole point of having friends and being able to (un)follow people is so I can curate my own feed.
I don't use Facebook anymore except for hobby related groups like my motorcycling group.
I deleted all of my old posts to reduce the amount of content FB has to lure my friends into looking at ads. But because of the covid-19 pandemic I was using Facebook again to keep in contact with people. Now that restrictions have eased in my country I can see people again, and I have deleted my Facebook posts again.
Is the goal of FB engagement/virality/time-on-site/revenue above all else? What does society have to gain, long term, by ranking a news feed by items most likely to provoke the strongest reaction? How does Facebook's long-term health look, 10 years from now, if it hastens the polarization and anti-intellectualism of society?
Strictly speaking, Facebook is a public company that exists only to serve its shareholders' interests. The goal of Facebook (as a public company) is to increase its stock price. That most often, if not always, means prioritizing revenue over all else.
That's the dilemma.
Then again, I believe Mark has control of the board, right? (And therefore couldn't be ousted for prioritizing ethical business practices over revenue - I could be wrong about this.)
Arguably, the PM doesn't care, since they have short-term targets they want to hit and they might not even be with the company in a few years' time.
Just tweaking one knob doesn't solve the problem. A real solution is required, that would likely change the core business model, and so no single PM would have the authority to actually fix it.
Fake news and polarization are two sides of the same coin.
Very high levels of engagement seems to be a negative indicator for social sites. You don't want your users staying up to 2AM having arguments on your platform.
I'm reminded of this article:
https://www.theatlantic.com/technology/archive/2015/11/progr...
"The term is probably a shortening of “software engineer,” but its use betrays a secret: “Engineer” is an aspirational title in software development. Traditional engineers are regulated, certified, and subject to apprenticeship and continuing education. Engineering claims an explicit responsibility to public safety and reliability, even if it doesn’t always deliver.
The title “engineer” is cheapened by the tech industry."
"Engineers bear a burden to the public, and their specific expertise as designers and builders of bridges or buildings—or software—emanates from that responsibility. Only after answering this calling does an engineer build anything, whether bridges or buildings or software."
Can we dispense with the idea that someone employed by facebook regardless of their number of history degrees has any damn influence on the structural issue here, which is that Facebook is a private company whose purpose is to mindlessly make as much money for their owners as they can?
The solution here isn't grabbing Mark and sitting him down in counselling; it's to have the sovereign, which is the US government, exercise the authority it has apparently forgotten how to use and rein these companies in.
My perception of reality is that you and your brilliant data scientist are (at best naive and unsuspecting) patronizing arrogant jerks who have no business making these decisions for your users.
You captured these peasants' minds, now you've got a tiger by the tail. The obvious thing to do is let go of the tiger and run like hell.
- User-configurable and interpretable: Enable tuning or re-ranking of results, ideally based on the ability to reweight model internals in a “fuzzy” way. As an example, see the last comment in my history about using convolutional filters on song spectrograms to distill hundreds of latent auditory features (e.g. Chinese, vocal triads, deep-housey). Imagine being able to directly recombine these features, generating a new set of recommendations dynamically. Almost all recommendation engines fail in this regard—the model feeds the user exactly what the model (designer) wants, no more and no less.
- Encourage serendipity: i.e. purposefully select and recommend items that the model “thinks” are outside the user’s wheelhouse (wheelhouse = whatever naturally emerging cluster(s) in the data the user hangs out in, so pluck out examples from both nearby and distant clusters). This not only helps users break out of local minima, but is healthy for the data feedback loop. (A rough sketch of both ideas follows below.)
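Assuming items already carry latent feature vectors from some trained model, the two ideas above could be wired together roughly like this; the feature names, data, and mixing rule are all invented:

    import numpy as np

    rng = np.random.default_rng(0)
    item_features = rng.normal(size=(1000, 8))        # 1000 items x 8 latent features
    feature_names = ["vocal", "deep_housey", "acoustic", "chinese",
                     "tempo", "lo_fi", "orchestral", "spoken_word"]

    def recommend(user_weights: dict[str, float], n: int = 10, n_serendipity: int = 2):
        # user-controlled ranking: the user reweights latent features directly
        w = np.array([user_weights.get(name, 0.0) for name in feature_names])
        scores = item_features @ w
        ranked = np.argsort(-scores)
        picks = list(ranked[: n - n_serendipity])
        # serendipity: mix in a couple of items from the bottom half of the user's own ranking,
        # i.e. things their current weights would never surface
        picks += list(rng.choice(ranked[len(ranked) // 2 :], size=n_serendipity, replace=False))
        return picks

    print(recommend({"deep_housey": 1.0, "vocal": 0.5, "spoken_word": -1.0}))

The key property is that the weights are the user's dials, not the model designer's, and the serendipity slots keep the feedback loop from collapsing onto one cluster.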
It is, in fact, not just crude but actually quite artificial to measure likelihood to interact as a single number, and personalize the list of discussions solely or primarily based on that single number.
Since your chosen crude and artificial indication turned out to be harmful, why double down on it? Why not seek something better? Off the top of my head, potential avenues of exploration:
• different kinds of interaction are weighted differently. Some could be weighted negatively (e.g. angry reacts); see the sketch after this list
• [More Like This] / [Fewer Like This] buttons that aren't hidden in the ⋮ menu
• instead of emoji reactions, reactions with explicit editorial meaning, e.g. [Agree] [Heartwearming] [Funny] [Adds to discussion] [Disagree] [Abusive] [Inaccurate] [Doesn't contribute] (this is actually pretty much what Ars Technica's comment system does, but it's an optional second step after up- or down-voting. What if one of these were the only way to up- or down-vote?)
• instead of trying to auto-detect party affiliation, use sentiment analysis to try to detect the tone and toxicity of the conversation. These could be used to adjusts the weights on different kind of interactions, maybe some people share divisive things privately but share pleasant things publicly. (This seems a little paternalistic, but no more so than "artificially" penalizing opposing party affiliation)
• certain kinds of shares could require or encourage editorializing reactions ([Funny] [Thoughtful] [Look at this idiot])
• Facebook conducted surveys that determined that Upworthy-style clickbait sucked, in spite of high engagement, right? Surveys like that could be a regular mechanism to determine weights on interaction types and content classifiers and sentiment analysis. This wouldn't be paternalistic, you wouldn't be deciding for people, they'd be deciding for themselves
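A minimal sketch of the first idea above, also folding in explicit More/Fewer-Like-This feedback; all the weights are invented for illustration:

    INTERACTION_WEIGHTS = {
        "like": 1.0,
        "love": 1.5,
        "comment": 0.5,        # comments alone aren't evidence the post was good
        "share": 1.0,
        "angry": -2.0,         # strong engagement, but count it against the post
        "more_like_this": 3.0,
        "fewer_like_this": -3.0,
    }

    def post_score(interactions: dict[str, int]) -> float:
        return sum(INTERACTION_WEIGHTS.get(kind, 0.0) * count
                   for kind, count in interactions.items())

    divisive = {"comment": 400, "angry": 250, "share": 80}
    pleasant = {"like": 300, "love": 60, "comment": 40}
    print(post_score(divisive), post_score(pleasant))  # -> -220.0 410.0

Under the raw "interaction strength" metric the divisive post wins easily; with signed weights it loses, which is the whole point of not treating every interaction as equally good.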
And even if it was supported by research, I would think about the long tail. What does this mean for my user engagement in the long run. This list might satisfy them now, but it necessarily leads to a narrowing down of the content pool in the long run. I would ask my marketing sciences unit or my data science unit, whatever I have, to try to forecast or simulate a model that tells us what would the dynamic of user engagement be with intervention A and intervention B.
I feel this is one of the biggest problems of program management today. Too much reliance on short-term A/B testing, which, in most cases, can only solve very tactic problems, not strategic problems with the platform. Some of the best products out there rely much less on user testing, and much more on user research and strategic thinking about primary drivers in people.
If you were to use this approach - you might see that actually, the product you have with choosing to optimise for short-term engagement brings less user growth and less opportunity for diverse marketing - which, it is important to note, is one of the main purpose of reach-building marketing campaigns.
I would say the way this whole problem is phrased shows that the PM, or the company indeed, is only concerned with optimising the frequency of marketing campaigns, rather than their quality, reach, and engagement.
Obviously, hindsight 20/20 and generals after battle and all that. I'm still pretty sure I would've thought more strategically than "how do I increase frequency of showing ads".
They've clearly got something interesting and possibly important, but 'interaction strength' is not intrinsically good or bad. I would instead ask the researcher to pivot from a metric of "interaction strength" to something more closely aligned with the value users derive from your product. (Side note: hopefully, use of your product adds value for your users. If your users are better off the less they use your platform, that's a serious problem.)
Do people interacting with posts from the opposite party come away more empathetic and enlightened? If they are predominantly shown posts from their own party, does an echo chamber develop where they become increasingly radicalized? Does frequent exposure to viewpoints they disagree with make people depressed? They'll eventually become aware, outside of the discussion board, of what the opposite party is doing; does early exposure to those posts make them more accepting, or does it make them angry and surprised? Perhaps people become fatigued after writing a couple of angry diatribes (or the original poster becomes depressed after reading one) and quit your platform.
Unfortunately, measuring interaction strength through comment word counts is easy, while sentiment analysis is really hard (a quick contrast is sketched below). Whether you're doing in-person psych evals or broadly analyzing users' activity feeds for life successes or signs of depression, you'll have tons of noise, because very little of those effects will come from your discussion board. Fortunately, your brilliant data scientist is brilliant, and after your A/B test, has tons of data to work with.
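To illustrate the gap, compare the cheap metric with even the crudest off-the-shelf sentiment score. VADER here is just a stand-in for "some sentiment model", the example comments are made up, and neither number says anything real about wellbeing:

    import nltk
    from nltk.sentiment.vader import SentimentIntensityAnalyzer

    nltk.download("vader_lexicon", quiet=True)
    sia = SentimentIntensityAnalyzer()

    def interaction_strength(comment: str) -> int:
        """The easy metric: long comments look like 'strong' interactions."""
        return len(comment.split())

    def sentiment(comment: str) -> float:
        """Compound score in [-1, 1]; angry diatribes land on the negative end."""
        return sia.polarity_scores(comment)["compound"]

    diatribe = "This is an absolutely disgraceful, dishonest take and everyone sharing it should be ashamed."
    thanks = "Thanks, that changed my mind."

    # The word-count metric rewards the diatribe; the sentiment score flags it as negative.
    print(interaction_strength(diatribe), sentiment(diatribe))
    print(interaction_strength(thanks), sentiment(thanks))

And even that only scratches the surface: a negative-sounding comment isn't necessarily a harmful interaction, which is exactly why the hard version of this measurement problem stays hard.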
The observed behavior is the same: using the new model, most people are still shown highly polarized posts, as indicated by the subjective assessments of user research professionals.
What should you do now?
If I were the PM I’d suggest a change in business model to something that aligns the best interests of users with the best interests of the company.
I’d stop measuring “engagement” or algorithmically favoring posts that people interact with more. I’d have a conversation with my users about what they want to get out of the platform, one that lasts longer than the split-second decision to click one thing and not another. And I’d prepare to spend massive resources on moderation to ensure that my users aren’t being manipulated by others now that my company has stopped manipulating them.
I think the issue of showing content from one side of a political divide or the other is much less important than showing material from trustworthy sources. The deeper issue, which is a very hard problem to solve, is dealing with the fundamental asymmetries that come up in political discourse. In the US, if you were to block misinformation and propaganda you’d disproportionately be blocking right-wing material. How do you convince users to value truth and integrity even if their political leaders don’t, and how do you as a platform value those things even if that means some audiences will reject you?
I don’t know how to answer those questions but they do start to imply that maybe “news + commenting as a place to spend lots of time” isn’t the best place to expend energy if you’re trying to make things better?
If a user is driven to political discussions, so be it.
Sure, this is good for the company because it means the user will spend more time on the platform, but it is a side effect really.
But Facebook feels it's their job to drive certain things to users. That's the whole point as far as they can tell. I disagree too.