Worth noting: the article's headline is one of those tricky situations where the summary isn't wrong, but should probably include more information.
FB allows advertisers to target specific topics, and they've been blacklisting objectionable categories. But the blacklisting appears to be manual, so while "nazi" isn't a micro-targeting category, things like Josef Mengele and a white supremacist punk band are.
Manually keeping up with and out-thinking objectionable content keywords is a perpetual arms race. If FB wants to win it that way, they'll have to invest pretty hard in that space if they don't want a story like this every quarter.
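To make the arms-race point concrete, here's a minimal sketch of a manual keyword blocklist. The category names are illustrative, not Facebook's actual taxonomy; the point is only that an enumerated list catches exactly the words someone already thought of, and nothing else.

```python
# A minimal sketch of why manual keyword blocklists are an arms race.
# The category names below are illustrative, not Facebook's taxonomy.

BLOCKLIST = {"nazi", "white supremacy"}  # terms an ops team thought to ban

def is_blocked(interest: str) -> bool:
    text = interest.lower()
    return any(term in text for term in BLOCKLIST)

# The obvious term is caught...
assert is_blocked("Nazi memorabilia")

# ...but proxies for the same audience sail through, because the list
# only knows the words someone already enumerated:
for proxy in ["Josef Mengele", "skrewdriver fans"]:
    assert not is_blocked(proxy)
```

Every news cycle like this one effectively adds a few more entries to the set, while the lingo has already moved on.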
Because an insanely high percentage of people will refuse to pay $1/year. Also, because $1 does not come close to covering the cost difference; targeted ads typically bring in 2x or more revenue per impression and the average revenue per user per year in the US is around $25.
Facebook made $48B in revenue last year. That's around $38/user (with much more coming from people in North America). Even $1/mo doesn't come close to covering it.
$1 is vastly different from free for a variety of psychological reasons. People will also be less likely to want to pay to see ads on every other post. You'd have to provide more value.
Also, the fees from credit card companies are so high that micropayments don't make sense (yearly subscription charge + per-transaction fixed fee + per-transaction percentage charge).
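The arithmetic behind that claim is easy to check. The fee figures below are typical published card-network rates (roughly $0.30 fixed plus ~2.9%), not anyone's negotiated ones, and the $25 ARPU figure is the rough US number from the comment above.

```python
# Back-of-the-envelope: why a $1/year micropayment is mostly eaten
# by card processing. Fee figures are typical published rates
# (~$0.30 fixed + ~2.9%), not Facebook's negotiated ones.

price = 1.00
fixed_fee = 0.30
percent_fee = 0.029

net = price - fixed_fee - price * percent_fee
print(f"net per subscriber per year: ${net:.2f}")

# Compare with the ad revenue the same user generates:
us_arpu = 25.00  # rough US ads revenue per user per year, per the comment
print(f"shortfall vs. US ARPU: ${us_arpu - net:.2f}")
```

About a third of the dollar vanishes before it reaches the platform, and the remainder is a rounding error next to what targeted ads bring in.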
Because that wouldn't cover the loss in revenue. $1 per year and non-targeted ads doesn't equate to $50B+ in revenue growing 20-40% per year. Also, advertisers come to FB because they can be ROI positive with the targeted ads. Most advertisers don't want to run non-targeted ads because they would be ROI negative.
It's a 'bad headline' if it will be commonly understood as something not representative of the content.
It's done on purpose by all publications, because the nuanced reality of most situations isn't inflammatory enough to drive clicks and create commotion.
Given this situation, i.e. one where it's really hard to track down and manage the colloquial lingo used around the world for various things ... the headline is unfair.
"Facebook fails to stop advertisers from targeting some extremist memes"
"Facebook unable to tamp down extremists' shifting lingo"
"Nazis by another name: advertisers target extremists using shifting terminology on Facebook"
There is definitely a more responsible headline here, and surely the editorial staff are capable of that if they wanted to.
Surely the staff at many publications are wary of this as well, it's one of the ugly pressures of business reality that 'someone' is enforcing.
As for FB ... this has to be a hard, whack-a-mole kind of thing. Sometimes I'm sympathetic to them, other times I think $100B and some of the best AI folks in the world should be able to mostly figure this out.
Much like rampant fake goods on Amazon ... Bezos can land a rocket by itself, but can't get counterfeit goods off of Amazon? ...
Not that I'd defend either company, but to be fair, landing a rocket is applied engineering and math. Invest enough to build it right, and it will land. Making subjective calls about "good vs bad" content or products, where what's considered good or bad may ebb and flow with current political/cultural whims, is not so straightforward. While "are nazis bad?" is pretty black-and-white, there are a lot of gray area topics. Not sure if adding magical AI fairies would help.
Can Facebook tell the difference between interest in studying history and interest in repeating it? Interest in a skinhead band seems to only have one interpretation, but couldn’t Goebbels and Himmler search queries just be to learn and see who is discussing them? I know I’ve spent a lot of time reading about WW2. I’m not trying to be obtuse, just ignorant of the degree to which Facebook can target different intents behind a query or interest.
Not all skinhead bands are nazis. It might be seen as a minor point but I am worried how quickly we make sweeping appraisals based on partial information.
Early skinhead culture and music overlapped heavily with Jamaican rude boy culture.
In fairness, the interest of Facebook is to target ads, so I don't think they really care. An impression on a user with Himmler interest is an impression on a user with Himmler interest, a click is a click.
Now the folks monitoring at DHS? Yeah, I think they probably go to great lengths to try to differentiate and segment the population of people who look at extremist material of any kind.
Actually being interested in skinhead bands might also have a legitimate scientific intention when studying political extremism. Intention and interest are very difficult to distinguish.
And you've described why I use DuckDuckGo when I want to learn about something like that. I have no idea how Google is interpreting my intent, if it is at all, and how it might come back to bite me in the future.
I think it's absolutely a datapoint considered by any organization that is trying to build a comprehensive profile of citizens to catch people like this before they hurt anyone.
https://www.npr.org/2019/02/20/696470366/arrested-coast-guar...
My understanding is that they rely on the reporting system because their whole curation model is built around assessing individual posts rather than patterns of activity.
I went through a phase where I listened to a lot of Hitler speeches on YouTube, just because I was historically interested in what the man actually said and how he said it. He's a fascinating person to study - also from a psychological point of view - precisely because of the horrible things that happened. Anyway, after that, YouTube kept recommending me nazi speeches for at least half a year. I remember how on some days I opened YouTube and Hitler kept reappearing on my front page. Now, I'm not a sensitive person so I don't mind, it's actually kind of hilarious. But after months of this, it irked me a bit.
Indeed, someone may have a very strong interest in Hitler and Nazis and Neo-Nazi culture without being one - aka a professor or researcher of authoritarian history.
I was unfamiliar with the bands and most of the people in those targeting lists.
The way I see it, these are the ways you can handle this:
1) Facebook builds this data the hard way. They staff a team of experts on "undesirables", who research and implement custom blocklists at facebook's scale. Insanely cash and time intensive, to say nothing of the "who decides what's undesirable" problem.
2) Spread cost and effort by amassing a central repository of known baddies, and all the orgs contribute and share access. The government does something like this with hashes of sex trafficking imagery, so that eng teams can filter against a blacklist. I think this topic is FAR more nuanced and less binary than "does this picture contain illegal pornography or nah". Who maintains this list of undesirables? You're at "social credit score" in a hurry.
3) Algos. You let software extrapolate commonalities from known-bad actors – school shooters, confirmed russian propaganda branches, etc. And let the machine learn their language and flag accordingly. This is going to be coarse and stupid in the way ML always is, and local business owners with names like Heinrich are gonna get their livelihoods smashed accidentally here and there. Not great.
4) What Simulacra said – you just turn the whole targeting infra off. Facebook stops making money. This is great, I'd love to see it as regulation, but it's a big stretch, and very lofty when phrased like this.
5) Some kind of adtech equivalent of finance's KYC (Know Your Customer) regulation. Tie ad buys to confirmable, prosecutable identities, and rather than filtering before launch, aggressively follow up after launch. You run an ad campaign for nazis? Cool, your LLC and its primary stakeholders are permabanned. Facebook has already tried light versions of this, but it was lip service.
IMO 4 and 5 are the places to spend effort. I think we need to start having conversations that do away with the idea that humans are autonomous and impervious to influence, and start having the discussion in a new context: When and how are you allowed to manipulate the minds of citizens at scale, and what kind of paper trail does it leave?
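Option 2 above can be sketched in a few lines. This is the exact-hash flavor of the idea (the way PhotoDNA-style systems match known illegal imagery); the sample content and hash set are invented for illustration. It also shows why the approach that works for fixed images fails for shifting lingo: one changed byte and the match is gone.

```python
# Sketch of the shared-repository idea in option 2: participating orgs
# check content fingerprints against a central set of known-bad hashes.
# Exact hashing works for fixed imagery but not for reworded text.
import hashlib

SHARED_BAD_HASHES = {
    hashlib.sha256(b"some known extremist pamphlet").hexdigest(),
}

def matches_known_bad(content: bytes) -> bool:
    return hashlib.sha256(content).hexdigest() in SHARED_BAD_HASHES

assert matches_known_bad(b"some known extremist pamphlet")
# One changed byte and the hash no longer matches -- which is exactly
# why "less binary" topics defeat this scheme:
assert not matches_known_bad(b"some known extremist pamphlet!")
```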
I don't understand why we don't let such platforms adopt a more laissez faire approach to such situations. There's an inordinate amount of pressure to curb free speech these days which seems very un-American.
I was in this camp for a while but the current political reality has shifted me towards viewing the normalization and acceptance of such hate speech as a negative to society. When I was growing up nazis and white supremacy were always framed in a negative light to make it clear they were wrong (this is mostly relevant in the developing years, less than 14 or so where children don't have a developed moral compass) now a days the extreme right is being treated as just another opinion you can have, normal adults recognize the danger associated with unrestrained nationalism and hate speech but young people are unexposed or unfamiliar with what it can lead to.
This stuff is dangerous, it can warp the way you view the world, any information glorifying it should always be accompanied with explanations and warnings as to the hate it is imbued with... It is un-American to hate another person because of their origin or their religion, it's also un-American to squash an open discussion on that topic but... that discussion needs to happen between mature adults in a setting that makes it clear how unacceptable it is to lean on racist tropes.
I think the only lasting solution to this is to raise a populace smart enough to be impervious to the radical left and right, both fringes being the domain of sloppy thinking and emotionally driven agendas. And in a way that keeps the flywheel of education spinning. We do a bad job of that today, I think.
But that takes time, strength, money, and – most critically – unified vision that I'm not sure America has right now.
So how do you implement education reform that takes 50 years, when nobody even agrees that it's needed, and kids are getting killed and radicalized today?
Do you allow it to continue in defense of the underlying principle of truly free speech? Maybe. To abandon that principle is a terrifying slippery slope.
This is just one of the ways where tech and culture vastly outpace science and regulation.
I have no answers.
I generally agree with you, and I'm trying to parse through this myself. I think one difference in the modern era is that the Internet and social media have blurred the lines between broadcast speech and individual speech. Historically, we recoil at any restrictions on individual speech. They exist (can't yell FIRE in a crowded theater), but they are generally considered an unfortunate necessity. However, most people agree that broadcast speech should be censored.
Now we're in an era where speech can start out as individual, and then be broadcast. Or perhaps it's better considered a false dichotomy in the first place. Either way, we definitely haven't figured out how to handle this as a society.
Letting the government use its monopoly on force to stop speech is un-American. Holding Facebook accountable for the voices it chooses to amplify and profit from is the market/society at work. Free speech was never something that makes distributors not responsible morally or ethically for what they distribute.
Separately, hate speech crosses over into a form of speech many societies, America included, have decided can cause direct harm and needs special treatment (including legal restrictions). People can disagree about that, but should probably get that disagreement sorted before diving into the additional questions of speech versus distribution.
Uhh, that's what we have been doing the whole time. That's the status quo. We're talking about changing it because it's caused massive societal problems.
In the specific context of Facebook ad targeting, reports and investigations concluded that it was used as a propaganda channel in the US by foreign national agents attempting to tip the scales in the US Presidential election in 2016. "Why is that a bad thing" or "Why is that Facebook's problem" is a reasonable question, but it won't be seen as popular in the US if Facebook accepts those questions publicly as its business attitude.
The US has always had effective limits on broadcast speech, a history of bans on pornography and "communist" literature, as well as a number of quasi-self-imposed rules (Hays Code, MPAA ratings, Comics Code, "color bar" on radio and live music, etc). Effectively you could have very free speech so long as it was small-scale, but anything sufficiently controversial or offensive on a mass scale attracted attention.
Really? I've witnessed zero efforts on behalf of the government to curb free speech. But if you're talking about private speech that happens on privately owned websites, then that's flat out what is allowed by the constitution.
Seems to me that the issue is actually with private ownership being able to censor. Which can only be addressed with, guess what, government intervention.
"Inordinate" is in the eye of the beholder. Free speech is curbed all the time in the interests of maintaining smooth operation of society, and as even the most intelligent among us tend to forget, free speech is not guaranteed on platforms like Facebook. There are quite a few laws on the books about hate speech, and how that extends to, say, Facebook ads, is what's being discussed here.
...6) Full transparency at scale. When someone buys an advert in a newspaper or TV broadcast, it is visible to anyone watching.
In contrast, thousands of adverts can run in the FB environment without anyone but the target able to see them -- completely under the radar.
Starting with #5 KYC, and adding a site where EVERY advert of every type is available for public inspection, along with its (verified) originator info and targeting parameters.
This would allow all kinds of scrutiny by journalistic and public interest groups (e.g., researchers tracking hate groups, etc.).
FB is making a bit of a start at this by publishing some ads, but until they get to full transparency, it can't be trusted.
The full transparency would also be a huge benefit to researchers.
I'd also be happy to see it required as a regulation for all players, not only FB.
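One possible shape for the public-inspection registry proposed above, as a sketch. The field names are invented for illustration; a real schema would come out of regulation, not this comment.

```python
# Hypothetical record shape for a "every ad is publicly inspectable"
# registry combining option 6 (transparency) with option 5 (KYC).
# All field names are invented for illustration.
from dataclasses import dataclass, field, asdict

@dataclass
class PublicAdRecord:
    ad_id: str
    creative_text: str
    verified_buyer: str          # KYC-confirmed legal entity
    targeting_params: dict = field(default_factory=dict)
    impressions: int = 0

record = PublicAdRecord(
    ad_id="ad-0001",
    creative_text="Join our movement",
    verified_buyer="Example LLC",
    targeting_params={"interests": ["history"], "age_min": 18},
)
# Anyone -- journalists, researchers, watchdog groups -- could fetch
# and audit the full record, targeting parameters included:
print(asdict(record))
```

The key property is that the targeting parameters travel with the ad and the verified buyer, so "under the radar" campaigns stop being possible by construction.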
I'd argue the rule should simply be that, the moment money changes hands, you can either tell me who paid you for a service, or you're legally liable for performing that service yourself. KYC for ads would then just be a natural consequence of this.
Facebook already necessarily employs a small army of moderators to remove illegal and "undesirable" material; they must have supervisors who set the overall policy direction and deal with new, emergent problems.
Companies that use social media on a large scale, and those companies that run social networks disguised as videogames, employ "community managers", whose job it is to understand and communicate with the community, including keeping abreast of disruptions. It doesn't seem that Facebook itself has many of these.
Facebook should get itself some "machine anthropologists", to study the ant farm. They can then get a sense of these problems before they get in the press, and definitely before they get to the Parliamentary committees. And feed the existing algorithms.
> Facebook should get itself some "machine anthropologists", to study the ant farm.
This is such a cool job title, I love it. I've long dreamed of doing this job at Twitter. There are so many blatantly patterned spam attempts and whatnot. I would love to work on analyzing, for example, the patterns in follower graphs surrounding templated bitcoin scam tweets.
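A toy version of that pattern-hunting: templated scam tweets often differ only in a handle or an amount, so normalizing those away makes the template itself visible. The example tweets are made up.

```python
# Toy "machine anthropology": collapse handles and numbers so that
# templated scam tweets fall onto a single visible pattern.
import re
from collections import Counter

tweets = [
    "Send 0.1 BTC to @elonmusk_giveaway and get 1 BTC back!",
    "Send 0.5 BTC to @real_giveaway and get 5 BTC back!",
    "Lovely weather in Berlin today.",
]

def normalize(text: str) -> str:
    text = re.sub(r"@\w+", "@USER", text)       # collapse handles
    text = re.sub(r"\d+(\.\d+)?", "N", text)    # collapse numbers
    return text

templates = Counter(normalize(t) for t in tweets)
# The two scam tweets collapse onto one template with count 2:
print(templates.most_common(1))
```

Real follower-graph analysis is obviously far richer than this, but even this crude normalization surfaces the "blatantly patterned" behavior the comment describes.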
The only viable solution is #4. So long as Facebook can make money by targeting specific groups or behavioral features, their algorithms and advertisers will find facsimiles for protected groups, the vulnerable, children, or Nazis since this optimizes revenue and engagement in dramatic ways. Even the most well-intentioned actor -- and Facebook is very far from that -- could not win this self-imposed game of whack-a-mole.
One alternative to shutting off targeting altogether would be switching from a blacklist to a whitelist approach, where regulators provide the set of features or groups that are allowed to be targeted.
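The whitelist inversion is worth spelling out, because its failure mode is the opposite of the blocklist's. The allowed categories below are illustrative, not from any regulator.

```python
# The whitelist approach sketched: instead of banning bad categories,
# only a regulator-approved set may be targeted at all.
# The approved categories here are purely illustrative.
ALLOWED_TARGETING = {"gardening", "cooking", "local sports"}

def can_target(category: str) -> bool:
    return category.lower() in ALLOWED_TARGETING

assert can_target("Gardening")
# Anything not explicitly approved -- including every evasive synonym
# an extremist invents -- is rejected by default:
assert not can_target("Josef Mengele")
```

A whitelist rejects by default, so the arms race over shifting lingo disappears; the cost is that every legitimate new category has to be approved before anyone can use it.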
Can we just have a social network that doesn't target anyone, for anything, and just lets us communicate, and build relationships without trying to manipulate us or violate our privacy?
Sadly we already have such a thing. It's the Internet itself.
It's quaint how in movies people easily identify with the underdog/resistance, e.g. a "Neo" in the "Matrix", and would see themselves taking the same direction if a similar scenario would ever play out.
Yet here we are, and at the first hint of inconvenience it's blue pills all around.
If it helps the understanding, it's worth remembering that life outside the Matrix was a hardscrabble garbage existence, so much so that a character was willing to kill to get back in.
Any social network that has people posting any personal data at all, will be mined. For one purpose or another. (And probably for one purpose AND the other.) Almost the only way to stop it would be to make it explicitly illegal, and even then they would still allow mining for certain purposes. There's just no way to get away from it at this point. (Other than just not using social networks at all I suppose?)
Putting personal data out there means... putting personal data out there; advertisers mining that data is a hard problem to solve. So can we re-orient the need for a solution toward the injection of advertising into the platform? Let's solve this problem one step at a time: first stop advertisers from leveraging the platform to distribute advertisements, while only allowing them to mine the data to support ad campaigns. Once that's done we can look at salves for the data mining of social media, though I can't see that being possible without really resilient privacy enforcement.
Yes, but you would have to pay for it. And convince your friends to also. Actually maybe the subscription model would let you pay for their subscriptions as well.
It's funny because the idea of paying for it reads like a non-starter. Maybe it is. Crazy times when we'd trade away so much for so little vs just paying some nominal amount.
But I sometimes wonder if it's just as much a convenience thing as it is the actual money. That is, if the sign up process was just as easy with payment as it is now, would a lot more people go for it than assumed (at some price point)?
In any case, I don't think that pay or have your privacy co-opted are really the only paths to a viable social network business model.
Phone, email, texting, real life activities? Where did the obsession come from with having to be in real time constant contact with a ton of people who we didn't care about staying in contact with before? Other than a common place to share photos, which existed prior, why?
Wasn't that exodus precipitated by the same sort of "undesirables" reputation? Though I'd much rather have a thousand Harry Potter erotic fan fiction authors than a handful of neo-nazis sharing my website.
Don't these exist already? [1] Other than that, I can relate to the other comments of just doing away with social networks on the web and replacing it with real-life communication via talking, phone calls, emails, or text messaging. As I get older I realize how much I miss the traditional ways of communicating without being inundated and distracted by constant notifications from present day social networks.
I deleted my fb+twitter several months ago. It was lonely at first. I realized I had let social media 'automate' my social life away. Since then I've been trying to foster actual friendships through the 'normal channels' like talking, hanging out (when I can make the time) and sms. It feels a bit Luddite in a way, and I feel less like I have 600 friends, and rather like I have 6 friends.
It hasn't been some revelatory experience like some would suggest; social media does make things easier and life without it feels quieter. But like anything else, it's a trade-off. The biggest positive I've noticed was that I find myself less dopamine-addled. I no longer waste hours on my phone scrolling in order to relax. It took some adjustment time, but I find it easier to get my dopamine fix from more productive hobbies.
That would, at the very least, mean that users would pay a fee to use their product. As of now, as a free product, that is the only way to monetize. The internet loves its free, open products but at the same time, ironically, loves to build products whose founders are billionaires and even raises them to near hero status. It culminates in the users having a pretty terrible experience in the way of manipulation and privacy.
Commercial radio and broadcast television worked without fine grained metrics on the users. There's nothing special about the internet that makes invasive tracking a necessity.
Really though? Advertising supported multi-billion dollar industries for decades without mining our personal info. Just because they can doesn't mean they have to. If companies want to advertise to Facebook's hundreds of millions of users, they will do so without targeting if that's their only option.
Why can't ads (again) be targeted to content and not users? If I'm reading an article on a Tesla I might be interested in what VW is offering instead - and might not be interested in that headset I looked at a week ago.
"at the very least"? No i think the third option is having an advertisement model that simply feeds users ads without having to abuse their data for 'optimization'.
It can still be mined. That's the crux of the problem is the mining. If you can mine it, then you can use that data to target ads, you can use it for law enforcement purposes, you can use it to stifle dissent, etc etc etc. Could also use the mined data for a myriad number of good things as well. Really is just up to the miner.
Point is, it's the mining that lays the foundation for the dangers, and there is little that prevents facebook, or twitter, or whatever entity that wants to mine mastodon instances from doing so.
Even in a decentralized/federated network there will always be people trying to mine data for the purposes of advertising and political campaigning. It's human nature to try to take advantage of others for personal and financial gain.
The network effect is strong and if your family is on Facebook it can seem rude to post about your life elsewhere where they lack access to it. I'd love to see a government mandate forcing cross-pollination of these networks (perhaps using Diaspora's node approach) that would allow smaller hosts to compete with the big guys by letting people pull feeds in from other networks. Basically we need XMPP for social media but none of the social media companies have any motivation for implementing it so... ideally the government does sometime.
OP didn't say it had to be free. You could also have a social network that just displayed ads without any kind of targeting and user-tracking whatsoever.
Who would pay for that though? And why shouldn't it manipulate? I'm fine with manipulating people. Nudging is okay in my book if you're benevolent and aiming to make people's life better. It's just shit when all you want is to make them buy some product to fill the void you've helped create.
Am I the only one who doesn't mind the targeted ads on FB? Even from a site I visited recently, it doesn't really bother me. I actually like seeing products I'm interested in as opposed to generic targeting like you have on TV. I've discovered some pretty compelling products this way.
I feel the same, but at the same time I understand that not everyone feels that way, and that there should be easy ways for those people to not have to be subject to it.
Sadly any time this kind of thing gets discussed, the nuance gets thrown out and the discussion becomes either "ban all targeting" or "don't regulate anything", both of which are horrible ideas IMO.
Just stop using Facebook. Stop putting the onus on Facebook to fix all of this. We just need to stop using it. There are alternatives out there; start using them.
These articles are always going to come up. We're going to act surprised for 5 minutes, and continue to feed the machine.
After seeing these comments, I think we have it all completely wrong (that is, the mental model of those of us who wish FB stopped existing). Don't ask people to stop using Facebook. Encourage them to use it more.
We should instead call out the people who fund Facebook as sponsoring child abuse. [1]
And those who work at Facebook as inciting pogroms. [2]
And finally, those who defend Facebook as "dumb fucks" because, well, that's what they are anyway according to Mark Z.
At the same time, don't ask anyone to stop using Facebook. In fact, they should use Facebook so much that they bring down Facebook's servers. [3] Encourage the low ARPU "deadbeats" to keep using Facebook and its network as much as possible.
Just call out everyone who is giving them money [4].
I just don't understand why Facebook doesn't provide a subscription option. I would be willing to pay $4.99/month or perhaps a bit more in exchange for no ads and no tracking.
What is so wrong about being able to pay for things?
It blocks you from seeing the future. If they get money and no data, they're getting paid for who they are today with less insight into who they need to be tomorrow to keep getting paid.
The idea that data is "just sold to businesses" and can be substituted for its sale-value equivalent in cash is wrong, IMO. Serious insight, product directions, election swinging power, really big shit – are all emergent properties of data in aggregate like this that are somewhat unknowable until you actually get the data and see things unfold.
So what they're actually keeping, by choosing data over cash, is priceless long-term optionality.
Couldn't agree more, and the real problem is that Facebook (would be the same for any other company with a similar ad targeting model) is now in the position of arbiter of what is acceptable and what is not. Perhaps a punk white-supremacist band is off limits, but what about joining "maga" with "Infowars" with "Insane Clown Posse"? You'll likely be targeting many of the same people anyway.
Facebook at $X/mo is AOL. AOL was a successful but fairly predictable business, while Facebook offers a promise of growth that commands a much higher return.
I have no answers.
Now we're in an era where speech can start out as individual, and then be broadcast. Or perhaps it's better considered a false dichotomy in the first place. Either way, we definitely haven't figured out how to handle this as a society.
Separately, hate speech crosses over into a form of speech many societies, America included, have decided can cause direct harm and needs special treatment (including legal restrictions). People can disagree about that, but should probably get that disagreement sorted before diving into the additional questions of speech versus distribution.
Seems to me that the issue is actually with private ownership being able to censor. Which can only be addressed with, guess what, government intervention.
In contrast, thousands of adverts can run in the FB environment without anyone but the target being able to see them -- completely under the radar.
Starting with #5 KYC, and adding a site where EVERY advert of every type is available for public inspection, along with its (verified) originator info and targeting parameters.
This would allow all kinds of scrutiny by journalistic and public interest groups (e.g., researchers tracking hate groups, etc.).
FB is making a bit of a start at this by publishing some ads, but until they get to full transparency, it can't be trusted.
The full transparency would also be a huge benefit to researchers.
I'd also be happy to see it required as a regulation for all players, not only FB.
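The public ad registry proposed above implies a record per advert combining the creative, verified originator info, and the full targeting parameters. A minimal sketch of what one such entry might look like, with all field names invented for illustration (this is not any real Facebook API):

```python
# Hypothetical shape of one entry in the proposed public ad registry.
# Every advert of every type would get a record like this, open to
# inspection by journalists, researchers, and the public.
from dataclasses import dataclass

@dataclass
class AdRegistryEntry:
    ad_id: str
    creative_text: str          # the ad exactly as shown to users
    originator_name: str        # verified legal entity behind the buy
    originator_verified: bool   # KYC-style identity check passed
    targeting_parameters: dict  # every targeting parameter, no exceptions
    impressions: int = 0

entry = AdRegistryEntry(
    ad_id="2019-000123",
    creative_text="Example ad copy",
    originator_name="Example LLC",
    originator_verified=True,
    targeting_parameters={"country": "US", "age_range": "25-34"},
)
```

With records like this published for every campaign, groups tracking hate organizations could query by originator or by targeting parameter instead of relying on leaks.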
Facebook already necessarily employs a small army of moderators to remove illegal and "undesirable" material; they must have supervisors who set the overall policy direction and deal with new, emergent problems.
Companies that use social media on a large scale, and those companies that run social networks disguised as videogames, employ "community managers", whose job it is to understand and communicate with the community, including keeping abreast of disruptions. It doesn't seem that Facebook itself has many of these.
Facebook should get itself some "machine anthropologists", to study the ant farm. They can then get a sense of these problems before they get in the press, and definitely before they get to the Parliamentary committees. And feed the existing algorithms.
This is such a cool job title, I love it. I've long dreamed of doing this job at Twitter. There are so many blatantly patterned spam attempts and whatnot. I would love to work on analyzing, for example, the patterns in follower graphs surrounding templated bitcoin scam tweets.
One alternative to shutting off targeting altogether would be switching from a blacklist to a whitelist approach, where regulators provide the set of features or groups that are allowed to be targeted.
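The structural difference between the two approaches can be shown in a few lines. This is a toy sketch with invented category names, not a real targeting API; the point is only the default behavior when a category is unknown.

```python
# Hypothetical whitelist-based targeting filter. Under a blacklist, a newly
# coined dog-whistle category slips through by default; under a whitelist,
# anything regulators have not explicitly approved is dropped by default.
# All category names below are invented for illustration.

ALLOWED_CATEGORIES = {
    "age_range",
    "country",
    "language",
    "broad_interest:sports",
    "broad_interest:cooking",
}

def filter_targeting(requested_categories):
    """Split a requested targeting list into allowed and rejected."""
    allowed = [c for c in requested_categories if c in ALLOWED_CATEGORIES]
    rejected = [c for c in requested_categories if c not in ALLOWED_CATEGORIES]
    return allowed, rejected

allowed, rejected = filter_targeting(
    ["country", "broad_interest:sports", "obscure_hate_keyword"]
)
print(allowed)   # ['country', 'broad_interest:sports']
print(rejected)  # ['obscure_hate_keyword']
```

This is why the whitelist sidesteps the "perpetual arms race" of manually blacklisting objectionable keywords: new keywords fail closed instead of failing open.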
It's quaint how in movies people easily identify with the underdog/resistance, e.g. a "Neo" in the "Matrix", and would see themselves taking the same side if a similar scenario ever played out.
Yet here we are, and at the first hint of inconvenience it's blue pills all around.
But I sometimes wonder if it's just as much a convenience thing as it is the actual money. That is, if the sign up process was just as easy with payment as it is now, would a lot more people go for it than assumed (at some price point)?
In any case, I don't think that pay or have your privacy co-opted are really the only paths to a viable social network business model.
[1] https://diasporafoundation.org/
It hasn't been some revelatory experience like some would suggest; social media does make things easier, and life without it feels quieter. But like anything else, it's a trade-off. The biggest positive I've noticed is that I find myself less dopamine-addled. I no longer waste hours on my phone scrolling in order to relax. It took some adjustment time, but I find it easier to get my dopamine fix from more productive hobbies.
If you're hoping for a commercial, centralized social network, then you're always going to have to deal with perverse incentives.
[1] https://joinmastodon.org/
Point is, it's the mining that lays the foundation for the dangers, and there is little that prevents facebook, or twitter, or whatever entity that wants to mine mastodon instances from doing so.
Is it? Or is it the perverse financial incentive structure that leads humans to put financial gain over all other priorities?
Would humans still make decisions to take advantage if the incentive structures were built differently?
How does any of that stop the data mining?
It's the data mining and the ad targeting that people want to stop. And the FB competitors could be mining data just as easily as FB can.
Why should it? And why not have a social network that just serves up ads that aren't based on tracking/targeting users?
"Better" according to whom?
Sadly any time this kind of thing gets discussed, the nuance gets thrown out and the discussion becomes either "ban all targeting" or "don't regulate anything", both of which are horrible ideas IMO.
At least I hope it doesn't.
These articles are always going to come up. We're going to act surprised for 5 minutes, and continue to feed the machine.
We should instead call out the people who fund Facebook as sponsoring child abuse. [1]
And those who work at Facebook as inciting pogroms. [2]
And finally, those who defend Facebook as "dumb fucks" because, well, that's what they are anyway according to Mark Z.
At the same time, don't ask anyone to stop using Facebook. In fact, they should use Facebook so much that they bring down Facebook's servers. [3] Encourage the low ARPU "deadbeats" to keep using Facebook and its network as much as possible.
Just call out everyone who is giving them money [4].
Let's see how long FB operates after that.
[1] https://www.npr.org/2019/02/21/696430478/advocates-ask-ftc-t...
[2] https://newrepublic.com/article/147486/facebook-genocide-pro...
[3] https://www.jbaynews.com/whatsapp-crashes-almost-worldwide-o...
[4] https://m.signalvnoise.com/become-a-facebook-free-business/
What is so wrong about being able to pay for things?
The idea that data is "just sold to businesses" and can be substituted for its sale-value equivalent in cash is wrong, IMO. Serious insight, product directions, election swinging power, really big shit – are all emergent properties of data in aggregate like this that are somewhat unknowable until you actually get the data and see things unfold.
So what they're actually keeping, by choosing data over cash, is priceless long-term optionality.
The math and the computers don't care; we have to care, unless we want to facilitate just about anything.