If we know that humans have all sorts of cognitive biases, how come it's ok to use that fact while at the same time we insist there's some kind of free market?
Say you discover that putting good-looking women next to cars causes the sale of cars to increase. Why does nobody question whether it is legitimate to do so? It's as if there's a line between actively lying ("Studies show that men who buy this car will find many many women attracted to them") and just putting it there suggestively, for some as yet undescribed but working cognitive bias to do its magic.
Some advertisers even make a joke out of it, eg the Lynx ads where the dude is thronged by a huge horde of women. It's a cliché, for a good reason.
I suppose most people will just say you have free will and it's your own fault for thinking what was suggested, but I sense this is more of a grey zone than most people are willing to admit. How can the free market work if everyone is so easily affected by suggestion?
---
Of course this also applies to the free market in ideas. In what sense are people free to make up their minds if it's decided for them what they should see, whether it's the government doing it or FB? Isn't this the same as the authoritarian nightmares that we've been pointing fingers at?
There is absolutely no limit to this. Putting on a suit for a financial job interview exploits the cognitive biases of interviewers. Tattoos exploit the cognitive biases of hipsters. Putting hockey-stick growth projections in pitch decks exploits the cognitive biases of VCs. Equity grants (pretty much always) exploit the cognitive biases of startup employees. Driving a fancy car exploits the cognitive biases of (a very large portion of) the dating market.
Human beings are not computers. No aspect of human behavior is perfectly logical or rational. You simply cannot ban emotional appeals in principle, because all appeals have an emotional component. This way lies a dystopia far more terrible than "grandpa shared some nonsense on facebook".
Yes, the problem lies not with the technology, but with the gross asymmetries created by wealth and power imbalances. There is an enormous difference in the danger posed by an individual salesperson using cognitive bias on a personal level, and a multibillion-dollar conglomerate using it in an automated, industrial way to exploit millions of people. Those two situations should be regulated very differently.
Now it came about after this that Absalom provided for himself a chariot and horses and fifty men as runners before him. Absalom used to rise early and stand beside the way to the gate; and when any man had a suit to come to the king for judgment, Absalom would call to him and say, “From what city are you?” And he would say, “Your servant is from one of the tribes of Israel.” Then Absalom would say to him, “See, your claims are good and right, but no man listens to you on the part of the king.” Moreover, Absalom would say, “Oh that one would appoint me judge in the land, then every man who has any suit or cause could come to me and I would give him justice.” And when a man came near to prostrate himself before him, he would put out his hand and take hold of him and kiss him. In this manner Absalom dealt with all Israel who came to the king for judgment; so Absalom stole away the hearts of the men of Israel. (2 Samuel 15:1-6)
We're going to have to figure out the right limit at some point. Otherwise we're heading towards some neural network continually feeding depression-vulnerable people content that keeps them in a perpetual state of depression in order to optimize engagement. Perhaps we're already there.
There's a big difference between an accountant putting on a suit for a job interview, and a company having realtime insight into the behavior of every single accountant in the U.S. as they search for a job.
The latter exists and has been used by Google to offer jobs to users for whom they may have an opening. (Or perhaps a competitor has openings and Google wants to ensure the best candidates don't work there, which is what I would do if I were Google.)
That several gargantuan companies have and use this kind of insight into widescale human behavior is more terrible than "grandpa shared some nonsense on facebook."
But you can't just say that bias exploitation should never be banned simply because there's a large variation in how it's done. Obviously some of these things are unacceptable and others are subtle enough to be acceptable.
It'd be like saying that all car driving should be banned because high speed driving is extremely risky. Of course that makes no sense, because there's a huge difference between driving 100mph in a neighborhood and just doing 25mph, and it's also the reason we have speed limits, which draw a line in the sand, past which we say "this is unacceptable."
> You simply cannot ban emotional appeals in principle, because all appeals have an emotional component. This way lies a dystopia far more terrible than "grandpa shared some nonsense on facebook".
True enough. But are you implying (not saying you are, just asking) that ~"nothing can be done"?
"Banning emotional appeals....This way lies a dystopia..." is just one option - might there be other approaches that could improve the situation?
Some economists have tried arguing that advertising adds value to products, but I think it only hinders people from making choices based on the product's merit. It's a distortion of the market.
We wouldn't lose much if advertising were simply forbidden. A traditional definition of it would probably be enough to get rid of most of it. The main problem is that huge sums of money are involved, and almost all media profit from it to some extent, so there's a huge incentive to shut down this conversation.
> Some economists have tried arguing that advertising adds value to products, but I think it only hinders people from making choices based on the product's merit. It's a distortion of the market.
Precisely. Spec sheets, unbiased reviews, ratings systems that vet purchases: those add value. Almost by definition, what we consider advertising is designed to divert humans from making rational free-market choices. Stretching the analogy: attention is finite, therefore every ad you consume moves you further from perfect information.
That's a purely pragmatic argument without getting into the whole bit where ads use all sorts of psychological tools to shape behavior.
It would seem that review sites would be a massive beneficiary of a ban on advertising; whether they review in a Consumer Reports manner (generally scientifically backed) or a Bob’s Internet Affiliate Shills or Celebrity Chris’ Favorites manner is left as an exercise to the reader.
>We wouldn't lose much if advertising were simply forbidden.
We would lose the current ad industry and just replace it with some other form of it.
You can't get rid of advertising, because the existence of the product itself is already advertising. Let's imagine that advertising were banned. How could a car manufacturer still advertise their cars? Put bigger logos on them and sell lots of cheaper models for a while. That way the city will be full of your vehicles, and any time someone sees a car they'll think of your cars. Same goes for Coca-Cola and other brands; in that case they just need to take up more shelf space.
As long as people care what is popular and fashionable rather than what is good, there will be ways of exploiting it.
How about just making it no longer tax deductible as a start and see how that goes? Spend some of the money on increased consumer protection.
As an absurd example, if someone were to invent a ray-gun that reprograms people's brains with arbitrary beliefs it would be 100% unethical and illegal to use.
However, ads or fake news articles that take many exposures to reprogram you (by using the cognitive bias back channels, as you mention) are perfectly fine. ¯\_(ツ)_/¯
With the raygun there's a clear cause and effect. You can fairly easily prove that what changes beliefs is the raygun. You can't do that with ads and fake news articles.
The other point is that with ads and articles you have to engage with them yourself. Something can grab your attention, but you can decide to ignore it anyway.
There's this problem in the sciences and higher academe where we give things names that sound like they mean one thing but actually mean another. Einstein's Special Theory of Relativity is a common example. Ask the average person on the street what it means and they'd say, "Well of course I know what it means; after all, it means that everything is relative."
Well obviously that's not what it means. What it means is that the speed of light is rather impossibly constant. Likewise, economics has this same issue. "Free Market" does not mean everyone has complete free will and has total immunity to persuasion. "Free Markets" defines a philosophy of trade where individuals can own and sell property. Whether or not people are 'easily affected by suggestion' as you put it, has got no impact on the concept of free markets.
> Whether or not people are 'easily affected by suggestion' as you put it, has got no impact on the concept of free markets.
Yes, it does. A free market is not just about individuals owning and trading property. A free market requires that all trades are voluntary. If people are forced, for example by law, to make trades they would not freely choose to make, you do not have a free market: forced trades are not voluntary. But if people are manipulated into making trades they would not freely choose to make if they knew the information the manipulator is hiding from them, those trades are not voluntary either.
"Individuals can own and sell property" is part of but not the whole concept of free market.
All the economic theory of free market (and all the advantages of it) rely on a few core assumptions, like liberty to trade and set prices, but also lots of buyers, lots of sellers, full competition, low barriers of entry and full information.
Just as it's well known that a market that devolves into monopoly or oligopoly does not work like a free market anymore, or in the case of severely unequal bargaining power, the same applies in the case of information asymmetry, which also is well known to lead to a failure of free market.
So whether "people are easily affected by suggestion" does matter, because if that becomes the case and companies are widely using effective methods to do so, then the resulting economic structure of the competition is not like a free market.
Just a minor nitpick, but it's really special relativity that says the speed of light is constant (and the maximum speed anything can travel at). General relativity is a much more general theory about gravity and spacetime.
I see the rest of society buying and liking the typical brands such as Tide, Kraft, Nabisco, PepsiCo, what have you, and I shudder. It's really weird, but I started to have an adverse reaction to all the companies that actually advertise, and I'm always on the lookout for companies that use few ingredients and in general have no commercials.
And this has changed me so much. Even seeing those stereotypical hot-rod style cars that you described as attracting women - I even actively dislike people who show off. I have never had an FB account.
I don't know where I'm going with this, but yeah, cognitive biases and such and controlling people. I guess I'm just trying to say it doesn't work on everyone - not identically and as expected, at least.
I suppose there are both philosophical and pragmatic reasons for the current situation:
1. Philosophical - you don't want to get the threat of physical force involved unless there is a very, very good reason for doing so. That is, you don't want to make things illegal. So we forbid outright lying, fraud, and openly misleading customers, but allow the things you describe. If you have a private online platform, why should you be forbidden to, or compelled to, moderate its content? Yes, tech companies will try to maximize user engagement, but what else should they be trying to maximize? It is up to the rest of society to develop a proper culture and information hygiene to shape their demand, and tech companies will adapt. It can develop organically and be passed from generation to generation, or you can try to expedite the process through schools, media, and other mechanisms that exist in a society.
2. Practical - people tend to get interested in forbidden things, and they seem to like their biases. In the Soviet Union, for example, perhaps most people were fascinated by the Hollywood movies of the 70s and the 80s, and even by the way Western brands were being advertised. They seemed 'cooler' than the Soviet products, which were mostly devoid of marketing. Try to eliminate most biases, and people will vote with their feet. Arguably a better approach is to allow things, but in good faith educate people on how best to deal with them and why that is the right way of dealing with them. The same way a lot of parents today would explain to their children why they should stay away from cigarettes, for example, or wash their hands before eating.
I don't think there's anything reasonable about either of these:
1. Are the freedoms of corporations to act without the coercion of the state more important than the freedom of citizens to act without the subtle but effective coercion of corporate persuasion? The case for individual freedom doesn't always imply the absence of action by the state.
2. Would restricting manipulative practices by businesses really lead to more businesses exploring those practices? That may be the case in restricting consumer choices but would it also be true for corporate behavior? I'm not so sure.
> It is up to the rest of society to develop a proper culture
Just as the environmentalists say that you can't really throw things away because there is no "away", increasingly there is no "rest of society" that exists completely outside online culture. Especially in the year we've all been quarantined inside.
That point is sort-of pointless. It would be fair to boil a lot of the argument there down to "People don't act randomly. They have reasons for what they are doing. I think their reasons are bad".
I can't argue with that, but the alternatives are worse. If you centralise power, sooner or later the advertising exec gets control of the powerful body, and now you can't choose to resist even if you can see that what is happening is bad.
A key part of the free market is precisely that the world is actually quite predictable. The fact that people sometimes make predictably bad choices doesn't especially undermine the free market. The market doesn't require people make good choices, it just redirects resources to people who make better choices than the average.
You could deploy the same argument in favor of allowing any kind of fraudulent product or service; the full libertarian "caveat emptor" approach. However, not only is this unpopular, prohibiting fraud doesn't in and of itself result in fraudsters taking over the market.
The boundary between overly enthusiastic promises and actual fraud is one that's in different places for different jurisdictions and is constantly at the forefront of litigation as people invent new fraud schemes.
> "People don't act randomly. They have reasons for what they are doing. I think their reasons are bad".
Sometimes the reasons are bad in very objective ways.
A goes down to the market to buy a kilogram of apples. Vendor B advertises 1kg of apples for a pound. B weighs them out on his scales. A hands over his pound. When he gets home, he finds he has 800g of apples. Was A's reason for purchasing the apples from B good or bad?
(laws against short measure have been a thing since at least Roman times; I believe they also had a few product quality laws, although the canonical example there is always German beer law from 1516)
> It's as if there's a line between actively lying ("Studies show that men who buy this car will find many many women attracted to them") and just putting it there suggestively
I doubt pretty girls in ads are actually supposed to mean the goods they advertise make a man more attractive. Who (except teens) would believe that, consciously or subconsciously? You just get attracted yourself and that's enough, simply seeing a pretty girl fires the hormones and neurotransmitters making you feel good about what she advertises, no semantic load necessary.
Nobody would believe drinking Coke will make you live free and happy. Still, their marketing messaging is about connecting Coke with a youthful, carefree, liberating experience: friends, fun.
You wouldn't think eating candy bars will make you athletic. But they market it with athletes and active people playing soccer etc.
They don't show some fat dude sitting in the dark in front of the computer shoving candy and chips and coke in his face and becoming diabetic.
The associations can be immediate if repeated often enough. When picking a product you don't reason "okay this will make my life X", but you feel a familiarity, a draw, a positive emotion. Not necessarily consciously. But especially after a stressful day, in the supermarket you will be more prone to emotional, autopilot handling.
> I doubt pretty girls in ads are actually supposed to mean the goods they advertise make a man more attractive. Who (except teens) would believe that, consciously or subconsciously?
How about chewing gum and mint ads? Many try for messages that are essentially "don't get cock-blocked by your bad breath" or something similar. And they work.
https://pdfs.semanticscholar.org/c86a/142c0ae2d70d95f6292af6...
I studied "strategic communication" in college (a mix of PR, advertising, marketing, whatever) and I distinctly remember a mentor saying, "When you're selling a drill, you're not selling a drill. You're selling the hole."
The point is, people don't buy things for their own sake. They buy them for what they think the thing can do for them. Any car will get you to point B, but some people will go for a cheap, utilitarian car because they want to save money (or maybe the utilitarian aesthetic is their thing), while others will go for the flashy car for that feeling of sex appeal (even if women don't suddenly fall all over a new car owner, the feeling of confidence is a social benefit to the buyer, even if that's not worth the asking price).
More generally, though, all communication has this sort of color to it. We see anti-privacy legislation being touted as protecting children and fighting crime. Small talk is not really about sports. So I don't think it's realistic to legislate persuasion. I would probably be behind making formal logic a part of public school curriculum, though, so people are better equipped to discern for themselves when persuasion they're exposed to is nonsense (among other benefits).
I think there are many uncorroborated assertions about bias.
Associating emotions with a product, or getting attention, is an old trick that existed before modern marketing. The smoking example can be generalized to fashion, and there are mechanisms like peer pressure involved that certainly incentivize consumption. But they are not overriding your will. Addictions can, but even then I would say there is still free will involved.
> but I sense this is more of a grey zone than most people are willing to admit.
I do believe ads affect me, but the scope is limited, and regulation would be more draconian than my natural inclination to give products an unjust bonus for boobs. However, it may not work, because I come to the conclusion that the product must be lacking if you try to sell it with dirty tricks.
> Isn't this the same as the authoritarian nightmares that we've been pointing fingers at?
No, because people can make up their own minds; otherwise they would have a lot of cars by now. They lose that, however, if you regulate too excessively, because then the decision is already made for you. There are sensible reasons for regulation, so it is a gray area, but I don't see it as helpful here.
Empirical counter evidence in favor of my free will for any practical purpose: There is no irresistible ad.
Isn't what you're calling free will just the influence of your previous experiences? Which would mean that there should theoretically be a way for powerful advertisers to prevent people from obtaining such experiences in the first place.
The wider culture is supposed to counterbalance this.
Taking things to the extreme: maybe people are by default violent and will kill a few people in their lifetime to get their way. But the culture counterbalances this.
Likewise everyone old enough to have a sex drive has seen enough publicity with barely-clad attractive women to know the ruse. Of course, people are still vulnerable to these things (cf. onlyfans, etc.) but at some point you have to establish that the rules are clear and if people still want to indulge the fantasy of a beautiful woman by buying a car, well, whatever.
Sure, we still put backstops on this for drugs and gambling. But the use of sex as a tool for attention grabbing is way too wide a net to cast.
Mild amusement can be had from the example cases in the sex section: https://www.asa.org.uk/advice-online/offence-sexism.html
(I note that vaunted free speech zone America is actually more restrictive in what sexual material can be shown on TV or even marketed; there isn't a US "Babestation" TV channel, is there?)
If you go that far, then the entire legitimacy of legal systems, judicial systems, penal systems, the world order, etc. falls apart. Free will does not really exist, but what is the alternative to pretending that it does?
Clearly not, for the simple fact that they do it anyway. It works, else you wouldn't see it in ads.
Just today, in fact, I read this article: https://papers.ssrn.com/sol3/papers.cfm?abstract_id=3697726
Recent scholarship engaging with the impact of digital technology on contract law has suggested that practitioners and researchers will need to give proper consideration to the ‘role of equitable remedies in the context of contracts drafted in whole or part in executable code’. More generally, a raft of challenges stem from the increasingly active role digital technologies play in contractual relations.
Faced with these challenges, instinct may dictate attempting to tame the technological beast with a variety of regulatory responses spanning the full spectrum of possibilities, from legal requirements to voluntary codes of conduct or standards. While regulatory action may be a priority from a public policy perspective, the seeming trustworthiness of algorithms, and the consequent reliance placed on them by contracting parties carry the inherent risk of lack of autonomy and fully‐informed independent decision‐making that, in Australia at least, is addressed by equity through the doctrine of undue influence.
This article explores whether this traditional doctrine can adapt to operate alongside regulation in dealing with some of the challenges presented by algorithmic contracting. Specifically, it focuses on those contracts where algorithms play an active role not only in the execution, but in the formation of the contract, as these are the “algorithmic contracts” that challenge the very fundamentals of contract law.
Cognitive biases are a real problem for free markets, but the question isn't whether free markets are perfect, it's how they compare to the alternatives.
People can make poor choices because of cognitive biases, or they can have choices made for them by other people with cognitive biases. The other people can be unelected, unaccountable leaders, or leaders that are chosen by voters, and politics seems to be where cognitive biases are worst.
In general, I would rather suffer for my own cognitive biases than the biases of elected officials and voters, but that's not to say I advocate for free markets in every scenario, because there is a lot more to consider than individual choice in that discussion.
With freedom, humans can control for this by learning from it. They can see that cars don't necessarily get you women, despite what the ads say. This can also be indirect learning, by someone pointing it out.
If you start making certain things illegal to say, you can use that against people. For example, given enough money for lawyers, you can sue people for saying that "these cars don't get you women" using the same anti-free speech regulations by finding holes and exceptions in them. History has shown that lawyers can pull this off.
Nonsense. There are plenty of things you are not allowed to say, and nobody is using them as "loopholes". False advertising is a crime already, for one.
I saw a talk a while ago that argued the sexist marketing of home computers toward boys is likely responsible for the drop in women becoming computer programmers during the 90s.
My family is living proof of this. I have 2 sisters, I was a boy, and yet I am the only programmer. GW-BASIC's evil plan with its marketing oriented towards me, its black-and-white text, and its 10 20 30 LIST instructions, or the white reference book of the Amstrad 8086 in English which, to a French boy of 7, was incredibly opaque (and which I still read and memorized by heart), those were all directed towards me.
Or perhaps it is time to admit that talks which draw such inferences are just talks, and that while my sisters were asking dad to draw horses, I was asking dad to draw me a powerpoint, because things that plug into each other fascinated me.
If sex creates differences in the body, it is only stunning ideology that draws us to affirm it creates no difference on average in the brain, and that everything is socially constr... No, I'm not James Damore.
Of course I don't think blandifying ads is the way to go. Making ads attractive is a cultural phenomenon.
An implicit suggestion of fame, class, social status is fine. It's part of the game. It's an implicit suggestion in the end.
But everything has a limit. What might be acceptable in an ad for adults might not be so acceptable in an ad for teenagers or kids.
More worrying is propaganda, which is misleading/false in non-obvious ways and not explicitly an ad.
Something like Lynx (called Axe here) is probably a great example for a different reason: it would not exist were it not for the very advertising that promotes it. The aversion to natural body odour (that is, not a 'sweaty' smell) is created by advertising, same as the aversion to 'bad breath'.
Banning advertising would kill these products, because they add no value except to solve the problem they created.
People should affirmatively seek out sales materials, not be inadvertently subjected to them.
Education is the best mitigation. Knowing that we have biases helps us recognize when it is happening. Most of the pushback to advertising that I've seen uses either asceticism or anti-consumerism language instead of cutting to the heart of what's going on.
> If we know that humans have all sorts of cognitive biases, how come it's ok to use that fact while at the same time we insist there's some kind of free market?
Of course humans aren't perfectly rational but I'd argue it's still a good assumption to make, as a society, because the alternative leads to a very disturbing path. Ultimately, assuming individuals don't really know what's best for themselves can be used to justify all kinds of authoritarian measures from limiting speech to straight up enslavement.
The new thing, and the key point here, is scale.
You can discuss persuasion or cognitive bias away until everyone gets an aneurysm, sure. An example: you can't not communicate, so even not persuading is persuading. What if a presidential candidate just didn't give a scheduled speech? How do you communicate information objectively? Casual language creates bias; so does scientific language, simple language, passive language, active language. "Cognitive Bias" might as well be called "Cognition", since it is just how the brain works. You have to think "tree" immediately when you see one, even before you have validated that all the leaves are real and that the whole thing is not a projection on a transparent screen. Otherwise you can't function.
But: Big tech throws us into a situation where a small group of people influences our perception on a massive scale. Facebook changes a sentence on their homepage and a billion people read it. Youtube raises some parameter (yeah, I know that's not how AI works) by 0.01 and the political opinion about the Grenfell Tower disaster changes ever so slightly - for 30 million people. Google's filter has a tiny hole and some troll broadcasts wrong medical information about gout to 200k people.
Every time one of these things happens, the world shakes. Dozens die or survive. Demonstrations form and elections swing. Opportunities are wasted and ideas surface.
I am not arrogant enough to propose an easy solution to this, and I don't think there is one. Just be aware that "I can always go and stab someone" is not a good argument when you are discussing a fully automated drone swarm with kill authority.
I agree scale is very important and it still has too little visibility. Everyone knows the quote "with great power there must also come great responsibility". Well, scale is indeed power. An action applied to one person might not be a big deal. But when the action is applied to or affects thousands, it should require much more consideration and carry much more responsibility. This argument has a lot of applicability in many other areas too.
Twitter CEO said we can give them the right to speak, but that doesn't mean the right to go viral. I think "viral" is an apt term - these are mind viruses that are 95% harmful.
To your point about scale, once a tweak is made and X or Y "news story" is let loose in the wild, it gets amplified. The scale of the impact isn't linear, it's something more exponential.
I keep coming back to my default idea here - that PII needs to be seen as legally owned by me and only licensed to others for use. The default legal framework should include medical / epidemiology research as freely licensed and commercial use as ... well let's just say i think my license conditions will be expensive.
If an advertising channel then shows ads that breach the license, they are liable. A fairly simple licensing process will come into play, and we can find new ways to fund things.
Edit: Yes, I do get a lot of the issues around regulation of tech - it's almost like saying regulation of everyday life, which is really broad. And the different bodies and approaches will also need to be broad. But I am a believer in markets and individual decision making, and I also believe that personal information has in the past few years become a genuine new ... commodity? And we need to raise that commodity into visibility - to be able to put prices on it openly. Maybe it won't work; maybe privacy is like a human right and can only be dealt with at that level - but I don't think so - privacy to me seems ephemeral and usually poorly defined. Longer discussion to be had.
If you're a US citizen with a Facebook account, your value to Facebook is about $200 per year. That's the average, including children, seniors, etc. If you're a tech worker in your 30s, it's probably 3x as much.
If you want to use Facebook without targeted advertising, you need to either convince them that they don't need all that money, or pay it yourself. And that's just Facebook.
In other words: the internet economy without targeted ads will be a very different place. Facebook will survive. imgurl / snopes / fivethirtyeight? Unlikely...
> If you want to use Facebook without targeted advertising, you need to either convince them that they don't need all that money, or pay it yourself. And that's just Facebook.
I hate this argument. It tries to find a solution that sustains the status quo in the face of change. But that's illogical, because if there is a change, then the status quo is going to change too.
The only way Facebook is going to stop doing targeted advertising is if they're forced to, likely due to new regulations. If that happens, Facebook's entire business model will collapse, so they'll either have to make massive fundamental changes to how they make money, or they'll die and be taken over by a competitor.
In either case, users don't "owe" Facebook anything. Consumers are the lifeblood of the economy, and thinking about how consumers should change their behaviors to sustain a corporation is backwards thinking.
"Without ads" and "without targeted ads" are different things. There's an "ad-centrist" view whereby adverts should be OK so long as they're chosen to go with the content on the page and the community that the site aims at - you know, like all newspapers and magazines used to do.
I'm not sure how imgurl survives given the abuse workload; there is an inevitable death spiral of image hosting sites as they get cluttered by more and more adverts of worse and worse quality.
Stackexchange handles this very differently, too.
And what about all the sales that are made through Facebook ads? They support millions of jobs around the world. A less efficient ad machine would mean worse sales or higher marketing costs or both. Some of this excess goes directly to profits, but a good chunk of it supports employment.
Imagine if we got rid of all advertising and what that would do to demand. I know the discussion here is centered on targeted ads, but I think there would be significant economic effects if their incidence was decreased.
The global consumer economy runs on all these extra desires created by ads, and ads do fulfil the function of informing customers.
If I have to see ads (and I really don't), I'm not against tracking ads per se, but against the long-term storage of the tracking data. Given current technology and governance, there's no way to separate the two.
In any case, there are significant economic effects to stopping tracking ads.
>If you're a tech worker in your 30s, it's probably 3x as much.
I wouldn't be surprised if it's much more than that. Most businesses' sales follow the Pareto principle: 80% of their revenue comes from 20% of their customers.
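To make that concrete, here's a rough back-of-the-envelope sketch (my own illustration, assuming the ~$200/year average quoted upthread and a strict 80/20 split, which real revenue distributions will only approximate):

    # Back-of-the-envelope: how an 80/20 revenue split skews per-user value.
    # Assumes the ~$200/year average ARPU figure quoted upthread; purely illustrative.
    average_arpu = 200  # $/US user/year, average across all users

    top_value = average_arpu * 0.80 / 0.20     # the 20% of users carrying 80% of revenue
    bottom_value = average_arpu * 0.20 / 0.80  # the remaining 80% of users

    print(top_value, bottom_value)  # 800.0 50.0
    # A top-quintile user would be worth ~4x the average (and ~16x a bottom-quintile
    # user), so "3x the average" for a tech worker in their 30s looks conservative.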
I always wondered what's stopping a rival to Facebook that offers users a share of the money they are generating. Besides the scale barrier to entry, of course. Generally speaking, people like money and will switch to products and services that put money in their pocket.
I think the challenge is that it doesn't solve a user-oriented problem.
Most users care about the enjoyment/utility/stimulation they get out of a social network. Some users care about their privacy. Getting an extra $30-60/year does not rank high on user priorities.
While Facebook might make $200 per US individual, their profit margin is only 30%.
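For a rough sense of what that leaves on the table (again my own arithmetic, using only the $200 ARPU and 30% margin figures mentioned in this thread):

    # Rough ceiling on what a revenue-sharing Facebook rival could pay per user,
    # assuming the ~$200/year ARPU and ~30% margin figures from this thread.
    arpu = 200           # $/US user/year
    profit_margin = 0.30

    max_payout_year = arpu * profit_margin   # $60/year if ALL profit were paid out
    max_payout_month = max_payout_year / 12  # ~$5/month

    print(max_payout_year, round(max_payout_month, 2))  # 60.0 5.0
    # Roughly in line with the $30-60/year range above -- probably not enough
    # to outweigh the network effect for most users.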
>I always wondered what's stopping a rival to Facebook that offers users a share of the money they are generating.
They sort of do. The money is converted into the utility of the product to each user. Technically, they could lower the utility of Facebook and pay you the rest.
I mean, you already have that via the Facebook EULA for example. You give them a license to use your data however they want and in return you get access to Facebook. You can not accept their terms and in return they can not allow you to use Facebook.
Would these limits apply to essays written by people who have for good reason cultivated a following of people generally interested in what they write and positively inclined towards agreement with them?
Maybe this essay should be forced to be presented on essays.com without attribution and compete on the ideas within rather than the implied authority of the author.
(I happen to agree with a lot of the content, but couldn’t completely compartmentalize that this was a persuasive essay against persuasion.)
There are two hilarious subtexts that always accompany these sorts of arguments:
-It's okay to use persuasive technology to push political orientations I agree with
-We want to defend democracy but we implicitly agree as commentators above the fray that people can't be trusted to make the right decisions and have to be manipulated towards our preferred orientation
This is only funny because it's so close to the truth.
I sometimes ask myself: in 20 years, will we begin to see class action lawsuits directed at technology platforms that use notification-triggered dopamine releases to grow engagement?
I thought about it further and lean towards agreement.
You know what makes it such a subjective question? The cigarette supplier sells a substance with an addictive chemical inside of it. Notifications are not a substance, the drug response is your own chemical (dopamine). Still an interesting question though.