If I'm not misremembering, there's an interesting section in Dawkins' The Selfish Gene about cooperative grooming behavior (I think it was in some sort of waterfowl) and how the birds deal with cheating behavior (individuals that accept grooming without reciprocating). My takeaway was roughly:
Any exchange of value based on trust is exploitable. The simplest cure is excluding the exploiter, but this doesn't scale well. The exploiter can skate on anonymity if the community is large enough to continually prey on someone new. Spreading news of an exploiter's behavior to others can greatly improve how well this scales, but this behavior also requires trust.
I think the more direct problem is with scale, and that the internet is at the nexus of many trust issues only because it has ramped up the scale and scope of many interactions.
I'm not super optimistic on solving this intrinsic problem of trust in social exchanges, but I do see this framing as a silver lining. It seems at least plausible to iterate offline at significantly smaller scales on mechanisms for building and maintaining trust--and rectifying its breaches--in ways that do actually scale.
It's true that "The Selfish Gene" has been a science-based classic of evolutionary biology for over 40 years. But it's also true that in those 40 years a lot of studies have taken aim at the central arguments of the book and, in my opinion, cast serious doubt on the accuracy of the book's conclusions (or perhaps limit the scope of those views to being a projection of Western culture - but not of all cultures on Earth, particularly not Eastern philosophy, and not the animal kingdom at large).
If you have never read any of these counterpoints, but find that the conclusions of The Selfish Gene have shaped your world view, please consider reading some of them and seeing if they persuade you toward new perspectives.
A particularly thorough treatment of these counterpoints is Matthieu Ricard's book:
Altruism: The Power of Compassion to Change Yourself and the World
The book you're recommending seems to be about psychological and spiritual relationships between human beings, from what I can gather from a quick skim on Amazon. But not a single one of the reviews actually describes a single argument the book is making (strange), so it's hard to judge.
Dawkins' "The Selfish Gene" is a scientific work about evolutionary selection at the gene level -- I don't recall him touching on psychology at all. (He provides evolutionary explanations for certain altruistic behaviors, but I don't recall him even starting on how they might be expressed via a psychological mechanism or at any conscious level.)
I can't quite imagine what either has to do with the other -- they seem to be such different topics. Or what Dawkins' work has to do with "western culture", or culture at all. At heart it's a quite mathematical/statistical argument.
I'm curious, what exactly do you see being refuted?
Any chance you can post some of these counterpoints?
At first glance I’m not sure how a book on altruism/compassion refutes the gene-centred view of evolution, which holds that evolution is best viewed as acting on genes and that selection at the level of organisms or populations almost never overrides selection based on genes.
I think as far back as 1902, some of these topics were treated scientifically by Kropotkin in his "Mutual Aid: A Factor of Evolution" [0]. His book was based on his observations of animal behaviour in Siberia.

[0]: http://www.gutenberg.org/ebooks/4341
I think it is just a formalization of the thoughts of egoists of the 19th century.
Trust is a resource, and it is (only) used up if exploited. There are countless examples of how this has happened on the net. Remember the latest disclaimer you agreed to, and consequently had your data stolen or abused? Don't even get me started on penis enlargement spam.
Then additionally, people pretending to have identified the abusers very frequently become the largest abusers themselves.
Lastly, trust needs room to grow, so trying to enforce it through surveillance and advertising will only strengthen the individual's ego. So...
> can greatly improve how well this scales, but this behavior also requires trust.
can backfire immensely. People have already tried and failed.
There is no intrinsic problem with trust. On the net you are potentially connected to everyone on the planet. Nobody can claim the trust of everyone. And I firmly believe achieving this is the wrong goal.
People need to simply not let their trust be abused, and that is very possible. For example, not letting ad companies exploit your biases, or not believing the next random spam mail you get.
Anonymity is a tool to shield you from abuse. Not the only one, but for non-public personas, nobody has found something better yet.
If you keep looking for new predators, everybody will look like one at some point. Because you have let your trust be abused and have none left.
> There is no intrinsic problem with trust. On the net you are potentially connected to everyone on the planet. Nobody can claim the trust of everyone. And I firmly believe achieving this is the wrong goal.
> People need to simply not let their trust be abused, and that is very possible. For example, not letting ad companies exploit your biases, or not believing the next random spam mail you get.
> Anonymity is a tool to shield you from abuse. Not the only one, but for non-public personas, nobody has found something better yet.
I'm not sure it's possible to distinguish the world you describe from the one we already live in.
Interestingly, one could view China's social credit score system as a way of dealing with this trust issue.
Note well: I am not advocating either a social credit score system or China's use of it. Instead, I am warning that any national-level attempt to solve the trust issue will likely be open to the same abuses as China's system. Specifically, once it becomes a means of political control, it ceases to be useful as a system for trustworthiness.
The obvious answer is a robust welfare state. I mean, it seems pretty clear by now that when inequality goes up, social stability decreases, a major overhaul follows, and then comes a generation of babysitting it... like, just stop pandering to the meme that we can't afford universal healthcare and college education. Cause we also cannot afford leashing the next generation to soon-to-be-dead men's gambling debts.
But that effectively ends the point of aristocracy, the meme that we must cater to all these rugged individuals living off grandpa's old money. So that'll never fly.
Violent revolt it is! Front row to the apocalypse! /s
Maybe I’m reading this wrong, but isn’t this just describing an echo chamber? Sure that’s nice with concepts like justice, but if the internet shows anything it’s that we thought we agreed on things far more than we actually do.
But I think there's a risk of sending a lot of bright, productive people off to tilt at technological/interface/regulatory windmills in a way that leaves fundamental trust issues out of scope.
Even if Amazon, Twitter, Facebook, etc. could wave a magic wand and remove every fake review, counterfeit product, scam, personal threat, or piece of false/fraudulent media, there would still be a trust issue. We still have to trust that they did what they said, that whatever disappeared was correctly identified, and that they didn't wrongly remove many legitimate items in the process.
Even if the magic wand that makes these decisions has the utmost ethical and logical integrity, there will still be some mix of skeptics, cynics, malign actors, competitors, bots, etc. who chum the waters with accusations to the contrary. We'd still have to choose whether or not to trust this process and the actors behind it.
So, I think it might be productive to focus on some smaller questions first. To be semi-arbitrary: can we find a protocol for reliably building a 50-person trust network with a limited scope/focus (maybe identifying reliable providers of a single service), where each participant knows only a small fraction of the network, and which is capable both of meeting its purpose and of detecting and reforming or ejecting exploiters? If the first is tractable, can you expand the scope/focus of the networks and retain these properties? Can you compose a higher-order network that retains these properties?
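To make that concrete, here's a minimal sketch of the kind of simulation one could start iterating on, assuming a toy model where members score each exchange, gossip negative scores to a few known peers, and the group ejects anyone whose aggregate reputation falls below a floor. Every rule, payoff, and threshold here is a made-up assumption for illustration, not a proposed protocol:

    import random

    N, PEERS, ROUNDS, EJECT_BELOW = 50, 5, 400, -3.0

    class Member:
        def __init__(self, name, cheats=False):
            self.name, self.cheats = name, cheats
            self.opinion = {}  # name -> locally accumulated reputation score

        def rate(self, other):
            # Score one trust-based exchange: cheaters take without reciprocating.
            delta = -1.0 if other.cheats else 0.5
            self.opinion[other.name] = self.opinion.get(other.name, 0.0) + delta

        def gossip_to(self, peers):
            # Spread only negative opinions, at half weight; note that this
            # step is itself a trust channel and could be lied through.
            for peer in peers:
                for name, score in self.opinion.items():
                    if score < 0:
                        peer.opinion[name] = peer.opinion.get(name, 0.0) + 0.5 * score

    members = [Member(f"m{i}", cheats=(i == 0)) for i in range(N)]
    for _ in range(ROUNDS):
        a, b = random.sample(members, 2)  # one exchange between two members
        a.rate(b); b.rate(a)
        a.gossip_to(random.sample(members, PEERS))

    # Ejection by aggregate reputation across everyone's local opinions.
    reputation = {}
    for m in members:
        for name, score in m.opinion.items():
            reputation[name] = reputation.get(name, 0.0) + score
    print("ejected:", sorted(n for n, s in reputation.items() if s < EJECT_BELOW))

Even in this toy, the failure mode mentioned up-thread is visible: the gossip step itself requires trust, so a member who spreads false negative scores can weaponize the ejection rule.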
This looks more like people new to the internet discovering what everyone else already knew: You can't trust strangers on the internet. You never could.
If anything the internet nowadays is a more regulated, safer place than it used to be. The "wild west" internet of old now only endures in certain corners. It's no longer synonymous with the thing itself.
Posting with your full name instead of anonymously or pseudonymously is now the norm for many people. This changed how people interact online, but it also changed their expectations.
A full name, however, doesn't make a person or their opinions any more real than a pseudonym does. People will lie to you under a fake name just as shamelessly as they would hiding behind a pseudonym.
I think it's more that the perception of it is being twisted. The media portrays the way the internet has worked since time immemorial as something that's suddenly gotten a lot worse, claims that all trolls are on par with criminals, and insists that we need to take drastic measures.
The people who write those kinds of articles are mostly just your average Joe who uses the internet exclusively to browse Facebook and the like. Of course this gives them a twisted understanding of the medium.
And then there are those who are just lying, I guess (duh!). There's plenty of reason for that with the whole fake-news hysteria. If you don't want to risk ending up on the wrong end of that debate, your article is already written for you. It's risky to defend the free(-er) internet nowadays, or even just to refrain from attacking it. Further, there's the fact that "a war on fake news" might give the established media an edge, but I'm unconvinced any reputable journalist would consciously consider that when deciding on the tone of his article.
In any case you can also find plenty of articles displaying a better understanding of the matter. It's not like there's some secret global conspiracy against the free internet.
This goes beyond trust, I think. The social structures of Internet communities are very different from those in real life, and I'm highly disturbed when people try to transplant IRL social norms to the Internet thoughtlessly. This is where we get bone-headed ideas like the real-names policy for "the Internet".
Now, I am not discounting that there is actually toxic, horrific harassment that goes on the Internet. Doxxing and things like it are terrible and should never happen.
Rather, I'm arguing that people need to acquire a sort of "street smarts" when navigating the Internet. Much like when you travel to a foreign country, you may expect a different culture with different norms, so it is the same with the Internet. What's rude in one culture is simply social convention in another. What is casual conversation in one is taboo in another. I spent a lot of time on the Internet growing up and I feel like I have been inoculated against the worst parts of the Internet, but at times it feels like many people haven't.
A good example of this is how a lot of people and media take 4chan posts at face value, without realizing that a lot of it is completely self-aware and that the worst posts are almost always a game of who can post the most needlessly offensive thing possible. A lot of 4chan is an exercise in communication with no names and no filter. And yet some of my most informative conversations have been on 4chan, precisely because there is no politeness filter and posters can be as devastatingly critical as they want. But a reader also needs to learn how to tune in and out, discount accordingly, and read between the lines to get the most out of a 4chan conversation; otherwise you simply come away with the idea that the community hates everything and everyone.
For a more tame example, I follow a couple of Twitch streamers. For streamer A, most communication with their chat is saccharine and supportive. But for streamer B, the chat makes fun of and insults him the whole way, especially when he makes a mistake, and the streamer gives as good as he gets and makes fun of the chat the whole time too. And this is normal and fun engagement for all parties. From afar, streamer B's community looks extremely toxic, but that's the furthest thing from the truth. (What is abhorrent is when you take streamer B's chat behavior into streamer A's chat, and that is frowned upon by all parties.)
I believe that Internet activity should be diverse. There should be places where communication is an extension of real life (Facebook, email), places of semi-anonymous and professional communication (Hacker News, certain subreddits/Slack communities/GitHub), and then places where you should be able to go hog wild with whatever you want to say (fun subreddits, Discord).
Twitter is unfortunately one of those places that has all of these mixed together, which is why I decided to have multiple Twitter accounts targeting professional/personal hobby communication. From what I hear, kids are already catching on to this idea, with things like public/private Instagram accounts.
The Internet is not all the same, and that's great. The Internet is not just real life, and that's great.
A serious problem with the "let the people live in their own worlds" attitude is that many communities do not stay in their realms. You allude to this a little, but in my opinion it's not a minor problem to be glossed over; it's the core of the trust issue: any online community that doesn't keep itself small and insular (and enforce this with sufficient opsec) is at risk of being invaded and exploited (for lulz, for cash, or for political manipulation).
> A good example of this is how a lot of people and media take 4chan posts at face value, without realizing that a lot of it is completely self-aware
But a lot of it is not, and a fair number of people do become indoctrinated to alt-right ways of thinking through that sort of medium, some of them going as far as murder.
I'm not saying "Ban this sick filth!" but I am increasingly of the opinion that such places shouldn't just be left to fester, because the genuinely twisted do go there, and they recruit, and that has real world consequences.
> This looks more like people new to the internet discovering what everyone else already knew: You can't trust strangers on the internet. You never could.
Like everything, this isn't absolutely true. People meet up with others they've met online all the time, be it for clubs, dating, or commerce (e.g. Craigslist). There is a certain societal expectation for how strangers are treated, and that requires a bit of trust in someone you've never met.
In my mind, focusing on bots is kind of barking up the wrong tree. A lot of fake reviews on Amazon are written by real humans -- my main concern is not figuring out whether a human or a bot wrote a comment, it's figuring out whether or not the comment/review is trustworthy.
If we got rid of all of the bots, that wouldn't make Amazon easier for me to use. I feel like scammers have already figured out that humans themselves are relatively cheap to buy.
Right -- it's ultimately a trust problem, and we're in an unfortunate self-reinforcing feedback loop here.
Bots aside, what we post on the internet (and believe in general) is a function of the information we consume. That information is increasingly consumed through the internet, and the internet (as typically used) is feeding us increasingly poor quality information. (To be consumed, then shared. Recurse...)
We can (and I think do) cope by having different levels of trust for different sorts of internet information. I generally have higher trust in an HN post than an Amazon review, for example. This is useful, but has a dark side: some of the things I (and likely, you) take as a sign to "increase trust" happen to correlate with things that have nothing to do with trust. I trust Amazon reviews more if they have good grammar and spelling, largely because I trust general internet information more when it has good grammar and spelling. But perhaps I shouldn't when e.g. buying a screwdriver (what does writing ability -- and the things it correlates with -- have to do with evaluating a screwdriver?). Coming up with social/political-flavored-information examples is an interesting (and worrying) exercise.
I find it interesting that when something uses proper grammar and spelling, it may appear more trustworthy.
Consider this: a consumer that employs above average grammar and spelling skills to write product reviews may also be more skillful in forming and expressing valuable opinions and assessments of any product.
I think that people are naturally inclined to pick up on those signals, whether the correlation is real or not.
Amazon is weird. I bought a product the other day, and they offered me a $20 coupon (the product only cost $25) if I wrote a five-star review. So I wrote a one-star review about how I don't want to be bribed, how they can't make me be dishonest, and to warn people. But Amazon didn't allow my review. So that sucks, and it makes me not trust any reviews on Amazon.
The article doesn't focus on bots (in fact, it specifically mentions times when humans pretend to be bots). It's really about internet fakery in general eroding trust.
Yep, just expanding on the premise of the article and reinforcing that this is a general trust problem, since a large portion of the current conversation around internet trust does revolve around bots.
This is the most intelligent take on internet trust issues that I've seen. The conclusion is overly glib though: it took centuries for high-trust societies to develop the mechanisms that allow them to function as such, and even small changes can upset that balance (as we've been seeing in the real world over the last couple of years). We shouldn't assume it will be trivial to bootstrap those kinds of institutions on the internet.
People forget that institutions take time to develop. They also forget that high-trust institutions frequently begin with a dictator. The end-point is high-trust/low-coercion, but you can't get there from low-trust/low-coercion. First you have to go through a high-trust/high-coercion state, after which you can gradually taper off the coercive elements as institutions mature and the individual actors get used to the new normal.
This is exactly how the liberal societies of the West evolved. But we don't remember that any more.
This isn't how the United States evolved. Arguably, the success of the experiment still hasn't been fully established, but the major institutions that govern the US system were more or less present right from the beginning.
Careful. Soon you'll be arguing for the dictatorship of the proletariat. "Dictatorship does not necessarily mean the abolition of democracy for the class that exercises the dictatorship over other classes; but it does mean the abolition of democracy (or very material restriction, which is also a form of abolition) for the class over which, or against which, the dictatorship is exercised." -- Lenin.
I'm not being completely serious, but I do find it extremely interesting that this whole idea of high coercion at the start, followed by a gradual easing off as people accept the new culture, is one that has been tried explicitly. Or... at least claimed to have been tried ;-). Marx and Engels made no bones about the authoritarian nature of revolution, and when Lenin and Stalin later tried (presumably) to manage the situation, they found it necessary to maintain that dictatorship in order to combat the ever-present trend toward bourgeoisie and capitalism. But it was all supposed to end eventually.
For me, I see parallels between that and the current air of "It's a scary world out there. Trust your government to take care of you. Give us more powers so that we can make our communities whole." No authoritarian revolution, but a very real reach towards cracking down on bad people in the name of the community. And once utopia (though not a communist one) is reached, the power will simply not be necessary.
Having said all that, it was Confucius who said that if you are lax at the beginning and then become more strict, you will be seen as a tyrant. However, if you are very strict at the beginning and become more liberal over time, people will see you as magnanimous. When I used to teach at a high school, I used that idea and it did, indeed, work very well.
Can we trust the article's claim that it is the internet that is becoming low-trust, or is it reflecting society's trends? Trust has been fading in societies for decades [1], and apparently millennials are the most cynical [2].
First, the internet has opened up, and its prominent platforms (which were always predominantly American-culture oriented) are now global, and thus reflect low-trust cultures as well. Second, Goodhart's law: internet metrics exist mainly to be gamed, which is why they have to keep changing. Reviews worked for a while, when the internet had a very different audience; they no longer work, and that's normal wear and tear. Third, it's time platforms start paying specialists for crucial things like product reviews. Crowdsourcing no longer works. (Which explains why Wikipedia should maintain a very conservative editing policy from now on.)

[1]: https://ourworldindata.org/exports/trust-attitudes-in-the-us...
[2]: https://img.washingtonpost.com/blogs/monkey-cage/files/2014/...
> Third, it's time platforms start paying specialists for crucial things like product reviews
Given that platforms earn a commission on every sale, why should they be trusted to procure specialists' reviews? Maybe they would prefer to hire professional charlatans to boost the reviews on everything, in order to drown out the legitimate opinions of aggrieved customers?
Sure, that's a great recipe for losing your customers. Shops have an interest in having honest reviews, and if they're gaming them, someone else will provide more honest ones.
It is simply one facet of more pervasive deceit in society.
Back in the day, when you placed a long distance call, an operator would come on to ask you what your number was so the call could be billed to you. A single call might be a dollar or a few dollars, which would be ten to several tens of dollars in today's money.
No one would imagine that such a system would work today.
> It is simply one facet of more pervasive deceit in society.
And that's probably rooted in people being fundamentally devalued and taken advantage of.
We are seeing worse income inequality than in The Gilded Age and people act like it's a natural and unavoidable side effect of the existence of tech, the internet, whatever. That's BS. People created these things. If these things exploit people, it's because people designed them to exploit people.
If people want people to be treated better, then people need to stop blaming machines for our social values. Tech merely magnifies those values. It doesn't cause them to exist.
Bill Gates said that automating an efficient system amplifies the efficiency and automating an inefficient system amplifies the inefficiency. I propose that you can similarly amplify whatever underlying social values you have, whether that's something good -- pro education! -- or something bad -- racism and misogyny!
And classism. We are using technology to amplify extractive economic practices, then blaming the robots as if Judgement Day had already arrived and Skynet is now in charge of our lives. (A la the Terminator movies.)
I'd almost rather society just admit it's classist so it can be handled appropriately. Instead we're just in denial, and no policies (for better or worse) will ever go towards what we need. I might sound conservative, but I'm definitely democrat/liberal in most of my values. I just don't want to live near someone who is thuggish and may steal from my house.
And, of course, when the automated billing system we're using now hadn't been developed yet. If asked, I'm sure the telephone company would have preferred not to trust their customers like that.
I'm a bit surprised this article has zero reference to any interdisciplinary work on trust metrics, and stuff like trust propagation algorithms. I'm even more surprised, with the blockchain having become such a buzzword these days, that Bitcoin's proof-of-work approach was not even mentioned, let alone evaluated for effectiveness. And where's the love for Bruce Schneier [0]?
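For readers who haven't seen it, the core of proof-of-work is easy to sketch: trust in any particular sender is replaced by verifiable, costly computation. A minimal illustration in Python (the payload, difficulty, and function names here are illustrative assumptions, not Bitcoin's actual block format):

    import hashlib
    from itertools import count

    def proof_of_work(payload: bytes, difficulty: int = 16) -> int:
        # Find a nonce such that sha256(payload + nonce) has `difficulty`
        # leading zero bits. Producing the nonce is expensive; verifying it
        # takes one hash. That asymmetry lets strangers check that effort
        # was spent without trusting each other.
        target = 1 << (256 - difficulty)
        for nonce in count():
            digest = hashlib.sha256(payload + nonce.to_bytes(8, "big")).digest()
            if int.from_bytes(digest, "big") < target:
                return nonce

    def verify(payload: bytes, nonce: int, difficulty: int = 16) -> bool:
        digest = hashlib.sha256(payload + nonce.to_bytes(8, "big")).digest()
        return int.from_bytes(digest, "big") < (1 << (256 - difficulty))

    nonce = proof_of_work(b"some message")
    print(nonce, verify(b"some message", nonce))  # cheap check of expensive work

Whether that buys trustworthiness (as opposed to mere spam resistance) is exactly the kind of evaluation the article skips.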
I think the article's approach of referencing historic strategies from government and social infrastructure is a bit useless in this context, because of how much the problem changes when scarce information becomes abundant, light-speed communication. I understand that they are trying to make the point that corporate overlords are bad, but maybe it would serve the purposes of the article better to just focus on the variables that matter, now that there are new capacities for transparency and accountability metrics based on some combination of historic behavior, web-of-trust node size, and other situational variables based on the medium -- like users that spend a lot of money, or consistently downvote known bad actors. Forcing users to have skin in the game with strict enforcement of transgressions is of course a reasonably effective, if coercive, strategy as well, and you can take the edge off this authoritarian approach if you pair it with some variant of restorative justice.
My approach is perhaps not rigorously useful, but for my personal conceptions of trust in a world of bad actors, I like looking at strategies from Axelrod's iterated prisoner's dilemma [1]. Tit for Tat is famously a good strategy, and there's also a good strategy where you forgive on multiple cooperations but gradually increase the punishment of defectors to n times for their nth defection [2]. Though I should mention that tribalist collusion with other bad actors is unfortunately a very viable approach as well.

[0]: https://www.schneier.com/books/liars_and_outliers/
[1]: https://axelrod.readthedocs.io
[2]: http://jasss.soc.surrey.ac.uk/20/4/12.html
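For concreteness, here is a self-contained toy version of those two strategies against an unconditional defector, assuming the standard prisoner's dilemma payoffs (T=5, R=3, P=1, S=0). The gradual-punishment player follows the common description of the Gradual strategy from the cited paper (punish with n defections after the opponent's nth defection, then cooperate twice to calm things down); tested implementations live in the axelrod library linked above:

    C, D = "C", "D"
    PAYOFF = {(C, C): (3, 3), (C, D): (0, 5), (D, C): (5, 0), (D, D): (1, 1)}

    def tit_for_tat(my_hist, their_hist):
        # Cooperate first, then mirror the opponent's last move.
        return their_hist[-1] if their_hist else C

    def always_defect(my_hist, their_hist):
        return D

    def make_gradual():
        # Simplified: new defections are only counted outside a punishment run.
        state = {"defections_seen": 0, "queue": []}
        def gradual(my_hist, their_hist):
            if state["queue"]:
                return state["queue"].pop(0)
            if their_hist and their_hist[-1] == D:
                state["defections_seen"] += 1
                n = state["defections_seen"]
                # Rest of the n-long punishment, then two calming cooperations.
                state["queue"] = [D] * (n - 1) + [C, C]
                return D
            return C
        return gradual

    def play(strat_a, strat_b, rounds=50):
        hist_a, hist_b, score_a, score_b = [], [], 0, 0
        for _ in range(rounds):
            move_a = strat_a(hist_a, hist_b)
            move_b = strat_b(hist_b, hist_a)
            pa, pb = PAYOFF[(move_a, move_b)]
            hist_a.append(move_a); hist_b.append(move_b)
            score_a += pa; score_b += pb
        return score_a, score_b

    print("TitForTat vs Defector:", play(tit_for_tat, always_defect))
    print("Gradual   vs Defector:", play(make_gradual(), always_defect))

Against a pure defector both bounded punishers hold their own; the interesting cases (including the tribal-collusion problem mentioned above) show up in round-robin tournaments, which the axelrod library automates.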
It's a totally self-inflicted wound too. Clickbait, outrage bait, blatant political spin, stealthy retractions, journalists on social media starting mobs...
Bots do not even register in most people's minds compared to that.
What you see is the clickbait. What you don't see are the structural changes to the media landscape over the past 20 years that led to this.
Newspapers per 100 million people fell from 1,200 (in 1945) to 400 (in 2014). This is from a Brookings study cited in a Wikipedia article on the topic [0]. In 2013, the Chicago Sun-Times laid off all of its photographers and tasked journalists with taking photos as well as doing the research and writing [1]. How would the quality of your work be affected if you had to do the job of two people?
The classified ads business is dead, and subscriptions have been declining for years because "news on the Internet is free". The only "media" that makes serious money is talk radio, which isn't journalism so much as diatribes of political invective.
As it turns out, that's what people are willing to pay for, or at least sit through ads for. If anything, "the media" is giving the people what they want.

[0]: https://en.wikipedia.org/wiki/Decline_of_newspapers#Performa...
[1]: https://www.nytimes.com/2013/06/01/business/media/chicago-su...
The idea that the internet transformed media from oracles of truth into professional manipulators turns the whole situation inside out. The fact that people used to trust media more is not an indicator that media used to tell more truth in the past. Journalists lived in a high tower, pretty much unreachable by the average reader, so producing rubbish or being manipulative was a billion times easier. In the past only an important journalist was able to stand against another important journalist, and even that was a slow, inconclusive pushing match. Nowadays, bullshit and incompetence can be revealed in hours, and even small mistakes are publicly noted. Everybody can be media now. Naturally, journalists have lost their semi-god status. But it's important to understand that they weren't semi-gods before; it's just that you used to look up at them. And now you don't.
It's not just media, but the education system as well. More educated people are significantly more, not less, likely to fail intellectual Turing tests about people with opposing views. I.e., far from increasing openness to experience, education itself is functioning much like indoctrination into a fundamentalist religion! The effect starts already at the high-school level and becomes worse and worse as educational attainment rises:
What that article tells me is that people say they belong to teams they don't actually belong to, and that very few people actually do. Seems to me to be a problem with the teams.
Perhaps it's more a reflection of the incentives at work? How do you drive engagement with your content? Clearly 'clickbait' is (was?) successful at driving traffic, otherwise it wouldn't be called clickbait. Polarization is used successfully by other 'non-political' sites such as YouTube to drive engagement statistics (and hence ad impressions), and so forth. If you're trying to stay afloat in a competitive media environment, what would you do?
There is no mass incentive for thought-out, contemplative, long-form journalism. In my opinion, we have dug our own grave - the truth is that the primitive, animal parts of our brain vastly overpower the analytical parts of our brain, and so your 'outrage bait' is a news (or tech) executive's bonus for the year. If the only metric by which we measure anything is $$$, then, well...
Do you think a group of journalists selling verified, accurate, non-clickbait articles based on data and first-hand accounts would be able to generate enough revenue to feed the journalists' children and send them to college?
That's part of the issue; there is no need for commercial first-hand accounts anymore. If someone on social media is talking about a news event and provides a video documenting their presence, that is about as verified and accurate as you can hope for.
Maybe not, but it's still a problem. It's the same in Spain: most media is hard to trust on basically anything. They even adhere to stupid projects like "The Trust Project" and spin off fact-checking brands, but none of that has solved the problem. It's still the same people claiming that, oh no, this time you can trust us!
I even experienced being the target of a report (well, the company I work for), and they did an awful job. I felt that the story they made up was only tangentially related to reality.
> It's a totally self-inflicted wound too. Clickbait, outrage bait, blatant political spin, stealthy retractions, journalists on social media starting mobs...
The media isn't one unified conglomerate. It's like equating the entire tech industry to just SV gig economy startups.
Ideas follow from reality. Supply and demand would dictate that infinite supply should drive prices to zero. Isn't that what we're seeing?
No, it's really not. In a nutshell, the root cause is the currently pervasive idea that good writing should be available completely for free.

The other problem is that most people can't tell good writing apart from bad, but that problem is far older than all technology (apart from writing itself, of course).
The rot had set in long before they were giving away free content on the web. Before then, people were paying for the distribution and not the content; the web destroyed the ability to profit off the distribution.
It's unfortunate, because while there are many bad actors, there are some outlets that don't employ these tactics that get painted with the same brush. I'm not sure how to make that better.
> Altruism: The Power of Compassion to Change Yourself and the World
Aware that I might be jumping to conclusions but that doesn't sound like the title of an objective, scientific text.
> how the birds deal with cheating behavior (individuals that accept grooming without reciprocating)

The name of the concept you're looking for is Evolutionarily Stable Strategy [1], often abbreviated to just ESS.
[1] https://en.m.wikipedia.org/wiki/Evolutionarily_stable_strate...
I was thinking "money screws things up" and its corollary "free screws things up", then read your comment. It gets to the root of things.
With trust, trade is unlimited, so I wonder how to architect things to stand up to the problems we have.
> The "wild west" internet of old now only endures in certain corners.

And honestly, it was better that way. The internet was built to be public space.
https://www.washingtonexaminer.com/washington-secrets/trust...
...
> Google Made $4.7 Billion From the News Industry in 2018, Study Says
https://www.nytimes.com/2019/06/09/business/media/google-new...
Hmm
https://www.theatlantic.com/ideas/archive/2019/06/republican...
If this doesn't scare the s--t out of you, I don't know what would.