- Contradictory facts often shouldn't change beliefs, because it is extremely rare for a single fact in isolation to undermine a belief. If you believe in climate change and encounter a situation where a group of scientists were proven to have falsified data in a paper on climate change, that really isn't enough information to change your belief in climate change, because the evidence for climate change is much larger than any single paper. It's only after reviewing a lot of facts on both sides of an issue that you can know enough to change your belief about something.
- The facts we're exposed to today are often extremely unrepresentative of the larger body of relevant facts. Say what you want about the previous era of corporate-controlled news media; at least the journalists in that era tried to present the relevant facts to the viewer. The facts you are exposed to today are usually decided by an algorithm that is trying to optimize for engagement. And the people creating the content ("facts") that you see are usually extremely motivated/biased participants. There is zero effort by the algorithms or the content creators to present a reasonably representative set of facts on both sides of an issue.
I remember reading an article on one of the classic rationalist blogs (but they write SO MUCH I can't possibly find it) describing something like "rational epistemic skepticism" – or maybe a better term I can't recall either. (As noted below: "Epistemic learned helplessness")
The basic idea: an average person can easily be intellectually overwhelmed by a clever person (maybe the person is smarter, or more educated, or maybe they just studied up on a subject a lot). They basically know this... and also know that it's not because the clever person is always right. Because there's lots of these people, and not every clever person thinks the same thing, so they obviously can't all be right. But the average person (average with respect to whatever subject) is still rational and isn't going to let their beliefs bounce around. So they develop a defensive stance, a resistance to being convinced. And it's right that they do!
If someone confronts you with the PERFECT ARGUMENT, is it because the argument is true and revelatory? Or does it involve some sleight of hand? The latter is much more likely.
I tend to like the ethos/logos/pathos model. Arguments from clever people can sound convincing because ethos gets mixed in. And anyone can temporarily confuse someone by using pathos. This is why it's better to have arguments externalized in a form that can be reviewed on their own, logos only. It's the only style that can stand on its own without that ephemeral effect (aside from facts changing), and it's also the only one that can be adopted and owned by any listener that reviews it and proves it true to themselves.
It's usually dumb people who have so many facts and different arguments that one can't keep up with them.
And they usually have so many because they were convinced to pay disproportionate attention to the topic and don't see the need to check anything or reject bad sources.
The problem isn't the PERFECT ARGUMENT, it's the argument that doesn't look like an argument at all.
Take anti-vaxxers. If you try to argue with the science, you've already lost, because anti-vaxxers have been propagandised into believing they're protecting their kids.
How? By being told that vaccinations are promoted by people who are trying to harm their kids and exploit the public for cash.
And who tells them? People like them. Not scientists. Not those smart people who look down on you for being stupid.
No, it's influencers who are just like them, part of the same tribe. Someone you could socialise with. Someone like you.
Someone who only has your best interests at heart.
And that's how it works. That's why the anti-vax and climate denial campaigns run huge bot farms with vast social media holdings which insert, amplify, and reinforce the "These people are evil and not like us and want to make you poor and harm your kids" messaging, combined with "But believe this and you will keep your kids safe".
Far-right messaging doesn't argue rationally at all. It's deliberate and cynically calculated to trigger fear, disgust, outrage, and protectiveness.
Consider how many far-right hot button topics centre on protecting kids from "weird, different, not like us" people - foreigners, intellectuals, scientists, unorthodox creatives and entertainers, people with unusual sexualities, outgroup politicians. And so on.
So when someone tries to argue with it rationally, they get nowhere. The "argument" is over before it starts.
It's not even about rhetoric or cleverness - both of which are overrated. It's about emotional conditioning using emotional triggers, tribal framing, and simple moral narratives, embedded with constant repetition and aggressive reinforcement.
repetition breeds rationalism.
variety of phrasing breeds facts.
it's how the brain works. the more cognitive and perceptive angles agree on the observed, the more likely it is that the observed is really/actually observed.
polysemous language (ambiguity) makes it easy to manipulate the observed. reinterpretation, mere exposure and thus coopted, portfolio communist media and journalism, optimize, while using AI for everything will make it as efficient as it gets.
keep adding new real angles and they'll start to sweat or throw towels and tantrums and aim for the weak.
To add to your second point, those algorithms are extremely easy to game by states with the resources and desire to craft narratives. Specifically Russia and China.
There has actually been a pretty monumental shift in Russian election meddling tactics in the last 8 years. Previously we had the troll army, in which the primary operating tactic of their bot farms were to pose as Americans (as well as Poles, Czechs, Moldovans, Ukrainians, Brits, etc.) but push Russian propaganda. Those bot farms were fairly easy to spot and ban, and there was a ton of focus on it after the 2016 election, so that strategy was short lived.
Since then, Russia has shifted a lot closer to Chinese-style tactics, and now has a "goblin" army (contrasted with their troll army). This group no longer pushes the narratives themselves, but rather uses seemingly mindless engagement interactions like scrolling, upvoting, clicking on comments, replying to comments with LLMs, etc., in order to game what the social media algorithms show people. They merely amplify the narratives of actual Americans (not easily bannable bots) who happen to push views that are either in line with Russian propaganda, or rhetoric that Russian intelligence views as being harmful to the US. These techniques work spectacularly well for two reasons: the dopamine boost to users who say abominable shit, which encourages them to do more of it, and the morale-killing effect on people who might oppose such abominable shit but see how "popular" it is.
> These techniques work spectacularly well for two reasons
Do they work spectacularly well, though? E.g. the article you link shows that Twitter accounts holding anti-Ukrainian views received 49 fewer reposts on average during a 2-hour internet outage in Russia. Even granting that all those reposts were part of an organized campaign (it's hardly surprising that people reposting anti-Ukrainian content are primarily to be found in Russia) and that 49 reposts massively boosted the visibility of this content, its effect is still upper bounded by the effect of propaganda exposure on people's opinions, which is generally low. https://www.persuasion.community/p/propaganda-almost-never-w...
The best way to lie is not presenting false facts; it's curating facts to suit your narrative. It's also common to accidentally lie to yourself or others this way. See a great many news stories.
The act of curating facts itself is required to communicate anything because there are an infinite number of facts. You have to include some and exclude others, and you arrange them in a hierarchy of value that matches your sensibilities. This is necessary in order to perceive the world at all, because there are too many facts and most of them need to be filtered. Everyone does this by necessity. Your entire perceptual system and senses are undergirded by this framework.
There is no such thing as "objective" because it would include all things, which means it could not be perceived by anyone.
Another very good way to lie is to set up the framing such that any interpretation of any fact skews in your desired direction. Including which things are to be considered important/relevant, what kind of argument is considered valid/not. Done well, people might not even pick up that there is lying/misdirection involved. Rig the game.
The idea that people believe in climate change (or evolution) is odd considering people don't say they believe in General Relativity or atomic theory of chemistry. They just accept those as the best explanations for the evidence we have. But because climate change and evolution run counter to some people's values (often religious but also financially motivated), they get called beliefs.
You generally only oppose things you can grasp well enough to understand how they challenge other beliefs you have culturally or intuitively integrated.
Evolution directly challenges the idea that humans are very special creatures in a universe where mighty mystic forces care about them a lot.
Climate change, and the weight of human industry in it, directly challenges the lifestyle expectations of the wealthiest.
> But because climate change and evolution run counter to some people's values (often religious but also financially motivated), they get called beliefs
Thanks for your thoughts; they perfectly extend mine. I agree that it would be a sign of a very fragile belief system if it gets unwound by a single bit of contradictory evidence. And as for the "facts" we're getting 24/7 out of every microwave, that's just a sign of the complete decoupling of people's beliefs from empirical reality, in my humble opinion. Supply and demand and all that.
I would contend that empiricism is inadequate to discern what is real and what is true. Much of human experience and what is meaningful to being a person is not measurable nor quantifiable.
> Say what you want about the previous era of corporate controlled news media, at least the journalists in that era tried to present the relevant facts to the viewer.
If you think this reduced bias, you couldn't be more wrong - it only made the bias harder to debunk. Deciding which facts are "relevant" is one easy way to bias reporting, but the much easier, much more effective way is deciding which stories are "relevant". Journalists have their own convictions and causes, motivating which incidents they cast as isolated and random, and get buried in the news, and which are part of a wider trend, a "conversation that we as a nation must have", etc., getting front-page treatment.
> If you believe in climate change and encounter a situation where a group of scientists were proven to have falsified data in a paper on climate change, it really isn't enough information to change your belief in climate change, because the evidence of climate change is much larger than any single paper.
Although your wider point is sound, that specific example should undermine your belief quite significantly if you're a rational person.
1. It's a group of scientists and their work was reviewed, so they are probably all dishonest.
2. They did it because they expected it to work.
3. If they expected it to work it's likely that they did it before and got away with it, or saw others getting away with it, or both.
4. If there's a culture of people falsifying data and getting away with it, that means there's very likely to be more than one paper with falsified data. Possibly many such papers. After all, the authors have probably authored papers previously and those are all now in doubt too, even if fraud can't be trivially proven in every case.
5. Scientists often take data found in papers at face value. That's why so many claims are only found to not replicate years or decades after they were published. Scientists also build on each other's data. Therefore, there are likely to not only be undetected fraudulent papers, but also many papers that aren't directly fraudulent but build on them without the problem being detected.
6. Therefore, it's likely the evidence base is not as robust as previously believed.
7. Therefore, your belief in the likelihood of their claims being true should be lowered.
In reality, how much you should update your belief will depend on things like how the fraud was discovered, whether there were any penalties, and whether the scientists showed contrition. If the fraud was discovered by people outside of the field, nothing happened to the miscreants, and the scientists didn't care that they got caught, the amount you should update your belief should be much larger than if they were swiftly detected by robust systems, punished severely, and showed genuine regret afterwards.
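The kind of update being argued about can be made concrete with Bayes' rule. This is a toy sketch, not anyone's actual estimate: all the probabilities below are made-up numbers chosen only to show the mechanics of "lower your belief, but not to zero".

```python
# Toy Bayesian update: how discovering fraud (evidence E) should
# shift belief in a hypothesis H. All numbers are hypothetical,
# for illustration only.

def posterior(prior_h: float, p_e_given_h: float, p_e_given_not_h: float) -> float:
    """P(H | E) via Bayes' rule."""
    p_e = p_e_given_h * prior_h + p_e_given_not_h * (1 - prior_h)
    return p_e_given_h * prior_h / p_e

# Strong prior in H; assume fraud is somewhat likelier in fields
# whose central claim is actually false:
prior = 0.95
p_fraud_if_true = 0.05   # some fraud happens even in sound fields
p_fraud_if_false = 0.20  # fraud is likelier when the claim is shaky

updated = posterior(prior, p_fraud_if_true, p_fraud_if_false)
print(round(updated, 3))  # 0.826 -- belief drops, but stays high
```

Under these (invented) numbers the fraud case lowers the belief from 0.95 to about 0.83: a significant update, consistent with the comment above, without coming anywhere near abandoning the belief.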
You're making a chain of assumptions and deductions that are not necessarily true given the initial statement of the scenario. Just because you think those things logically follow doesn't mean that they do.
You also make throwaway assertions like "That's why so many claims are only found to not replicate years or decades after they were published." What is "so many claims"? The majority? 10%? 0.5%?
I totally agree with you that the nuances of the situation are very important to consider, and the things you mention are possibilities, but you are too eager to reject things if you think "that specific example should undermine your belief quite significantly if you're a rational person." You made lots of assumptions in these statements and I think a rational person with humility would not make those assumptions so quickly.
> It's a group of scientists and their work was reviewed, so they are probably all dishonest.
Peer review is a very basic check, more or less asking someone else in the field "Does this paper, as presented, make any sense?". It's often overvalued by people outside the field, but it's table stakes to the scientific conversation, not a seal of approval by the field as a whole.
>Scientists often take data found in papers at face value. That's why so many claims are only found to not replicate years or decades after they were published. Scientists also build on each other's data. Therefore, there are likely to not only be undetected fraudulent papers, but also many papers that aren't directly fraudulent but build on them without the problem being detected.
I think it's rare that scientists take things completely at face value. Even without fraud, it's easy for people to make mistakes, and it's rare that everyone in a field actually agrees on all the details, so if someone is relying on a paper for something, they will generally examine things quite closely, talk to the original authors, and, to whatever extent is practical, attempt to verify it themselves. The publishing process doesn't tend to reward this behavior, though, unfortunately. (And as a result, an external observer does not generally see the outcome of this process: if someone concludes that a result is BS, they're much more likely to drop it than try to publish a rebuttal, unless it's something particularly important.)
Even if only 0.1% of Chinese people engaged in theft, and that would be a much lower rate than in any developed country, you'd still get a million Chinese thieves. You could show a new one every day, bombarding people with images and news reports of how untrustworthy Chinese people are. The news reports themselves wouldn't even be misinformation, as all the people shown would actually be guilty of the crimes they were accused of. Nevertheless, people would draw the wrong conclusion.
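The base-rate arithmetic in that example is worth making explicit (the population figure is an approximation, and the 0.1% rate is the comment's hypothetical, not a real crime statistic):

```python
# Base-rate arithmetic: a tiny rate applied to a huge population
# still yields a large absolute number of cases.
population = 1_400_000_000  # approximate population of China
theft_rate = 0.001          # hypothetical 0.1% rate from the comment

thieves = int(population * theft_rate)
print(f"{thieves:,}")  # 1,400,000 -- "a million Chinese thieves"

# Showing one new case per day would take millennia to run out
# of fresh examples, so the stream of news reports never dries up.
years_of_daily_examples = thieves / 365
print(round(years_of_daily_examples))  # ~3836 years
```

The point survives the arithmetic: even a rate far below that of any developed country supplies enough true individual cases to sustain a misleading narrative indefinitely.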
Many people are curious about truth. But because of gaslighting, the lack of any single source of truth, and too much noise, people have checked out completely. People know something is fishy; they know barbarians are at the gate. But they also know that the gate is 10,000 km away, so they think, "Let me live my life peacefully in the meantime." They have lost hope in the system.
CS Peirce has a famous essay "The Fixation of Belief" where he describes various processes by which we form beliefs and what it takes to surprise/upset/unsettle them.
This blog post gestures at that idea while being an example of what Peirce calls the "a priori method". A certain framework is first settled upon for (largely) aesthetic reasons and then experience is analyzed in light of that framework. This yields comfortable conclusions (for those who buy the framework, anyhow).
For Peirce, all inquiry begins with surprise, sometimes because we've gone looking for it but usually not. About the a priori method, he says:
“[The a priori] method is far more intellectual and respectable from the point of view of reason than either of the others which we have noticed. But its failure has been the most manifest. It makes of inquiry something similar to the development of taste; but taste, unfortunately, is always more or less a matter of fashion, and accordingly metaphysicians have never come to any fixed agreement, but the pendulum has swung backward and forward between a more material and a more spiritual philosophy, from the earliest times to the latest. And so from this, which has been called the a priori method, we are driven, in Lord Bacon's phrase, to a true induction.”
Wow. I'm reminded of a great essay/blog I read years ago that I'll never find again, which said a good, engaging talk/presentation has to have an element of surprise. More specifically, you start with an exposition of what your audience already knows/believes, then you introduce your thesis, which is SURPRISING in terms of what they already know. Not too far out of the realm of belief, but just enough.
The bigger/more thought-diverse the audience, the harder this is to do.
I had a grad school mentor William Wells who taught us something similar. A good research publication or presentation should aim for "just the right amount of surprise".
Too much surprise and the scientific audience will dismiss you out of hand. How could you be right while all the prior research is dead wrong?
Conversely, too little surprise and the reader / listener will yawn and say but of course we all know this. You are just repeating standard knowledge in the field.
Despite the impact on audience reception, we tend to believe that most fields would benefit from robust replication studies, and that researchers shouldn't be penalized for confirming the well known.
And, sometimes there really is paradigm breaking research and common knowledge is eventually demonstrated to be very wrong. But often the initial researchers face years or decades of rejection.
my understanding (which is definitely not exhaustive!) is that the case between Galileo and the church was way more nuanced than is popularly retold, and had nothing whatsoever to do with Biblical literalism like the passage in Joshua about making the sun stand still.
Paul Feyerabend has a book called Against Method in which he essentially argues that it was the Catholic Church who was following the classical "scientific method" of weighing evidence between theories, and Galileo's hypothesis was rationally judged to be inferior to the existing models. Very fun read.
I completely agree with your comment. The common narrative about Galileo and the Church is often oversimplified and overlooks the intellectual context of the time. As you pointed out, it wasn’t about a crude Biblical literalism—after all, even centuries before Galileo, figures like Saint Thomas Aquinas, drawing on Aristotle, already accepted that the Earth is spherical.
By Galileo’s era, the Catholic Church was well aware of this scientific truth and actively engaged with astronomy and natural philosophy. The dispute was far more about competing models and the standards of evidence required, not a refusal to accept reason or observation.
Then I can’t help but think: if the author of the article didn’t even understand this, how can the rest of the article be correct if it started from a biased and almost false premise?
> Then I can’t help but think: if the author of the article didn’t even understand this, how can the rest of the article be correct if it started from a biased and almost false premise?
That seems pretty unfair. The article is clearly structured to treat the Galileo thing as an example, not a premise. It is supposed to be a familiar case to consider before going into unfamiliar ones. In that sense it clearly still works as an example even if it's false: does it not set you up to think about the general problem, even if it's a fictional anecdote? It's no different than using some observation about Lord of the Rings or Harry Potter as an example before setting into a point. The fact that it's fictional doesn't affect its illustrative merits.
Galileo started the trolling himself by putting the opposing theory in the mouth of Simplicius.
And even with his acquaintance with the pope, he ended up under house arrest. Far better than being burned alive, as the Church did to Giordano Bruno.
So yes, there are more nuances to the affair, but on the one hand, the case around the lack of observable parallax and other genuinely judicious reasoning doesn't make for a great narrative to sell; on the other hand, focusing on technical details is missing the forest for the trees with regard to the social issues at stake that the trial exemplified.
Was it during Galileo's era, or was it much earlier, with the Greek philosophers, that the idea of heliocentrism was rejected because of the lack of parallax movement of the stars? The idea of stars being so far away they wouldn't show parallax movement wasn't acceptable without stronger evidence than what was available at the time, given how massive that would make outer space, so the simpler explanation was that the sun moved.
> By Galileo’s era, the Catholic Church was well aware of this scientific truth and actively engaged with astronomy and natural philosophy.
I'm confused. Are you saying that the Church knew the Earth was round or not? If they knew, then it doesn't matter what arguments were made, it was all in bad faith and therefore wasn't scientific.
The author doesn't use the Galileo episode as a premise, only as a catchy illustration. If anything, the more nuanced version of the story seems to support their argument better than the simplified version does.
> Then I can’t help but think: if the author of the article didn’t even understand this, how can the rest of the article be correct if it started from a biased and almost false premise?
Same way Galileo could be correct about Earth circling the Sun despite basing it on incorrect assumptions :)
> the case between Galileo and the church was way more nuanced than is popularly retold
Ex historian here. This is true. It’s a complicated episode and its interpretation is made more murky by generations of people trying to use it to make a particular rhetorical point. Paul Feyerabend is guilty of this too, although he’s at least being very original in the contrarian philosophy of science he’s using it for.
If anyone is interested in the episode for its own sake (which is rare actually, unless you’re a renaissance history buff first and foremost), I’d probably recommend John Heilbron’s biography which has a pretty balanced take on the whole thing.
I have always thought the lesson here is to be careful when insulting those with a great deal of power over you. Pope Urban VIII was originally a patron and supporter of Galileo:
>...Indeed, although Galileo states in the preface of his book that the character is named after a famous Aristotelian philosopher (Simplicius in Latin, Simplicio in Italian), the name "Simplicio" in Italian also had the connotation of "simpleton."[55] Authors Langford and Stillman Drake asserted that Simplicio was modeled on philosophers Lodovico delle Colombe and Cesare Cremonini. Pope Urban demanded that his own arguments be included in the book, which resulted in Galileo putting them in the mouth of Simplicio. Some months after the book's publication, Pope Urban VIII banned its sale and had its text submitted for examination by a special commission
It wasn't his theory, it was that he presented it in the form of a dialogue with a character who was an obvious stand-in for the Pope, and then made that character sound like a complete idiot.
The heresy charges were an excuse to punish him for being disrespectful. He'd gotten approval from the Pope to publish; he would have been fine if he'd just been polite.
Obviously that's still petty and unjustified, but science denial wasn't the real reason for it.
The Catholic Church was quite tyrannical about opining on matters of theology. Heliocentrism vs Geocentrism, when discussed in terms of whether it's really true or not (as opposed to merely an interesting mathematical model), was considered to be this. That didn't mean that the church was unwilling to discuss it, but that they wanted the public discourse to be controlled to avoid it seeming to conflict with their domain, at least until they could be convinced (and Galileo's main problem is that he could convince very few people inside and outside the church, due to lack of available evidence) and change their official line themselves.
The other problem for Galileo was that he did basically just piss off a bunch of people (he was, by all accounts, very good at publicly dunking on people, whether they were right or wrong. He'd be a natural on modern social media). There was a large group who basically started conspiring against him, trying to implicate him in going against the church, and then his book (which was 'approved' through a very chaotic, almost comical sequence of bad timings and missed communication) managed to insult and piss off the Pope, who was previously a very close friend.
So, ultimately the broad thrust of the situation is not changed: the church was ultimately wrong and unreasonable in their demands, and Galileo was ultimately correct in rejecting Geocentrism, but the church was more reasonable than generally implied in the simplified telling, and Galileo was a lot less correct, and especially lacked good rational arguments and evidence for his specific model.
Galileo's friend Barberini became Pope and asked Galileo to write a book. But Barberini became paranoid about conspiracies and thought it had seditious, secretly-critical undertones.
I just recently watched a lecture about this and was fascinated.
Specifically, the (incorrect) model of the universe that was used in Europe at the time had been refined to the point that it was absurdly accurate. Even had they adopted a heliocentric model, there would have been no direct benefit for a long, long time. If anything, Galileo's work was rife with errors and mathematical problems that would have taken a lot of work to figure out.
So the proposal amounted to taking on a bunch of technical debt and switching costs for almost no benefit.
Regardless of what the standards of evidence were at the time, it surely wasn't "scientific" to threaten someone with prosecution for publishing a supposedly inferior hypothesis. That was politics.
Speaking of politics, the Reformation happened with nearly perfect timing, and several countries became safe havens for those who had disagreements with the Catholic Church. This window of safety helped incubate modern science during its critical early years. Less than 50 years after Galileo died, Newton published Principia. By then it was already well accepted, at least in England, that the Earth goes around the Sun, not the other way around.
Absolutely agree that it was politics, not science, but it wasn't really anti-science either. In a nutshell, his theory was fine on its own; he was punished for insulting the Pope.
> and had nothing whatsoever to do with Biblical literalism like the passage in Joshua about making the sun stand still.
The church is and was a large, often heterogeneous institution. For some, the issue was about conflict with literal interpretations of the Bible, not merely the predominant allegorical interpretations (a more widely held concern, at least as a pedagogic matter). AFAIU, while the pope wasn't of this mind, some of the clerics tapped to investigate were. See, e.g., the 1616 Consultant's Report,
> All said that this proposition is foolish and absurd in philosophy, and formally heretical since it explicitly contradicts in many places the sense of Holy Scripture, according to the literal meaning of the words and according to the common interpretation and understanding of the Holy Fathers and the doctors of theology.
Galileo had a trial 20 years before the second, more famous one, where he was banned from promoting Copernican ideas, and Copernican books in general were banned by the Inquisition and by papal order. But because 20 years later he insulted the Pope that was protecting him from being arrested, some people act like that means it had nothing to do with Heliocentrism.
The important thing is not why they thought they were right but the fact they could not tolerate being wrong, or even tolerate dissidence on that one little inconsequential thing.
That's why you have people today pushing for flat earth and creationism.
Because their whole shtick is we are always right about absolutely everything.
To be honest, I never saw the reason to make him into some sort of almost-martyr. People have been wrong while fighting for a good cause many times in history; stuff is always way more complex than a surface glance reveals.
The moral of the story isn't how great he was, but how horrible the church was in punishing any dissent (which itself was a highly political process) and how ridiculous it was that they had any sort of power over whole society. And power they had, and rarely used it for some greater good.
As I’ve grown older and witnessed history in action I’ve begun to understand that reality is much, much more complicated than the simple narratives of history we lean on as a society.
Just think of how many different competing narratives are currently in existence surrounding this tumultuous point in history and realize that at some point some of these narratives will become dominant. Over time as the events leave social memory the key conclusions will likely be remembered but a lot of the reasoning behind them will not. As it exits living memory most of the nuance and context is lost. Over time we may change the narrative by reconsidering aspects that were forgotten, recontextualizing events based on modern concepts and concerns, misunderstanding what happened, or even surreptitiously “modifying” what happened for political ends. Or to put it more plainly, history is written by the victors and can be rewritten as time goes on and the victors change.
I wish Hacker News would let me use emojis so I could put three red sirens after this man’s name.
Sungenis isn’t a good-faith investigator trying to shed light on nuances around Galileo’s argument. He’s a tradcath (old-school Catholic who rejects Vatican II) hack who wants to cast shadows on Galileo from as many directions as possible in the hopes that he can soften people up on the idea of Geocentrism. His approach is very cautious and incremental and relies a lot on innuendo; he makes it difficult to really pin him down on the things I just said about him. But if you look up the things this guy’s written and the kinds of people he hires to “write the dirty work” when necessary, it’s pretty clear what his project is.
Edit: I will note that I am not familiar with Paul Feyerabend and the book mentioned in the top comment, it’s totally possible that those are from a different school of thought more interested in good faith discussion about the scientific method (or not, I don’t know). I would just advise taking any “turns out” argument about Galileo and the Church with huge grains of salt, given that this topic attracts some very slippery people with ulterior motives who intentionally appeal to contrarians like many of us on this site.
That title is ultimately swinging too far the other way. Galileo was a lot less correct than commonly assumed, and the church was more reasonable, but the church was still wrong in the end, both factually and morally for the level of control over the discussion which they wielded.
> Robert A. Sungenis (born c. 1955) is an American Catholic apologist and advocate of the pseudoscientific belief that the Earth is the center of the universe. He has made statements about Jews and Judaism which have been criticized as being antisemitic, which he denies. Sungenis is a member of the Kolbe Center for the Study of Creation, a Catholic Young Earth creationist group.
Very engaging look at a very difficult topic to approach analytically.
I'm reminded of something I learned about the founder of Stormfront, the internet's first white supremacist forum. His child went on to attend college away from home, her first time away from her family, and over a period of roughly two years, she attended dinners with a group of Jewish students who challenged each of her beliefs one at a time. Each time, as she accepted the evidence her friends presented to her about a particular belief, she nonetheless would integrate the new information with her racist worldview. This continued piece by piece until there was nothing left of her racist worldview at all.
It's both heartening and disheartening at the same time, because if this person can change her mind after almost two decades of constant indoctrination during her formative years, then surely anyone can change their mind. That's the heartening part: the disheartening part is, of course, that the effort it took is far from scalable at present and much more difficult to apply to someone who remains plugged into whatever information sources they are getting their current fix of nonsense from.
I think this is just gloating. Children leaving home for college and quickly abandoning the belief systems of their family is almost more common than the opposite, where they maintain them. Especially if the belief system is something as unpopular as white supremacy mythology; not easy to make new friends at your new school if you don't give that up.
I'm sure she maintains many beliefs that many people would see as racist, along with her classmates. She hasn't been educated or fixed, she just left home.
IIRC, the stats say that overwhelmingly, children will become a version of their parents, including beliefs, etc. This actually seems more like the exception than the rule.
I remember my first year in college as being the time when I solidified my own first worldview. Prior to that, I had some ideas like the existence of God (in some form) that I was ambivalent about or maybe deferring final judgement. That's when I decided that I was an atheist.
Coincidentally, around the same time my twin brother became a serious Christian. He was socially integrated into a group. He finished college. I did not.
Then years later, maybe late 20s or early 30s, I became convinced that I had been wrong about my government my whole life and that they were not trustworthy. 9/11 being a false flag (which I still believe) was evidence of that.
The interesting thing was at the time when I was in New York I had completely accepted the idea that those three buildings had all turned into dust because the jet hit them. I remember walking around lower Manhattan to pick up a check and the dust was just coating everything.
I had even done some word processing on one of the twin towers leases shortly before the event while temping at Wachtell Lipton. At the time I made no connection.
Anyway, I think an underappreciated aspect of belief graphs is their connection to social groups and identity. It was much easier for me to question institutions when I already felt more marginalized and actually partly blamed society for it being so hard for me to handle my needs and find a place in it.
Another aspect of group membership and beliefs is practical. When groups are competing strategically, they often do so in ways that are not particularly ethical. It's much easier to justify this if you think of the other group as being deeply flawed, evil, invaders, etc.
Although some of these demonizations of the other group do have some kernel of truth to them, they are largely oversimplifications in the belief graphs, leading to dangerous inaccuracies.
What are the practical structural and cultural differences that lead to the group divisions? They largely seem geographic, economic, ethnic.
Could a more sophisticated, better integrated, and more accurate belief system help? Or do the social structures and networks largely define the groups?
Are we just basically mammalian ant colonies? Brutally fighting each other for dominance any time there is a resource conflict?
If the other side seems to be trying to hog important resources any time they get a chance, you perceive that you are not playing a fair game. It's not a civil interaction. The other side doesn't play by the rules or tell the truth or leave any subtlety in discourse. So why should your group, unless it wants to get wiped out?
In my worldview the faint hope is that having more abundance of resources will somehow lead to more civility.
> If the other side seems to be trying to hog important resources any time they get a chance, you perceive that you are not playing a fair game.
That's why I'm okay with Democratic states and counties using gerrymandering to keep seats. The Electoral College, and the Senate, and the House of Representatives, are tilted towards states that always vote Republican. It is already far from "One person, one vote". I'll fight fair when the people who want me dead and exiled fight fair first. I already have my honor in that I don't want them killed for their stupid beliefs.
To the author: I love this idea, but your blog has two problems that made it less enjoyable for me to read. The first is the pull quotes. I find them confusing and unnecessary, especially when they repeat sentences in the preceding paragraph. The second is that I got stuck on the moving graphs while scrolling on my phone. I suggest making them smaller with a different background color or simply make them static images.
Some of the core ideas here seem good, but the node/edge distinction feels too fuzzy. The node "Climate Change Threat" is a claim. Is the node "Efficiency" a claim? Can one challenge the existence of Efficiency? If one instead challenges the benefit of Efficiency, isn't that an edge attack?
I could give a bunch of other examples where the nodes in the article don't feel like apples-to-apples things. I feel less motivated to try to internalize the article due to this.
I think the structure inherently enables each node to be a claim (like "this thing exists"), but that there's value in making a node even if that node's claim is not particularly disagreeable, because the edges to that node might be disagreeable, or to provide more detail about how one node relates to another (e.g. through some intermediate node). In this case, maybe the main value in modeling "Efficiency" is to convey how innovation might lead to profit.
To me, it feels less fuzzy when you assume that all nodes and edges imply their own claims, and that it's just a matter of whether or not those claims are worth arguing. The fuzziness imo is based on the fact that the curator picks which nodes and edges exist, which therefore determines which claims exist and can be agreed or disagreed with, not to mention the overall legibility of the graph itself. But I would argue that a causal graph like this is better at representing reality than something like an argument tree, and that, while it might be fuzzy to determine which nodes should exist, at least there's less opinion involved about where nodes should be placed in relation to each other. Which imo makes the structure easier to refine given time and feedback.
The edges are labeled by transitive verbs, where the arrow points from the subject of that verb to the direct object. (I'm counting particle verbs, like "leads to", as verbs.) The nodes are labeled by nouns. If you can change a noun to a verb, I guess you would be changing what is an edge and what is a node.
Example: In the article's first diagram, there is a node labeled "Innovation". This could be replaced by a node labeled "Capitalist" and a node labeled "Improvement", with an arrow from the first to the second labeled "innovates."
So yes, if you can replace a node by an edge (and vice versa, although I don't give an example), this node vs. edge thing is fuzzy.
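For what it's worth, the noun/verb duality described above is easy to play with in code. A minimal sketch (the labels "Capitalist", "innovates", and "Improvement" are just the hypothetical example from the comment above, not anything taken from the article's actual graph):

```python
# A belief graph as (subject, verb, object) triples:
# nodes are noun labels, edges are transitive-verb labels.
graph = {
    ("Capitalism", "drives", "Innovation"),
    ("Innovation", "leads to", "Profit"),
}

def node_to_edge(graph, node, subject, verb, obj):
    """Replace a noun node with a verb edge: drop every triple that
    touched the old node, then add the single new triple."""
    kept = {t for t in graph if node not in (t[0], t[2])}
    kept.add((subject, verb, obj))
    return kept

# "Innovation" the node becomes "innovates" the edge:
g2 = node_to_edge(graph, "Innovation", "Capitalist", "innovates", "Improvement")
```

That this refactoring is lossy (the old edges through "Innovation" have to be rewired by hand rather than carried over mechanically) is one concrete way to see the fuzziness the parent comments describe.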
In 'Thus Spoke Zarathustra' the argument is made that the most important cultural changes happen outside the debate, where new structures of thought are built without being noticed, since without a competing thought structure we are unable to even perceive the new structure. It is the dissonances and the debates that let us introspect our own ideas. Without the dissonance we do not notice new ideas taking hold of us and changing us, and it is only unnoticed that truly radical changes can take place.
I'm wary of making an "arguments are soldiers" assumption where facts are mostly useful for making arguments, in an attempt to change people's minds.
We should be curious about what's going on in the world regardless of what ideologies we might find appealing. Knowing what's going on in the world is an end in itself. An article with some interesting evidence in it is useful even if you disagree with the main argument.
Facts may not change minds, but we should still support people who do the reporting that brings us the facts.
- Contradictory facts often shouldn't change beliefs because it is extremely rare for a single fact in isolation to undermine a belief. If you believe in climate change and encounter a situation where a group of scientists were proven to have falsified data in a paper on climate change, it really isn't enough information to change your belief in climate change, because the evidence of climate change is much larger than any single paper. It's only really after reviewing a lot of facts on both sides of an issue that you can really know enough to change your belief about something.
- The facts we're exposed to today are often extremely unrepresentative of the larger body of relevant facts. Say what you want about the previous era of corporate controlled news media, at least the journalists in that era tried to present the relevant facts to the viewer. The facts you are exposed to today are usually decided by an algorithm that is trying to optimize for engagement. And the people creating the content ("facts") that you see are usually extremely motivated/biased participants. There is zero effort by the algorithms or the content creators to present a reasonably representative set of facts on both sides of an issue
I remember reading an article on one of the classic rationalist blogs (but they write SO MUCH I can't possibly find it) describing something like "rational epistemic skepticism" – or maybe a better term I can't recall either. (As noted below: "Epistemic learned helplessness")

The basic idea: an average person can easily be intellectually overwhelmed by a clever person (maybe the person is smarter, or more educated, or maybe they just studied up on a subject a lot). They basically know this... and also know that it's not because the clever person is always right. Because there's lots of these people, and not every clever person thinks the same thing, so they obviously can't all be right. But the average person (average with respect to whatever subject) is still rational and isn't going to let their beliefs bounce around. So they develop a defensive stance, a resistance to being convinced. And it's right that they do!
If someone confronts you with the PERFECT ARGUMENT, is it because the argument is true and revelatory? Or does it involve some sleight of hand? The latter is much more likely.
And they usually have so many of those because they were convinced to pay disproportionate attention to it and don't see the need to check anything or reject bad sources.
https://slatestarcodex.com/2019/06/03/repost-epistemic-learn...
Take anti-vaxxers. If you try to argue with the science, you've already lost, because anti-vaxxers have been propagandised into believing they're protecting their kids.
How? By being told that vaccinations are promoted by people who are trying to harm their kids and exploit the public for cash.
And who tells them? People like them. Not scientists. Not those smart people who look down on you for being stupid.
No, it's influencers who are just like them, part of the same tribe. Someone you could socialise with. Someone like you.
Someone who only has your best interests at heart.
And that's how it works. That's why the anti-vax and climate denial campaigns run huge bot farms with vast social media holdings which insert, amplify, and reinforce the "These people are evil and not like us and want to make you poor and harm your kids" messaging, combined with "But believe this and you will keep your kids safe".
Far-right messaging doesn't argue rationally at all. It's deliberate and cynically calculated to trigger fear, disgust, outrage, and protectiveness.
Consider how many far-right hot button topics centre on protecting kids from "weird, different, not like us" people - foreigners, intellectuals, scientists, unorthodox creatives and entertainers, people with unusual sexualities, outgroup politicians. And so on.
So when someone tries to argue with it rationally, they get nowhere. The "argument" is over before it starts.
It's not even about rhetoric or cleverness - both of which are overrated. It's about emotional conditioning using emotional triggers, tribal framing, and simple moral narratives, embedded with constant repetition and aggressive reinforcement.
It's how the brain works. The more cognitive and perceptive angles agree on the observed, the more likely it is that the observed is really / actually observed.
Polysemous language (ambiguity) makes it easy to manipulate the observed. Reinterpretation, mere exposure and thus co-opted, portfolio communist media and journalism, optimize, while using AI for everything will make it as efficient as it gets.
Keep adding new real angles and they'll start to sweat or throw towels and tantrums and aim for the weak.
There has actually been a pretty monumental shift in Russian election meddling tactics in the last 8 years. Previously we had the troll army, in which the primary operating tactic of their bot farms were to pose as Americans (as well as Poles, Czechs, Moldovans, Ukrainians, Brits, etc.) but push Russian propaganda. Those bot farms were fairly easy to spot and ban, and there was a ton of focus on it after the 2016 election, so that strategy was short lived.
Since then, Russia has shifted a lot closer to Chinese style tactics, and now have a "goblin" army (contrasted with their troll army). This group no longer pushes the narratives themselves, but rather uses seemingly mindless engagement interactions like scrolling, upvoting, clicking on comments, replying to comments with LLMs, etc., in order to game what the social media algorithms show people. They merely push the narratives of actual Americans (not easily bannable bots) who happen to push views that are either in line with Russian propaganda, or rhetoric that Russian intelligence views as being harmful to the US. These techniques work spectacularly well for two reasons: the dopamine boost to users who say abominable shit as a way of encouraging them to do more, and as a morale-killer to people who might oppose such abominable shit but see how "popular" it is.
https://www.bruegel.org/first-glance/russian-internet-outage...
Do they work spectacularly well, though? E.g. the article you link shows that Twitter accounts holding anti-Ukrainian views received 49 fewer reposts on average during a 2-hour internet outage in Russia. Even granting that all those reposts were part of an organized campaign (it's hardly surprising that people reposting anti-Ukrainian content are primarily to be found in Russia) and that 49 reposts massively boosted the visibility of this content, its effect is still upper bounded by the effect of propaganda exposure on people's opinions, which is generally low. https://www.persuasion.community/p/propaganda-almost-never-w...
Hah, a "monkey amplifier" army! Look at garbage coming out of infinite monkeys keyboards and boost what fits. Sigh
...or USA
There is no such thing as "objective" because it would include all things, which means it could not be perceived by anyone.
Evolution directly challenges the idea that humans are very special creatures in a universe where mighty mystic forces care about them a lot.
Climate change, and the weight of human industry in it, directly challenges the lifestyle expectations of the wealthiest.
Hey, weren't we just talking about propaganda?
... But that algorithm is still corporate controlled.
If you think this reduced bias, you couldn't be more wrong - it only made the bias harder to debunk. Deciding which facts are "relevant" is one easy way to bias reporting, but the much easier, much more effective way is deciding which stories are "relevant". Journalists have their own convictions and causes, motivating which incidents they cast as isolated and random, and get buried in the news, and which are part of a wider trend, a "conversation that we as a nation must have", etc., getting front-page treatment.
A typical example: And third, the failure of its findings to attract much notice, at least so far, suggests that scholars, medical institutions and members of the media are applying double standards to such studies. - https://www.economist.com/united-states/2024/10/27/the-data-... (unpaywalled: https://archive.md/Mwjb4)
Although your wider point is sound, that specific example should undermine your belief quite significantly if you're a rational person.
1. It's a group of scientists and their work was reviewed, so they are probably all dishonest.
2. They did it because they expected it to work.
3. If they expected it to work it's likely that they did it before and got away with it, or saw others getting away with it, or both.
4. If there's a culture of people falsifying data and getting away with it, that means there's very likely to be more than one paper with falsified data. Possibly many such papers. After all, the authors have probably authored papers previously and those are all now in doubt too, even if fraud can't be trivially proven in every case.
5. Scientists often take data found in papers at face value. That's why so many claims are only found to not replicate years or decades after they were published. Scientists also build on each other's data. Therefore, there are likely to not only be undetected fraudulent papers, but also many papers that aren't directly fraudulent but build on them without the problem being detected.
6. Therefore, it's likely the evidence base is not as robust as previously believed.
7. Therefore, your belief in the likelihood of their claims being true should be lowered.
In reality how much you should update your belief will depend on things like how the fraud was discovered, whether there were any penalties, and whether the scientists showed contrition. If the fraud was discovered by people outside of the field, nothing happened to the miscreants and the scientists didn't care that they got caught, the amount you should update your belief should be much larger than if they were swiftly detected by robust systems, punished severely and showed genuine regret afterwards.
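To put a toy number on "how much you should update": treating the belief as a probability and applying Bayes' rule, with likelihoods I am inventing purely for illustration (nothing here comes from the thread):

```python
def update_on_fraud(prior, p_fraud_if_true, p_fraud_if_false):
    """P(claim true | a fraudulent paper was found), by Bayes' rule."""
    num = p_fraud_if_true * prior
    return num / (num + p_fraud_if_false * (1 - prior))

# Invented numbers: fraud is assumed somewhat likelier in a field whose
# central claim is false, so discovering one fraudulent paper lowers the
# belief from 0.95 to roughly 0.88 rather than collapsing it outright.
p = update_on_fraud(prior=0.95, p_fraud_if_true=0.02, p_fraud_if_false=0.05)
```

How far apart the two likelihoods sit (discovery by outsiders, no penalties, no contrition would all widen the gap) is exactly what determines the size of the update described above.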
You also make throwaway assertions like "That's why so many claims are only found to not replicate years or decades after they were published." What is "so many claims"? The majority? 10%? 0.5%?
I totally agree with you that the nuances of the situation are very important to consider, and the things you mention are possibilities, but you are too eager to reject things if you think "that specific example should undermine your belief quite significantly if you're a rational person." You made lots of assumptions in these statements and I think a rational person with humility would not make those assumptions so quickly.
Peer review is a very basic check, more or less asking someone else in the field "Does this paper, as presented, make any sense?". It's often overvalued by people outside the field, but it's table stakes to the scientific conversation, not a seal of approval by the field as a whole.
>Scientists often take data found in papers at face value. That's why so many claims are only found to not replicate years or decades after they were published. Scientists also build on each other's data. Therefore, there are likely to not only be undetected fraudulent papers, but also many papers that aren't directly fraudulent but build on them without the problem being detected.
I think it's rare that scientists take things completely at face value. Even without fraud, it's easy for people to make mistakes and it's rare that everyone in a field actually agrees on all the details, so if someone is relying on a paper for something, they will generally examine things quite closely, talk to the original authors, and to whatever extent practical attempt to verify it themselves. The publishing process doesn't tend to reward this behavior, though, unfortunately (And also as a result, an external observer does not generally see the results of this: if someone concludes that a result is BS as a result of this process, they're much more likely to drop it than try to publish a rebuttal, unless it's something that is particularly important)
Even if only 0.1% of Chinese people engaged in theft, and that would be a much lower rate than in any developed country, you'd still get a million Chinese thieves. You could show a new one every day, bombarding people with images and news reports of how untrustworthy Chinese people are. The news reports themselves wouldn't even be misinformation, as all the people shown would actually be guilty of the crimes they were accused of. Nevertheless, people would draw the wrong conclusion.
The essay: https://www.peirce.org/writings/p107.html
This blog post gestures at that idea while being an example of what Peirce calls the "a priori method". A certain framework is first settled upon for (largely) aesthetic reasons and then experience is analyzed in light of that framework. This yields comfortable conclusions (for those who buy the framework, anyhow).
For Peirce, all inquiry begins with surprise, sometimes because we've gone looking for it but usually not. About the a priori method, he says:
“[The a priori] method is far more intellectual and respectable from the point of view of reason than either of the others which we have noticed. But its failure has been the most manifest. It makes of inquiry something similar to the development of taste; but taste, unfortunately, is always more or less a matter of fashion, and accordingly metaphysicians have never come to any fixed agreement, but the pendulum has swung backward and forward between a more material and a more spiritual philosophy, from the earliest times to the latest. And so from this, which has been called the a priori method, we are driven, in Lord Bacon's phrase, to a true induction.”
The bigger/more thought-diverse the audience, the harder this is to do.
Too much surprise and the scientific audience will dismiss you out of hand. How could you be right while all the prior research is dead wrong?
Conversely, too little surprise and the reader / listener will yawn and say but of course we all know this. You are just repeating standard knowledge in the field.
Despite the impact on audience reception we tend to believe that most fields would benefit from robust replication studies and the researchers shouldn't be penalized for confirming the well known.
And, sometimes there really is paradigm breaking research and common knowledge is eventually demonstrated to be very wrong. But often the initial researchers face years or decades of rejection.
Paul Feyerabend has a book called Against Method in which he essentially argues that it was the Catholic Church who was following the classical "scientific method" of weighing evidence between theories, and Galileo's hypothesis was rationally judged to be inferior to the existing models. Very fun read.
By Galileo’s era, the Catholic Church was well aware of this scientific truth and actively engaged with astronomy and natural philosophy. The dispute was far more about competing models and the standards of evidence required, not a refusal to accept reason or observation.
Then I can’t help but think: if the author of the article didn’t even understand this, how can the rest of the article be correct if it started from a biased and almost false premise?
That seems pretty unfair. The article is clearly structured to treat the Galileo thing as an example, not a premise. It is supposed to be a familiar case to consider before going into unfamiliar ones. In that sense it clearly still works as an example even if it's false: does it not set you up to think about the general problem, even if it's a fictional anecdote? It's no different than using some observation about Lord of the Rings or Harry Potter as an example before setting into a point. The fact that it's fictional doesn't affect its illustrative merits.
And even with his acquaintance with the pope, he ended up under house arrest. Far better than being burned alive, as the Church did to Giordano Bruno.
So, yes, there are more nuances to the affair. But on the one hand, the lack of observable parallax and other genuinely judicious reasoning doesn't make for a great narrative to sell; and on the other hand, focusing on technical details rather misses the forest for the trees regarding the social issues at stake that the trial exemplified.
I'm confused. Are you saying that the Church knew the Earth was round or not? If they knew, then it doesn't matter what arguments were made, it was all in bad faith and therefore wasn't scientific.
EDIT: Never mind, I misread
Same way Galileo could be correct about Earth circling the Sun despite basing it on incorrect assumptions :)
Ex historian here. This is true. It’s a complicated episode and its interpretation is made more murky by generations of people trying to use it to make a particular rhetorical point. Paul Feyerabend is guilty of this too, although he’s at least being very original in the contrarian philosophy of science he’s using it for.
If anyone is interested in the episode for its own sake (which is rare actually, unless you’re a renaissance history buff first and foremost), I’d probably recommend John Heilbron’s biography which has a pretty balanced take on the whole thing.
Perhaps I'm missing some nuance here, but I don't see why a rational argument about competing models would require such drastic suppression.
>...Indeed, although Galileo states in the preface of his book that the character is named after a famous Aristotelian philosopher (Simplicius in Latin, Simplicio in Italian), the name "Simplicio" in Italian also had the connotation of "simpleton."[55] Authors Langford and Stillman Drake asserted that Simplicio was modeled on philosophers Lodovico delle Colombe and Cesare Cremonini. Pope Urban demanded that his own arguments be included in the book, which resulted in Galileo putting them in the mouth of Simplicio. Some months after the book's publication, Pope Urban VIII banned its sale and had its text submitted for examination by a special commission
https://en.wikipedia.org/wiki/Galileo_affair
The heresy charges were an excuse to punish him for being disrespectful. He'd gotten approval from the Pope to publish; he would have been fine if he'd just been polite.
Obviously that's still petty and unjustified, but science denial wasn't the real reason for it.
The other problem for Galileo was that he did basically just piss off a bunch of people (he was, by all accounts, very good at publicly dunking on people, whether they were right or wrong. He'd be a natural on modern social media). There was a large group who basically started conspiring against him, trying to implicate him in going against the church, and then his book (which was 'approved' through a very chaotic, almost comical sequence of bad timings and missed communication) managed to insult and piss off the Pope, who was previously a very close friend.
So, ultimately the broad thrust of the situation is not changed: the church was ultimately wrong and unreasonable in their demands, and Galileo was ultimately correct in rejecting Geocentrism, but the church was more reasonable than generally implied in the simplified telling, and Galileo was a lot less correct, and especially lacked good rational arguments and evidence for his specific model.
Galileo's friend Barberini became Pope and asked Galileo to write a book. But Barberini became paranoid about conspiracies and thought it had seditious, secretly-critical undertones.
Specifically, the (incorrect) model of the universe that was used in Europe at the time had been refined to the point that it was absurdly accurate. Even had they adopted a heliocentric model, there would have been no direct benefit for a long, long time. If anything, Galileo's work was rife with errors and mathematical problems that would have taken a lot of work to figure out.
So the argument was to take on a bunch of technical debt and switching costs for almost no benefits.
Speaking of politics, the Reformation happened with nearly perfect timing and several countries became safe havens for those who had disagreements with the Catholic Church. This window of safety helped incubate modern science during its critical early years. Less than 50 years after Galileo died, Newton published Principia. By then it was already well accepted, at least in England, that the Earth goes around the Sun, not the other way around.
The church is and was a large, often heterogeneous institution. For some, the issue was about conflict with literal interpretations of the Bible, not merely the predominant allegorical interpretations (a more widely held concern, at least as a pedagogic matter). AFAIU, while the pope wasn't of this mind, some of the clerics tapped to investigate were. See, e.g., the 1616 Consultant's Report,
> All said that this proposition is foolish and absurd in philosophy, and formally heretical since it explicitly contradicts in many places the sense of Holy Scripture, according to the literal meaning of the words and according to the common interpretation and understanding of the Holy Fathers and the doctors of theology.
https://www.vaticanobservatory.org/sacred-space-astronomy/in...
That's why you have people today pushing for flat earth and creationism.
Because their whole shtick is we are always right about absolutely everything.
The moral of the story isn't how great he was, but how horrible the church was in punishing any dissent (which itself was a highly political process) and how ridiculous it was that they had any sort of power over whole society. And power they had, and rarely used it for some greater good.
I think the best reason is what you already describe:
> how horrible the church was in punishing any dissent
Cancel culture of the time.
I mean, he was put under house arrest. Many nobles would have made his life far more unpleasant if he had presented them as he presented the pope.
Galileo Was Wrong: The Church Was Right CD-ROM – September 1, 2007 by Robert A. Sungenis (Author), Robert J. Bennett (Author)
I wish Hacker News would let me use emojis so I could put three red sirens after this man’s name.
Sungenis isn’t a good-faith investigator trying to shed light on nuances around Galileo’s argument. He’s a tradcath (old-school Catholic who rejects Vatican II) hack who wants to cast shadows on Galileo from as many directions as possible in the hopes that he can soften people up on the idea of Geocentrism. His approach is very cautious and incremental and relies a lot on innuendo; he makes it difficult to really pin him down on the things I just said about him. But if you look up the things this guy’s written and the kinds of people he hires to “write the dirty work” when necessary, it’s pretty clear what his project is.
Edit: I will note that I am not familiar with Paul Feyerabend and the book mentioned in the top comment, it’s totally possible that those are from a different school of thought more interested in good faith discussion about the scientific method (or not, I don’t know). I would just advise taking any “turns out” argument about Galileo and the Church with huge grains of salt, given that this topic attracts some very slippery people with ulterior motives who intentionally appeal to contrarians like many of us on this site.
> Robert A. Sungenis (born c. 1955) is an American Catholic apologist and advocate of the pseudoscientific belief that the Earth is the center of the universe. He has made statements about Jews and Judaism which have been criticized as being antisemitic, which he denies. Sungenis is a member of the Kolbe Center for the Study of Creation, a Catholic Young Earth creationist group.
I'm reminded of something I learned about the founder of Stormfront, the internet's first white supremacist forum. His child went on to attend college away from home, her first time away from her family, and over a period of roughly two years, she attended dinners with a group of Jewish students who challenged each of her beliefs one at a time. Each time, as she accepted the evidence her friends presented about a particular belief, she would nonetheless integrate the new information into what remained of her racist worldview. This continued, piece by piece, until there was nothing left of that worldview at all.
It's both heartening and disheartening at the same time, because if this person can change her mind after almost two decades of constant indoctrination during her formative years, then surely anyone can change their mind. That's the heartening part: the disheartening part is, of course, that the effort it took is far from scalable at present and much more difficult to apply to someone who remains plugged into whatever information sources they are getting their current fix of nonsense from.
I'm sure she maintains many beliefs that many people would see as racist, along with her classmates. She hasn't been educated or fixed, she just left home.
Coincidentally, around the same time my twin brother became a serious Christian. He was socially integrated into a group. He finished college. I did not.
Then years later, maybe late 20s or early 30s, I became convinced that I had been wrong about my government my whole life and that they were not trustworthy. 9/11 being a false flag (which I still believe) was evidence of that.
The interesting thing was at the time when I was in New York I had completely accepted the idea that those three buildings had all turned into dust because the jet hit them. I remember walking around lower Manhattan to pick up a check and the dust was just coating everything.
I had even done some word processing on one of the twin towers leases shortly before the event while temping at Wachtell Lipton. At the time I made no connection.
Anyway, I think an underappreciated aspect of belief graphs is their connection to social groups and identity. It was much easier for me to question institutions when I already felt more marginalized and actually partly blamed society for it being so hard for me to handle my needs and find a place in it.
Another aspect of group membership and beliefs is practical. When groups are competing strategically, they often do so in ways that are not particularly ethical. It's much easier to justify this if you think of the other group as being deeply flawed, evil, invaders, etc.
Although some of these demonizations of the other group do have some kernel of truth to them, they are largely oversimplifications in the belief graph, leading to dangerous inaccuracies.
What are the practical structural and cultural differences that lead to the group divisions? They largely seem geographic, economic, and ethnic.
Could a more sophisticated, better integrated, and more accurate belief system help? Or do the social structures and networks largely define the groups?
Are we just basically mammalian ant colonies? Brutally fighting each other for dominance any time there is a resource conflict?
If the other side seems to be trying to hog important resources any time they get a chance, you perceive that you are not playing a fair game. It's not a civil interaction. The other side doesn't play by the rules or tell the truth or leave any subtlety in discourse. So why should your group, unless it wants to get wiped out?
In my worldview, the faint hope is that a greater abundance of resources will somehow lead to more civility.
That's why I'm okay with Democratic states and counties using gerrymandering to keep seats. The Electoral College, and the Senate, and the House of Representatives, are tilted towards states that always vote Republican. It is already far from "One person, one vote". I'll fight fair when the people who want me dead and exiled fight fair first. I already have my honor in that I don't want them killed for their stupid beliefs.
I could give a bunch of other examples where the nodes in the article don't feel like apples-to-apples things. I feel less motivated to try to internalize the article due to this.
To me, it feels less fuzzy when you assume that all nodes and edges imply their own claims, and that it's just a matter of whether or not those claims are worth arguing. The fuzziness imo is based on the fact that the curator picks which nodes and edges exist, which therefore determines which claims exist and can be agreed or disagreed with, not to mention the overall legibility of the graph itself. But I would argue that a causal graph like this is better at representing reality than something like an argument tree, and that, while it might be fuzzy to determine which nodes should exist, at least there's less opinion involved about where nodes should be placed in relation to each other. Which imo makes the structure easier to refine given time and feedback.
Example: In the article's first diagram, there is a node labeled "Innovation". This could be replaced by a node labeled "Capitalist" and a node labeled "Improvement", with an arrow from the first to the second labeled "innovates."
So yes, if you can replace a node by an edge (and vice versa, although I don't give an example), this node vs. edge thing is fuzzy.
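The node-vs-edge fuzziness can be sketched concretely. This is a minimal illustration, not anyone's actual tool: the dict-of-adjacency-lists representation and the "Capitalist"/"Innovation"/"Improvement" labels are made up here to mirror the example above.

```python
# Two encodings of the same claim in a toy belief graph.
# Representation 1: "Innovation" is its own node in the causal chain.
graph_with_node = {
    "Capitalist": [("produces", "Innovation")],
    "Innovation": [("causes", "Improvement")],
}

# Representation 2: the intermediate node folded into a labeled edge.
graph_with_edge = {
    "Capitalist": [("innovates", "Improvement")],
}

def reachable(graph, start):
    """Return every node reachable from `start`, ignoring edge labels."""
    seen, stack = set(), [start]
    while stack:
        node = stack.pop()
        for _label, target in graph.get(node, []):
            if target not in seen:
                seen.add(target)
                stack.append(target)
    return seen

# Both encodings agree on the downstream consequence...
assert "Improvement" in reachable(graph_with_node, "Capitalist")
assert "Improvement" in reachable(graph_with_edge, "Capitalist")
# ...but only the first exposes "Innovation" as a claim a reader can
# dispute on its own: the curator's choice of nodes determines which
# claims exist in the graph at all.
assert "Innovation" not in reachable(graph_with_edge, "Capitalist")
```

Either encoding supports the same end-to-end inference; what changes is which intermediate claims are addressable, which is exactly why the curator's node choices matter.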
We should be curious about what's going on in the world regardless of what ideologies we might find appealing. Knowing what's going on in the world is an end in itself. An article with some interesting evidence in it is useful even if you disagree with the main argument.
Facts may not change minds, but we should still support people who do the reporting that brings us the facts.