rom1v · 2 years ago
On a related note, sometimes someone may be wrong precisely because he knows more about the subject (but not enough).

A canonical example could be about the "leap year" rules.

For context, the Earth completes one revolution around the Sun in about 365.2422 days, which the Gregorian calendar approximates as 365.2425, so we use leap years to compensate (a code sketch follows the list):

- add 1 day every 4 years (365 + 1/4 = 365.25)

- but not every 100 years (365.25 - 1/100 = 365.24)

- but add a day anyway every 400 years (365.24 + 1/400 = 365.2425)
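
A minimal sketch of the combined rule (assuming Python; the stdlib's calendar.isleap implements the same logic):

  def is_leap(year):
      # Divisible by 4, except centuries, except every 400 years.
      return year % 4 == 0 and (year % 100 != 0 or year % 400 == 0)

  is_leap(2024)  # True: multiple of 4
  is_leap(1900)  # False: multiple of 100 but not of 400
  is_leap(2000)  # True: multiple of 400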

Suppose most people only know the first part (1 additional day every 4 years). If we ask "is 2000 a leap year?", they would answer "Of course, 2000 is a multiple of 4". And they would get the correct result.

Now, suppose someone has started to study the "subject" (here, there is nothing to study; this is a trivial example for illustration) and is aware of the second rule (but not the third one). He would say "Aha, no, 2000 is not a leap year, because it is a multiple of 100". But he would get the wrong answer.

My impression is that this kind of mistake happens often while learning a subject: by studying, we encounter exceptions or surprising facts that we then apply too broadly (to the point of making absurd claims, which look absurd for the wrong reasons).

EGreg · 2 years ago
Far more broadly, think of logic as just a low-dimensional approximation of something complex.

Humans deal with logic and are taught "i before e except after c" and the like. But even for describing human languages, such rules may be inadequate.

What AI does is essentially build a model by which you can search a latent space quickly, both to classify input and to generate output.

You can throw the recorded motions of the planets and stars at it and it might find physical laws that have 80 variables, while humans want to deal only with simplified versions like Kepler's laws of planetary motion.

In the vacuum of space those laws may be enough, but when it comes to the complexity of chemistry, biology, genetics, politics, etc., the AI might have a far better model that we can never understand. Like for dietary recommendations. Would people follow them?

And what if they are wrong in some other ways? Like how humans beat AlphaGo through its blind spot or how you can fool face recognition by wearing a hoodie etc.

karaterobot · 2 years ago
The bimodal variation of this is the famous (?) quote about the U.S. civil war, which goes something like: in elementary school, you learn the Civil War was about slavery. In high school, you learn it was about economics. In college, you learn it was actually about slavery.
yllautcaj · 2 years ago
"A little knowledge is a dangerous thing."
robgibbons · 2 years ago
From "A Little Learning," by Alexander Pope

A little learning is a dangerous thing;
Drink deep, or taste not the Pierian spring:
There shallow draughts intoxicate the brain,
And drinking largely sobers us again.

ilyt · 2 years ago
In IT we call those people "power users", the absolute bane of tech support.
gorlilla · 2 years ago
In power-user-land we call IT glorified gatekeepers.

Neither label is entirely accurate, not everyone in either group fits the description, and a person can be both at the same time or at different times.

fossuser · 2 years ago
This can come up in other interesting ways.

There are human behaviors that appear to be entirely selected for (and then kept around via culture) [0].

In this case a population may do something like have pregnant women avoid eating sharks which are otherwise a normal part of their diet. They don't know why they don't eat the sharks, they just don't. It turns out the sharks contain something that causes birth defects. Commonly people think someone must have realized this and then that original knowledge was forgotten, but it's quite likely it was never known and the behavior was entirely selected (pregnant women that didn't eat the sharks more successfully reproduced).

When you press the women on why, they don't know; if forced to answer, they make something up ("it'll give my baby shark skin").

You could imagine someone thinking that's stupid and then eating the sharks and getting a baby with birth defects.

I think this explains a lot about why superstitious belief is so common in humans and animals.

Reason is obviously selectively advantageous, but a little reason incorrectly or overconfidently applied can also be harmful. Most people are bad at individual reasoning (hence the endless conspiracy theories) and are often better off deferring to consensus.

For every contrarian who unlocks massive value by being contrarian and correct, there are ten cranks who just hold false beliefs that potentially harm themselves and others.

[0]: https://slatestarcodex.com/2019/06/04/book-review-the-secret...

behnamoh · 2 years ago
Also applies to creativity.

- Knowledge = little ==> little creativity to add something new

- Knowledge = mid ==> great creativity

- Knowledge = high ==> little creativity to add something new

ilyt · 2 years ago
I feel like that's mostly anecdotal. Nobody remembers the "mid knowledge people that did nothing interesting"
onemoresoop · 2 years ago
Could you please expand on this?
banannaise · 2 years ago
The true expert realizes what question is being asked, and grabs a calendar.

Two people, equally knowledgeable and given the same problem, will not necessarily come up with the same solution. Leap years are not a mathematical fact; they are an engineering solution to a mathematical challenge. An expert looking at orbits is unlikely to decide that we must specifically add an extra day to February; you only learn that we use this particular solution from calendars themselves, not by deriving it.
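
To make that concrete, a small illustration (assuming Python; calendar.isleap is in the stdlib): asking whether 2000 is a leap year is a question about the convention, so the honest answer consults the encoded convention rather than re-deriving it from orbital mechanics.

  import calendar

  # "Grabbing the calendar": consult the encoded Gregorian convention.
  calendar.isleap(2000)  # True, because the convention says so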

ndsipa_pomu · 2 years ago
A simpler example may be that a person with one clock knows the time, but a person with two clocks is never quite sure.
mrob · 2 years ago
It's a popular phrase, but it doesn't make sense if you think about it. The person with two clocks can average the time of both of them and get a result they can be more confident in than the person with only one.
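
A footnote on why averaging helps, under the assumption that the two clocks err independently with the same spread sigma:

  \mathrm{Var}\!\left(\frac{X_1 + X_2}{2}\right) = \frac{\sigma^2 + \sigma^2}{4} = \frac{\sigma^2}{2}

So the averaged reading is off by a factor of sqrt(2) less, on average. (If one clock is systematically wrong, though, averaging only dilutes the bias rather than removing it.)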
watwut · 2 years ago
I have many clocks and typically I am sure about the time. I keep them in sync or know which one is off.

sopooneo · 2 years ago
This follows a pattern I've noticed where an expert's approach may seem similar to a pure novice's, with only the intermediate practitioner seeming to follow any rules.
ajuc · 2 years ago
This is so common there's a meme template :)

https://pbs.twimg.com/media/FffvEX5UAAAysZC?format=jpg&name=...

cfiggers · 2 years ago
First you learn the conventional wisdom.

Then, you learn that the conventional wisdom is wrong.

Finally, you learn that you didn't actually learn the conventional wisdom the first time around, typically by rediscovering it yourself.

hsod · 2 years ago
I think this is pretty much what the term "midwit" refers to
hirundo · 2 years ago

  Socrates then, by a series of ignorant-sounding questions, forced the others into such a mélange of self-contradictions that they would finally break down and admit they didn't know what they were talking about.

  It is the mark of the marvelous toleration of the Athenians that they let this continue for decades and that it wasn't till Socrates turned seventy that they broke down and forced him to drink poison.
It's seldom pleasant to be (accurately) instructed in your own ignorance, particularly by someone pretending not to know anything. Asimov's theory that Socrates' death was a result of his habit of patronizing everyone rather than the actual charges is plausible.

the_af · 2 years ago
Also, in internet arguments, there's nothing more infuriating than the other person appointing themselves as some kind of modern-day Socrates and "teaching" you.
PrimeMcFly · 2 years ago
Forgive me, if I may be so kind as to ask, but what makes you think such people are trying to "teach" you?
tacitusarc · 2 years ago
There can be a balance here. Often I'll have a viewpoint, and when I hear a different one, I won't immediately say I disagree. Instead I'll ask questions to discover where the gap in our understanding lies. Then, it may be helpful to share that I have additional knowledge (if I do), or to learn something I was missing, or to recognize we value different things and move on. Immediate disagreement is often far less productive.
sopooneo · 2 years ago
Methods for coming to understanding between parties acting in good faith really fascinate me. One thing I've noticed is the crux often lies with some assumption each party finds so fundamental that they wouldn't bother to state it, and don't even necessarily realize they hold the view *.

As a gross example, my wife told me the other day of a buyer of some Craigslist item who called to say she couldn't find our address. They went back and forth, each getting more and more flummoxed by the other's statement of what streets were where. Finally it was discovered that the caller was in Cambridge, Ontario, having misnavigated Craigslist, while we were in Cambridge, MA.

It is mistakes as fundamental as these that are hard to discover. One help I've often found is to bring in a fresh third party.

* Saying to "state your assumptions" seems futile to me. There are too many. The sun will rise, time will continue to tick by at the same rate, your friend didn't change his name yesterday, Coke still sells soft drinks.

ndsipa_pomu · 2 years ago
As a counterpoint, people acting in bad faith can continue asking questions indefinitely. It's usually known as "sealioning":

https://wondermark.com/c/1k62/

mlsu · 2 years ago
Like many people, I took a college course that talked all about Socrates and we read the Apologia and stuff.

I had assumed that it was basically all fiction: that there was some guy who kept being annoying by asking questions, so much so that he was put to death. It's just a little too cute to be historical fact.

And that all of us discussing this stuff as if it actually happened is just an exercise in reiterating the truthiness of the underlying idea; that if we said it was fiction at the start, we'd be doing lit analysis rather than philosophy.

Right?

rck · 2 years ago
No, there's pretty good evidence that Socrates did exist, though there is debate about what he actually believed and taught, since in Plato's later dialogues the character of Socrates typically was advancing Plato's position. See here: https://en.m.wikipedia.org/wiki/Socratic_problem
justrealist · 2 years ago
The impressive thing about Asimov — incidental to the quality of the writing — is he probably sat down and blasted that out in an hour top-to-bottom with no edits. Volumetrically one of the most impressive authors in history.
iainmerrick · 2 years ago
There's a funny story in one of his essays where he's talking to Heinlein about this strange habit other writers have of endlessly redrafting.

"I just write down the first draft, read it through from start to finish and write up the second draft, and that's it," Asimov says.

"What's the second draft for?" Heinlein says.

the_af · 2 years ago
This seems in contrast to Borges, who was never done with corrections and redrafting, going as far as to buy back an edition of a book already in print so that he could make additional corrections (or so the story goes...).

Anyway, I'm a fan of Asimov.

jimmaswell · 2 years ago
It made the point in the first few paragraphs and the rest was a long-winded beating of a dead horse. It could have done with more editing.
irrational · 2 years ago
Not me. I’d like a lot more. Particularly I’d like this part fleshed out:

> This particular thesis was addressed to me a quarter of a century ago by John Campbell, who specialized in irritating me.

I only know of Campbell from "The Hero with a Thousand Faces". I'd love to read more about how Campbell specialized in irritating Asimov.

kr0bat · 2 years ago
Going from discussing the curvature of the Earth to the finite speed of light wasn't to dunk on the student, it was to demonstrate the increasing relative correctness of our universal theories.
mcguire · 2 years ago
This is a very common thing in writers from the 1920s to the 60s: they made their living selling articles to magazines that paid a fraction of a cent per word. If they wanted to eat, they had to hammer something out as fast as physically possible.

LucasLanglois · 2 years ago
Interestingly, this highlights that, despite popular belief, the scientific process is not about being right or wrong but rather about what's powerful.

A theory is promoted because it is powerful in explaining observations and providing tools for humanity until it is improved by the next one. You can build a gun with gravitation but you need relativity for a rocket.

jgeada · 2 years ago
Quoting from the article "Newton's theory of gravitation, while incomplete over vast distances and enormous speeds, is perfectly suitable for the Solar System. Halley's Comet appears punctually as Newton's theory of gravitation and laws of motion predict. All of rocketry is based on Newton, and Voyager II reached Uranus within a second of the predicted time. None of these things were outlawed by relativity."

We need relativity to correct the atomic clocks on GPS satellites so as to maintain positional accuracy to a few feet, but getting the satellites up there needed only Newtonian mechanics.
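
For scale, a back-of-the-envelope sketch of the size of that correction (a sketch in Python using standard approximate values for Earth's gravitational parameter and the GPS orbital radius):

  GM = 3.986e14        # Earth's gravitational parameter, m^3/s^2
  c = 2.998e8          # speed of light, m/s
  r_earth = 6.371e6    # Earth's surface radius, m
  r_gps = 2.656e7      # GPS orbital radius, m
  day = 86400          # seconds per day

  v = (GM / r_gps) ** 0.5                                # orbital speed, ~3.9 km/s
  special = -(v**2 / (2 * c**2)) * day                   # ~ -7 us/day: moving clock runs slow
  general = ((GM / r_earth - GM / r_gps) / c**2) * day   # ~ +46 us/day: weaker gravity, runs fast
  drift = special + general                              # ~ +38 microseconds/day net
  print(drift * c / 1000)                                # ~11 km/day of ranging error if uncorrected

Roughly 38 microseconds a day sounds tiny, but multiplied by the speed of light it is kilometers of position error per day, which is why the correction is baked into the satellite clocks.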

feoren · 2 years ago
All models are wrong; some models are useful.
pixl97 · 2 years ago
> the scientific process is not about being right or wrong but rather what's powerful.

Ugh, this is something that bothers me all to hell and back about the non-scientific minded.

"Your science was wrong about (this minute detail), therefore whatever completely made up bullshit I thought up without any supporting evidence is totally valid"

beebmam · 2 years ago
Asimov is such a poignant writer that I can't help but smile when I read him.
uranusjr · 2 years ago
I'm glad he decided to invent an English Literature correspondent to pick on, and also that the 1990s were a better society than one that votes to poison a disliked person.
zafka · 2 years ago
It is also funny that he gave this character the name of one of his editors, whom he actually disassociated from when said editor started going a bit off the rails.
dang · 2 years ago
Related:

The Relativity of Wrong (1989) - https://news.ycombinator.com/item?id=29811788 - Jan 2022 (5 comments)

The Relativity of Wrong (1989) - https://news.ycombinator.com/item?id=24055125 - Aug 2020 (2 comments)

The Relativity of Wrong (1989) - https://news.ycombinator.com/item?id=17818069 - Aug 2018 (11 comments)

The Relativity of Wrong (1989) - https://news.ycombinator.com/item?id=13082585 - Dec 2016 (16 comments)

The Relativity of Wrong by Isaac Asimov (1989) - https://news.ycombinator.com/item?id=11654774 - May 2016 (60 comments)

Isaac Asimov: The Relativity of Wrong (1989) - https://news.ycombinator.com/item?id=9629797 - May 2015 (138 comments)

Isaac Asimov - The Relativity of Wrong (1989) - https://news.ycombinator.com/item?id=1147968 - Feb 2010 (32 comments)

fnovd · 2 years ago
As much as I enjoy Asimov, I have to say that he is wrong. The gap between what we know and what is true might have decreased immensely, but it is still infinite. Any quantifiable increase is 0 in relation to infinity. Asimov's counter-argument that we are quantifiably less wrong than we were in the past simply does not overcome this core issue. If there is an infinite amount of knowledge separating what we know from what is true, then we can learn an infinite amount of things and still have an infinite amount of things left to learn.

To feel justified in thinking the universe is "essentially" understood is to be OK with one's concept of the "essential nature" of the universe being inherently divergent from a future concept, which, according to Asimov's own argument, is going to be more correct than our own.

To me, it reads as a bitterness towards mortality, a sort of sour grapes: the insights we will have about the universe at some future time must not be very interesting compared to what we know now, because I won't be around to know them.

edit: I guess I shouldn't be surprised that Asimov's perspective is shared here. It's very easy to understand the essential nature of the universe when you define the universe as the parts you understand.

I don't think human beings in 1000 years will look at our current understanding as special in any way. As transformative as our era is, it will be dwarfed by the scale of transformation in future eras. It's just the most transformative era so far. That's temporal bias, nothing more.

cnity · 2 years ago
One way to see Asimov's "infinity of wrongness" is perhaps as a fractal. You could view the bulbs of the Mandelbrot set as kinds of knowledge, and the main bulb, which occupies the majority of the set's area, as the set of truths known about our universe. The Mandelbrot set is infinitely complex, yet its area is finite and bounded!
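
One reason the finite-area claim is safe, in symbols: the set is contained in the disk of radius 2, so

  M \subset \{\, c \in \mathbb{C} : |c| \le 2 \,\} \implies \operatorname{area}(M) \le 4\pi \approx 12.6

and numerical estimates put the actual area near 1.5, despite the infinitely intricate boundary.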

Or as ironing out the wrinkles on a great big t-shirt, where each wrinkle is sub-wrinkled with smaller wrinkles and so on. We've "ironed out" the biggest wrinkles; there are infinitely many more, but they are much smaller. We're perhaps over halfway ironed, in a quantitative sense.

fnovd · 2 years ago
I disagree fundamentally. You may as well ask me to imagine Earth as a disc, with multiple rotating spotlights shining down on it and a giant ice wall around the edges. I understand what the image is trying to convey, I simply do not agree that this is the shape of the thing I experience.
jgeada · 2 years ago
I think you missed the entire point of the essay. You should actually read it.

Science is incremental; revolutions in science mostly just adjust the edges of our knowledge, at more and more extreme corner cases (extremely high energies, extremely high or low temperatures, etc.). No, we absolutely don't know it all, but as always, new knowledge and theories will only affect those edges, refining the predictions at the (n+1)th decimal place.

By and large, the science that directly affects our daily lives has remained stable, and most progress has been in the engineering needed to put all this knowledge to practical and efficient use.

mistermann · 2 years ago
I like how in English one can make it appear as if one is skilfully and logically dismissing an argument without actually even trying to. I don't presume that this was your intent, but the phenomenon is quite interesting and, I quite confidently believe, dangerous: it contributes to inaccurate models in the minds of those who ingest such text, and those models are what drive action, much of which is harmful. This is easy to see in {choose your outgroup}, but far less easy to see in one's ingroup.
ndsipa_pomu · 2 years ago
I think your use of infinity isn't particularly helpful here as it leads to the contradiction that knowing more doesn't lessen the knowledge gap, whereas it does appear to do so.

Maybe a better interpretation would be that as we learn and understand more, we approach the limits of knowledge. Now, it may take an infinite amount of learning to actually reach those limits (cf. an infinite series can approach a finite value, yet take forever to get there), but it can still be shown that we are getting nearer.

The other aspect is that as we understand more, we appreciate that there's even more to understand, but that can be thought of as our precision increasing and looking at the available knowledge in greater detail.

fnovd · 2 years ago
There is no contradiction because there is no limit of knowledge.

The limit of y = e^x is infinity. You can keep increasing x and y will increase exponentially. So you plot the function, let's say with the x axis going up to 10 and the y axis going up to e^10. The graph shows very clearly that, while there was progress before, we have even more progress now. Exponentially more, even! What came before is dwarfed by what we have now; if you look at the range of y for x values 9-10 you can see how little of a difference all those others values (1-8) had between one another, compared to the changes we have now. The rate of change is so high that we're basically in an era of semi-complete knowledge. We must be at some kind of inflection point, this is truly a unique era of understanding.

Then you repeat. Set the x axis to 100, and the y axis up to e^100. Oh wait, it's the same graph. That's because it's always the same graph. It's scale-invariant. The slope at every point is always whatever y is.
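
In symbols, the self-similarity amounts to

  e^{x+c} = e^c \, e^x, \qquad \frac{d}{dx}\, e^x = e^x

so shifting the viewing window along x merely rescales the y-axis, and the slope at any point equals the height there: from its own vantage point, every era sits on the steepest part of the curve seen so far.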

We're always right at the limit of explaining the "essential nature" of the universe, because the "essential nature" of the universe can only be (to us) what we can understand it to be. We chose e^10 as the limit of the y-axis in our first exercise because that's all the knowledge we knew about. We chose e^100 as the limit of the y-axis in our second exercise because that, too, was all the knowledge we knew about. Choosing these random values as the limits of our function (i.e. the limit of the "essential nature of the universe") leaks information into the visualization that will always paint a picture showing that we're at the most transformative time there ever was.

When we do it that way, we will always come to the same _wrong_ conclusion. We will always dwarf what came before and be dwarfed by what comes after. To think that we're actually living in an inflection point is hubris, it's wishful thinking, it's the sour grapes of mortality.

kubanczyk · 2 years ago
> The gap between what we know and what is true might have decreased immensely, but it is still infinite.

The other two commenters have taken different approaches to infinity, but it seems that your argument doesn't hold even for a plain-as-in-real-numbers infinity.

Being satisfied with finite knowledge gains, I have no hope of achieving 1% of infinity (or any other fraction of infinity).

Suppose the universe is infinite in size (another assumption). If I flew to Tenerife on vacation, the quantifiable shift of my position by a mere few thousand miles would be "zero in relation to infinity". Yet it is hardly unimportant for my rest. Talking about infinity doesn't automatically cancel all the finite measurements and bring them to zero.

nathan_compton · 2 years ago
One might plausibly assert that while the particulars of the universe may be infinite, the fundamental rules which govern the universe are finite and thus at least in principle entirely knowable. While I don't think the 20th century makes an airtight case for the latter, I think it isn't an unreasonable conclusion to draw from 20th-century physics either.
ndsipa_pomu · 2 years ago
It's surprising that we find ourselves in a universe which does appear to obey certain laws. There's a whole bunch of assumptions made to help us understand how things work and it turns out they're mostly correct. i.e. It's more astounding that we CAN understand the universe and how well maths can act as a model/language to understand it.
glitchc · 2 years ago
Well, we can start by asserting that we do not know everything. We can argue via the contrapositive: if we did know everything, then we would have an acceptable solution to all of our problems. Since we do not, it stands to reason that we don't know everything.

Once we accept that the assertion is valid, it raises the likelihood that our understanding of the world is incomplete in some way, and furthermore incomplete to different degrees along multiple dimensions of knowledge. Whether we say "incomplete" or "wrong" is a word choice; it doesn't change what's missing. So with each new discovery our understanding improves and our wrongness decreases.
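
Spelled out, the argument is a modus tollens. Writing K for "we know everything" and S for "we hold acceptable solutions to all our problems":

  (K \to S), \;\; \neg S \;\vdash\; \neg K

The reply below attacks the premise K -> S rather than the inference itself.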

cnity · 2 years ago
> If we did know everything, then we would have an acceptable solution to all of our problems. Since we do not, it stands to reason that we don't know everything.

This can be refuted by counterexample: it may be possible to _know_ that one of our problems has no solution, in which case complete knowledge would not deliver a solution to every problem.

PrimeMcFly · 2 years ago
It's amazing the number of people who dismiss current findings by claiming we were wrong in the past, so nothing can be trusted.
ilyt · 2 years ago
I think people just yearn for certainties and unchangeable anchors in reality, which is why beliefs that everything was designed and made by one constant being or force of nature are so prevalent.

"You told me this is reality and now it changed, why I should believe in it again?"

Schools don't do enough to get into people's heads how science works, and how it is less "discovering the truth" and more "narrowing the uncertainty about how stuff works".

LeifCarrotson · 2 years ago
Absolutely!

> "You told me this is reality and now it changed, why I should believe in it again?"

Because that's not what "belief" ought to mean. When you were told that scientific thing in the past, you should have been told that it was believed to be that way based on a flawed collection of evidence, analyzed and synthesized by imperfect intelligences, which should have led to that postulate about reality carrying an asterisk with whatever degree of confidence it deserved.

If that uncertainty is something you can't accept, maybe science (and reality) is not for you. Try mathematics instead.

scns · 2 years ago
Well. For me, Immanuel Kant's (1724-1804) "Ding an sich" (the thing in itself) lays the groundwork for modern science. My translation would be: "Since our senses are so easily fooled, we will never grasp the thing in itself." IMAO a reasonable scientist says: "We can only say what is least likely wrong." Can you win elections with sentences like these?
alexvitkov · 2 years ago
I'm dumb and ignorant, therefore we all are.
irrational · 2 years ago
Well, yes, but some more than others.
somat · 2 years ago
It is an overreaction, but it is true in both fields.

Scientific: scientific knowledge of a subject is never settled; it merely approaches the solution. It was wrong in the past, it is less wrong today, but it is still wrong.

Cultural: The cultural truths of today were wrong in the past and will be wrong in the future.

I think the correct takeaway is not so much "nothing can be trusted" as it is "trust, but verify".

feoren · 2 years ago
There is no parallel between scientific knowledge and whatever you're calling "cultural truths". To the extent that there is lasting progress in culture, it is through art, music, and literature, not through "truth". Truth is the realm of science alone. Yadda-yadda genital mutilation: that is not culture, it's ignorant cruelty, and science can indeed show that, by showing that it causes health problems, causes pain even in babies, brings none of the claimed benefits, etc. Most of the old-hat cultural-relativism arguments come down to plain-old scientific ignorance, just as if there were cultures out there that still believed the Earth is flat. Those are not "cultural truths", they are scientific ignorance.

> "trust, but verify"

We're talking about the collective knowledge and understanding of all of humanity. How are you going to verify for yourself the value of the Planck constant and the exact curvature of the Earth and the exact age of the Universe? Are you going to build your own $50-billion particle accelerator to verify the existence and strength of the Higgs field? Through decades of study you may get to the point where you can indeed verify one of the millions of foundational truths that we build further knowledge upon.

"Trust, but verify" doesn't belong here. You should not "trust", nor can you possibly "verify". It's more like "understand". These truths must fit together cohesively, or if they don't, it should be glaring unsolved problem (e.g. gravity vs. quantum mechanics). Test new ideas against the rest of the theory to relentlessly look for contradictions or poor fits. Develop your mental model. You never need to really "trust"; everything has a wrongness error-bar around it. The wrongness error-bar of the shape of the Earth is very small; it includes subtle corrections due to slight variations in the gravitational field and the pull of other bodies, but it does not include a flat Earth.
