speak_plainly · 2 years ago
After spending many years studying philosophy, I think the right approach is accepting that you don’t need to come to any conclusions about anything.

Aristotle often started with common people’s viewpoints on a given topic, along with what the experts thought, and built his own ideas from there; he was, of course, very successful with this methodology.

Arguments are interesting, and obviously some are more correct than others, but everyone has good reasons for what they argue, and there is a genius to the collective thought of humanity, even if it seems like insanity most of the time.

The best starting point is one of ignorance: ask other people what they think. You don’t need consensus, but you should be looking for clues, solid ground, and inspiration.

Everyone should get more comfortable admitting, and saying out loud, that they truly ‘don’t know’. Taking a position of ignorance will open the doors to real possibilities and actual progress.

godelski · 2 years ago
To speak to this directly: in a scientific setting, I find one of the most beneficial things I can do when exploring a new domain is to NOT read the existing work first, but instead to first ponder how I would go about solving the problem, maybe make some naive attempts, and THEN read. An iterative cycle of this pays off because of a few typical outcomes: you spend more time, but you generally come away with a deep understanding of the motivation and of why the field has moved in the direction it has (which helps you understand where to go next!), and/or along the way you find assumptions that were made to simplify a problem but have since been forgotten and can be readdressed (at worst, you come away with a deeper appreciation for the naive assumption).

I do think this process is slower, but I find it frequently results in deeper understanding. It is (for me) faster if your intent is to understand, but slower if your intent is to do things quickly. The typical method of reviewing to catch up, without the struggle, generally leaves me without a good understanding of many of the underlying assumptions or nuances at play (I think this is a common outcome, beyond my own experience). YMMV, and it may vary depending on your goals in a specific area and your cross-domain knowledge.

But I take great pleasure in it, because it causes me to see many things as beautiful and ingenious where I would otherwise have seen them as obvious and mundane. That alone is enough for me: research is a grueling task with endless and frequent failure, and this keeps me motivated and keeps things “fun” (in the way I think Feynman was describing). It’s not dissimilar from doing your homework, checking someone else’s work or a solution manual, and then trying to figure out where your mistakes are rather than simply correcting them. That said, time constraints are a pain, so this process isn’t always possible, and certainly not as often as I’d like.
mistermann · 2 years ago
Sounds quite similar to this anecdote from Feynman about his learning path:

https://literature.stackexchange.com/questions/8691/in-what-...

ziroshima · 2 years ago
Agree completely. I think the author misses the big insight here. Rather than universal deference to authority, I think the insights here are a) that people overstate their confidence, and b) that not all science should be treated equally.
bluetomcat · 2 years ago
The epistemic standpoint of rationality (particularly Cartesianism) assumes a static arrangement of knowledge, where one uses analytic reason to gradually unveil bits of it like finding new territory on a map. It is rooted in analytic geometry.

David Hume challenged this view. His main insight was that an object we call "A" at time T1 may not be the same object at time T2. We also need to distinguish between "A" as an idea of an object and "A" as a particular instance of an object.

voidhorse · 2 years ago
Along similar lines, critiques of Cartesianism in epistemology have also pointed out the heavily social aspects of knowledge construction, see situated epistemologies etc. Even epistemologists in the analytic tradition have begun to move away from Cartesianism due to its limitations.

TBH, taking an epistemic stance that's primarily Cartesian these days mostly just shows that you're (likely) ignorant of basically the entire history of development and research in epistemology after Descartes. Cartesianism is a very useful perspective and method for certain things, but as a general epistemology it's quite crusty.

JohnFen · 2 years ago
> I think that the right approach is that you don’t need to come to any conclusions about anything.

This reminds me of a phrase I often find myself using with people: I am not required to have an opinion about everything.

ineptech · 2 years ago
This is probably true, but not very helpful. We can shrug over historical curiosities, but in a lot of cases we have to make a decision.

Consider Linus Pauling's claim that you can prevent cancer with megadoses of vitamin C. It was never widely accepted, but Pauling is a titan of science with two Nobels and he wrote books with convincing-sounding arguments, so it's tempting to think maybe this is a case where the status quo is wrong (esp. if you have cancer).

I think that's the sort of thing Alexander is trying to navigate here - no matter how comfortable you get saying "I don't know", at the end of the day you need to take the vitamins or not take them.

speak_plainly · 2 years ago
The same argument can be turned against you: someone can wait for absolute expert certainty and die in the process due to lack of action. We all act pragmatically when making decisions, and simply trusting people blindly is no way for a thinking, intelligent being to live.

I was not suggesting the sort of Buridan’s ass scenario where everyone is so gripped by ignorance that they can’t act. I’m suggesting instead that ignorance is a starting point: drop your preconceived notions and opinions, or at the very least challenge them, and don’t be afraid to come to no conclusions at all; keep everything open. There is gold in the common opinions of people and in the arguments of experts, and no one has a monopoly on truth.

You don’t need to work yourself up to absolute certainty about the world to make a decision. You don’t need to blindly trust experts, you don’t need to be gripped by fear of uncertainty, and you don’t need to be forced into action by arguments. You really don’t need to do anything at all. There are bigger questions to ponder and a life to live that’s worth living.

philip1209 · 2 years ago
This article talks about learned helplessness in a learning context. I talked about it in a work context, and the two could be linked. I think social media is training people to expect everything to be quick, but learning + work aren't necessarily quick.

> This insistence on constant availability disrupts the essence of focused work. Rather than encouraging employees to tackle complex problems independently, there’s a trend, especially among junior staff, to quickly seek help upon encountering any obstacle. The fear is that being “blocked” under-utilizes an expensive team member. However, the nature of knowledge work is solving ambiguous, complicated problems - so the expectation of constant availability can lead to a culture of learned helplessness, which stunts professional development.

https://www.contraption.co/essays/digital-quiet/

spencerchubb · 2 years ago
As a junior dev, sometimes I've been blocked for hours because I want to show that I can solve problems independently. But I've definitely had cases where I should've asked questions early. A question that takes 1 minute to answer could have saved hours.

For instance, I was stuck on figuring out how to send an email through AWS. Turns out we have a lambda function that handles all the authentication and security things that are specific to our company. Once I asked my coworker and found out about this function, it was trivial.
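
For illustration, a minimal sketch of what calling such an internal wrapper might look like with boto3. The function name and payload shape here are hypothetical, not our actual setup:

    import json
    import boto3

    # Hypothetical internal lambda that wraps SES plus company-specific
    # auth and security handling.
    client = boto3.client("lambda")
    response = client.invoke(
        FunctionName="internal-send-email",  # hypothetical name
        Payload=json.dumps({
            "to": "user@example.com",
            "subject": "Hello",
            "body": "Sent via the shared email lambda.",
        }),
    )
    print(json.load(response["Payload"]))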

ZephyrBlu · 2 years ago
The distinction between trying to solve a problem yourself and asking for help is: are you figuring out a new problem, or figuring out the system?

If you're trying to figure out the system, you should probably ask for help ASAP and build up some knowledge. Over time you'll need to ask questions about the system less frequently.

If you're trying to figure out a new problem related to your work, slogging away at it for a while is more OK, because it's value-add and a good learning experience.

The lines are a bit blurry, but that's how I tend to think about this.

eminence32 · 2 years ago
As you get more and more experience, I think this will become easier. You'll eventually get a feel for what types of questions you should just work through yourself, and which types of questions you should ask someone else about, and the threshold for moving between the two (you'll develop a sense of "let me investigate theory A, B, and C, and if I still can't figure it out, I'll ask for help").

Some of this will probably be via an increased understanding of the types of problems you can and can't solve. And some of this will probably be because you'll eventually know more people who can help (something like "oh yeah, I know Sam worked on a similar problem last month, let me see if they found a solution")

NhanH · 2 years ago
The nuance here lies within the “hours” quantifier.

A junior member being blocked for 3-8 hours semi-regularly is expected. Once every other month you probably should be blocked for several work days as well.

In your example, the challenge would have been realizing that you wouldn’t be the first person on the team trying to send an email. Once you recognize that, it’s very easy to see that you should just ask the question right away. It’s not a technical skill, but something a bit more meta.

SkyBelow · 2 years ago
If you lost hours digging through the code and trying things to solve the problem, they aren't really lost. The knowledge you gain from this, the code you dug through, etc. all help build up the experience that leads to one day no longer being a junior. Being given the answer is fast, but it doesn't lead to as much learning, especially in a career where knowing how to find answers is more important long term than knowing the answers.
nerpderp82 · 2 years ago
Being blocked for hours is you learning, and that's totally OK. If one is operating open-loop and stabbing at configuration options in a brute-force manner to "make something work quickly", that isn't learning. Not saying you did this, but I do see it in a number of folks. They don't learn, and they don't form a model of the system. Those folks can be replaced by an LLM and Z3.
Swizec · 2 years ago
As a tech lead I would rather see a team member ask me questions early than never. Nudging them in the right direction before they waste 3 days going down a path that’s never going to work is at least half my job.

But it’s important that they’ve done enough research to formulate the question. A little struggle helps the learning stick.

freedomben · 2 years ago
I agree, but in practice I've come to see that this is an extremely difficult point to discern. Because the vast majority of findings flow in as a trickle, there often isn't a large enough delta to identify as a launching point for asking for help: if one continues to search, they will keep finding breadcrumbs leading them toward the solution. It is painful to me when someone spends 3 days looking for a solution to a problem that is very custom and unique to our system (so they're not going to find the answer anywhere on the internet), and one I could solve in 5 minutes. But you don't know what you don't know, so for a new person learning the system it is almost never clear at what point it is ideal to ask for help. Compounding all of this, many personalities don't like to be bothersome to other people, which can cause them to hesitate to ask for help, sending them further down the trail of small deltas. It's a very hard problem.
steveBK123 · 2 years ago
My problem is that I see this most frequently with debugging. It's like no one knows how to debug anymore. Read a runbook, google an error, try a few things... no, just pester a senior.

When I find myself responding to juniors/mids with the same list of rote, problem-agnostic runbook responses... and it actually helps them, it's unnerving. It's like the Socratic method of debugging, without them actually learning anything from the experience.

arcbyte · 2 years ago
Agree. I think about 4 hours is a good rule of thumb for how long you should be stuck before getting help.
meindnoch · 2 years ago
Cool! I'll send them to you then :-)
evnc · 2 years ago
Yeah, it's a balance. I love being able to help, and I am generally in favor of asking questions early, but not ones of the form "hey so I ran this code and it errored. Help?"

"... did you read the stack trace? Did you look at the code referenced by the stack trace?"

This is where I've learned responding with "Sure! What have you tried so far?" is relevant.

astura · 2 years ago
This just has not been my experience at all.

I've never had a problem with a junior asking too many questions. Never.

I have, however, had issues with them not asking enough questions.

spenczar5 · 2 years ago
Right. I never got upset with a question. The only issue is getting the same question multiple times.

Not necessarily the same question verbatim, by the way. When I answer a question, I am trying to “teach to fish,” and so there is some system that I am explaining. My hope is that the asker will show curiosity - ask follow-up questions - and then be able to generalize. “I learned there was a lambda for sending emails, in the sysops repo. Maybe there is a lambda for sending slack messages in there too?”

Software systems are imperfect so the generalizations might break. In this case I want another question quickly, like “I couldn’t find a slack message sender like the email one. Does one exist?”

kdmccormick · 2 years ago
I have had both problems. As it turns out, unsurprisingly, it varies from person to person.

It can also vary within a single person. They might, for example, ask questions too quickly when stuck on a technical question that could be solved by reading docs, but ask questions too slowly when stuck on an ambiguous product requirement that only the PM or UX person can answer.

rincebrain · 2 years ago
Asking questions quickly is optimal for resolving the immediate problem, but not necessarily for understanding the components involved in the answer, or for the time of the person being asked.

Even if you write off the latter as ~free, the former is a significant benefit to someone familiarizing themselves with a new environment. Even walking the junior person through the process of reasoning their way to the answer probably won't deliver the same benefits as working it out themselves, because the two of you likely don't think exactly the same way.

Of course, it's an immediate-versus-long-term tradeoff too, and a personal one for when to ask. Given how many people flatly refuse to ask questions because they've been trained to think it makes them look bad, it probably makes sense to aggressively incentivize asking by default. But there are real benefits to spending time on a problem yourself if you have enough tools to bootstrap your way into more understanding, and I have met enough people who never understood things beyond how to look up the answer from someone knowledgeable to think this isn't also something to be concerned about.

shadowgovt · 2 years ago
Flipping the script a bit:

As a senior developer, avoid cultivating learned helplessness. You can push back on this in a couple ways:

1) Instead of answering questions, give a nudge to where the solution is documented ("Hey, I'm swamped, but I'd recommend checking X for more info. If you haven't read through X yet, it's a good resource to skim"). Keep tabs on how much your junior team members are expected to know, and nudge them harder if they aren't taking the time to ingest the gestalt of what's there (it feels like a waste of time sometimes... reading isn't getting code written. But knowing what's already there saves work in the long run).

2) When something isn't documented... review the docs a junior team member writes; don't write them yourself. This both encourages them to take ownership of the system and will probably generate better docs in the long run (everyone has a notion of what docs should look like, but communication is two-way: seeing what someone else writes down clues you in to what you didn't realize needed recording. Can't tell you how many docs I've seen for cloud systems, for example, that assume the user is logged in with proper credentials, when that step alone usually requires handshaking across three services and maybe an IT team).

3) Prefer mistakes to silence. Don't bash a team member for making a correctable mistake, even in production; use it as a learning opportunity both for them and for you (if that mistake was possible, you're missing a guardrail). Actively communicate to junior members that wrong code that exists is preferable to no code; wrong code can be talked about, while no code is work yet to be done. And be aware that for a lot of junior devs, the reaction to making a visible error is like touching a hot stove; cultivate an environment that minimizes that hot-stove reflex.

tstrimple · 2 years ago
I agree with all of this, with one minor quibble: I'd never tell someone I'm in a position to mentor/coach/lead that I'm swamped. It's probably true, but that's my problem, not theirs. I don't want them avoiding talking to me because they think I'm too busy. I know that was an example statement and not necessarily something you're endorsing, but I thought the point worth bringing up.
spenczar5 · 2 years ago
This is just all excellent. There are a lot of strange attitudes elsewhere in this comment section, ones that I don’t recognize from good senior engineers. Good ones realize that a lot of their job is improving the whole team. If junior engineers are constantly asking trivial questions, maybe they need to be taught to learn!
steveBK123 · 2 years ago
Yes, and I think some of the neediest juniors (or seniors who behave like juniors) have now substituted in ChatGPT for some of their nagging.

The ones I see doing this most heavily are not really developing themselves and improving in any meaningful way.

At least a Stack Overflow thread will be filled with alternative solutions, arguments, counterpoints, and caveats.

ChatGPT leaves the questioner with the illusion that they have received the one good answer.

picometer · 2 years ago
Summary: Scott Alexander recounts his gullibility toward various well-reasoned crackpot arguments on a topic, and describes how he decided to trust experts instead of investing time into learning enough to assess the topic for himself. Then he reflects on the nature of argument-accepting and its relation to rationality.

I don’t think the term “learned helplessness” fits well here. It suggests a lack of agency, whereas he exercised much of it, employing his skill of critical thinking to arrive at the right epistemic stance.

A better term might be “bounded agency”, to pair with the concept of “bounded rationality”. We recognize that we cannot know everything, and we choose how to invest the capability and resources that we do have. This is far from any type of “helplessness”.

bluetomcat · 2 years ago
He talks about the pitfalls of pure rationality. There can be competing explanatory frameworks for the same thing, and they often contradict each other. Rational arguments may seem rigorous like math, but are in practice standing on shifting sands.

It ultimately comes down to what you decide to believe in. This is where traditional values and religion come into play.

joe_the_user · 2 years ago
Yes. It's not "gullibility", it's believing things via the mechanism of standard argumentation.

The basic thing is that arguments involve mustering a series of plausible explanations for all the visible pieces of evidence, casting doubt on alternatives, etc. Before Galileo, philosophy had a huge series of very plausible explanations for natural phenomena, many if not all of which turned out to be wrong. But Galilean science didn't discover more by getting more effective arguments; it did so by looking at the world, judging models by their simplicity and ability to make quantitative predictions, and so on.

Mathematics is pretty much the only place where air-tight arguments involving "for all" claims actually work. Science shows that reality corresponds to mathematical models, but only approximately, and so a given model-based claim can't be extended with an unlimited number of deductive steps.

fooop · 2 years ago
I, for one, am glad that the rationality-bubble is popping.
picometer · 2 years ago
A further thought that is too much for an edit… one of Alexander’s final conclusions is:

> I’m glad that some people never develop epistemic learned helplessness, or develop only a limited amount of it, or only in certain domains. It seems to me that […] they’re also the only people who can figure out if something basic and unquestionable is wrong, and make this possibility well-known enough that normal people start becoming willing to consider it.

I think there’s better framing here as well: he is glad that a few people direct their own bounded resources towards what I’d call high-risk epistemic investments.

I’m also thankful for this. As a species, we seem to be pretty good at this epistemic risk/reward balancing act - so far, at least.

thadt · 2 years ago
> The medical establishment offers a shiny tempting solution. First, a total unwillingness to trust anything, no matter how plausible it sounds, until it’s gone through an endless cycle of studies and meta-analyses.

Isn't this just... science? We learned from the ancient philosophers that really smart people can reason powerfully about a whole lot of things. But - if we want to know whether that reasoning holds up, it has to be tested. Modern science is the current unit testing framework for philosophy. At least, for the thoughts that lend themselves to unit testing.

broscillator · 2 years ago
This is all very noble on paper.

Yes, people can be persuasive in twisting the truth.

And testing is good for arriving at proof.

But what the ancient philosophers didn't count on is that there is a machine in charge of what gets tested and how, and that choosing what and how things get tested can itself be twisted to persuade. I can develop and thoroughly test my own drug, then persuade you that a much cheaper chemical is not tested enough (because I blocked all attempts at testing it), and then point to the noble principles of ancient science as to why things need verification.

In other words, the current world is teaching us that really powerful corporations can test selectively about a whole lot of things.

ilovetux · 2 years ago
You are correct, it is science.

The differentiator that I see is that the medical community is under pressure to apply the most cutting-edge science in real life-and-death scenarios. They are uniquely positioned to be burned by trusting a new hypothesis that appears correct but is subtly, completely wrong, and that could absolutely cost someone their life.

AnimalMuppet · 2 years ago
But the alternative, waiting to apply new-but-actually-correct hypotheses until they are totally, absolutely proven, also costs lives.

And as is usual with "Type 1" vs "Type 2" errors, saving lives by avoiding one problem costs lives due to the other problem. The trick is to sit in the minimum of "lives lost due to Type 1 errors plus lives lost due to Type 2 errors". Unfortunately, that's not an analytic function with a known formula and computable derivative...

gnramires · 2 years ago
Science is more than just testing, although testing is an important part of science. I believe the reasoning aspect can, in principle, be made strong enough to be robust, and almost independent of experimental confirmation (essentially from analyzing already existing data).

For example, in Machine Learning you can trust a model to work well in the context of a data stream simply by using (cross) validation, and so on (in general, avoiding overfitting, and assuming the data stream will continue to resemble your training data without changing too much).
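
(A minimal sketch of that idea with scikit-learn; the dataset and model here are placeholders:)

    from sklearn.datasets import make_classification
    from sklearn.linear_model import LogisticRegression
    from sklearn.model_selection import cross_val_score

    # Placeholder data standing in for "the data stream so far".
    X, y = make_classification(n_samples=1000, n_features=20, random_state=0)

    # 5-fold cross-validation: each fold is held out once and scored,
    # estimating generalization to unseen data from the same distribution.
    scores = cross_val_score(LogisticRegression(max_iter=1000), X, y, cv=5)
    print(scores.mean(), scores.std())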

Mathematics is an example of a non-experimental science as well: in a way, we can perform purely theoretical experiments where our 'tests' are given by internal consistency. The equivalent of experiments in mathematics, for ruling out incorrect statements (inconsistent theories), are proofs and (almost tautologically) the concept of consistency. I think there is some risk that even within math this process derails, but only when we lower too far the standards for what constitutes a proof. And the concept of proof is so solid that we can automate it and computer-verify proofs. As long as we can (and do) verify what we create, in a sense we can't (or are extremely unlikely to) go off the rails.
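
(A tiny illustration of a machine-checked proof, in Lean 4; the lemma Nat.add_comm is from the core library:)

    -- Lean mechanically verifies this proof; a bogus proof term
    -- would be rejected at compile time.
    theorem add_comm' (a b : Nat) : a + b = b + a :=
      Nat.add_comm a b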

I think relativity is a good example of a theory that was largely conceived from mathematical principles (from a few observed essential properties of reality). I think in the future this kind of approach will be highly significant to physics, math, philosophy and even social sciences.

It is true though that experiments (and just real life) are very useful to 'keep us in check' in a general, less rigid way as well.

Honestly, I think those two (consistency and proof) are fantastic tools, fundamentally connected to the notion of truth.

bloaf · 2 years ago
Right, and I think the distinction he draws between this and engineering is interesting.

My experience (as an engineer) is that engineers get paid to manipulate the world around them in precise, predictable ways. As such, they tend to cultivate a core set of relatively simple principles (e.g. heat and material balances) which they take seriously, using them to plan and design their manipulations of the world. The engineer's creativity lies in finding new ways to combine the known-reliable principles, and they stake their reputation on the result.

Scientists, on the other hand, are expected to come up with new principles. If they take too many ideas seriously, there is no room left for creative new explanations.

Medicine lies somewhere in between, insofar as doctors are trying to manipulate patients' health with the best principles they can find, but they also have to wear a scientist's hat via diagnosis, i.e. figuring out which principles to apply to a patient.

And so the reaction to e.g. fundamentalism makes sense:

An engineer likes having a fundamental set of universal principles, and is comfortable using them to make plans for manipulating the world. They expect the religious world to work the same way as they do, and take those principles seriously.

A scientist wants to know "well I've got a new idea for a fundamental principle, how can we find out if I'm right?" And the fundamentalist has no answer.

A doctor will want to know how certain the fundamentalist is, and how they know those principles even apply to the current situation, and the fundamentalist will have no answer.

helicalmix · 2 years ago
One counterexample, and why I think it's not always the best approach: there are sometimes disproportionate rewards for being right when everyone else is wrong.

Doing so, however, requires you to take an idea seriously before it has gone through analyses so extensive that everyone else believes the same idea too.

scotty79 · 2 years ago
Medical thickheadedness sometimes has them performing useless, harmful procedures for many years after the harm became known. So it's not the virtue the article postulates.
josh_cutler · 2 years ago
Yes, but not all "scientific" disciplines adhere to this the way medicine does. Take for example Psychology, Political Science, or parts of Economics, where unreplicable studies in prestigious journals almost immediately become canon for new grad-student seminars.

N.B. I have direct experience with this in Political Science; for the other disciplines I have just anecdotes, so apologies if they are mischaracterized.

MichaelZuo · 2 years ago
It does seem like odd phrasing. It's correct to not fully trust anything until it's gone through a long verification process.
ketzo · 2 years ago
Don't you have to put some amount of trust in an idea to even consider verifying?

And have even more trust in the idea to perform your experiments in the first place?

Even in medicine, there has to be someone willing to challenge status-quo ideas, or there's nothing to feed into the "long verification process" in the first place. How do you decide when to be that person?

jodrellblank · 2 years ago
> "Like I mean that on most topics, I could demolish their position and make them look like an idiot. Reduce them to some form of “Look, everything you say fits together and I can’t explain why you’re wrong, I just know you are!”"

That isn't making them look like an idiot, reducing them, or demolishing their position. That is completely failing to demolish their position! While also failing to convincingly explain your position.

> "If you have a good argument that the Early Bronze Age worked completely differently from the way mainstream historians believe, I just don’t want to hear about it. If you insist on telling me anyway, I will nod, say that your argument makes complete sense, and then totally refuse to change my mind or admit even the slightest possibility that you might be right. (This is the correct Bayesian action: if I know that a false argument sounds just as convincing as a true argument, argument convincingness provides no evidence either way. I should ignore it and stick with my prior.)"

Maybe the correct action is to see that how the Early Bronze Age worked has so little effect on your life, no testable hypothesis to confirm one way or the other, that it doesn't matter which one you believe, or if you believe both (even if they are contradictory, that's a thing humans can do). Instead of doubling down on one, let go of all of them.

InSteady · 2 years ago
>Maybe the correct action is to see that how the Early Bronze Age worked has so little effect on your life, no testable hypothesis to confirm one way or the other, that it doesn't matter which one you believe, or if you believe both (even if they are contradictory, that's a thing humans can do).

This is a great point, and I'd suggest it is worth taking even further. Even for things that have moderate or substantial impact on your life, holding space for the possibility that multiple competing/overlapping explanations could be true can be an extremely valuable (if cognitively expensive) skill.

oh_sigh · 2 years ago
It also ignores the question of how the person even got their prior in the first place. Presumably they heard a convincing argument at one point and accepted that - but then later changed their standards to not accept convincing arguments. In fact, how do they even know "if I know that a false argument sounds just as convincing as a true argument, argument convincingness provides no evidence either way"? Presumably they were convinced of it at some point.
Jensson · 2 years ago
> Presumably they heard a convincing argument at one point and accepted that - but then later changed their standards to not accept convincing arguments

No, many people base their views on what their friends think; they never reasoned themselves into it.

It is rational to base your beliefs on your friends', and to keep those beliefs to ensure you can continue to fit in with them. That is less work and less risky than changing your mind. Only a few really go against the grain and try to think for themselves; it doesn't help them personally, but some people have to do it for the benefit of the pack.

circlefavshape · 2 years ago
> That isn't making them look like an idiot, reducing them, or demolishing their position

I think the quote is in Scott's opponent's voice, not his own.

> Instead of doubling down on one, let go of all of them

Yes. We don't need better ways to form opinions - the world would be a better place if we all just had fewer opinions

SteveDR · 2 years ago
I believe the parent comment is aware of your first point
_armchair · 2 years ago
> That isn't making them look like an idiot, reducing them, or demolishing their position.

I think he's saying that he can reduce his opponent to those words - i.e., the author's argument fits together, the opponent says "it's just wrong" and gets frustrated.

jodrellblank · 2 years ago
I agree that's what he's saying, and I'm saying that is not demolishing the opponent's position. Scott is going up to a castle, talking to the walls until he thinks they should collapse, then when they don't collapse he's declaring victory by "making the walls look idiotic" and telling people he "demolished the castle walls".

If you haven't actually convinced your opponent, and you haven't changed their mind, and you haven't understood their true objections, and you haven't presented a case convincing enough for them (arguing in good faith) to accept, you haven't won; there's an underwater iceberg chunk still missing.

Like, if I claim 2 is the largest possible number, and show you 1+1, and you say "1+1 is coherent and logical and fits together ... I can't place the flaw but something's not right and I still don't believe you", I can't go around reasonably telling people I demolished you with my proof that 2 is the biggest number and you look idiotic for not believing me.

Exoristos · 2 years ago
> believe both (even if they are contradictory, that's a thing humans can do)

It really isn't, though.

nineplay · 2 years ago
>engineering trains you to have a very black-and-white right-or-wrong view of the world based on a few simple formulae, and this meshes with fundamentalism better than it meshes with subtle liberal religious messages.

Back in the Usenet days it was taken as a given that any creationist was also an engineer. Creationism was nice and neat and logical, unlike that handwavy Big Bang thing that was probably dreamed up by woolly-headed academics with no practical experience.

jewayne · 2 years ago
I think of it as lowering the cognitive dissonance. Highly analytical individuals are going to tend to be more highly sensitive to any contradictions in their belief system. On which side you land might be remarkably random -- one might become an atheist, and another a Christian fundamentalist. The key is getting to a place where they see no contradictions.
FredPret · 2 years ago
Don't know about this one. Engineering taught me that reality is complex and nuanced and that success is defined on a spectrum, not zero or one.
marcosdumay · 2 years ago
Back in the Usenet days, everybody using it was either an engineer or a physicist, and every single one of those people had a bias about one of those groups being wrong more often than the other.
nineplay · 2 years ago
I wouldn't necessarily disagree with that. There were also professors of all stripes, and of course first-year students, who were wrong more often than everybody else.
0xdeadbeefbabe · 2 years ago
> Creationism was nice and neat and logical

Also requires less faith

nineplay · 2 years ago
To some degree the idea of a Creator is easier to wrap one's head around than the idea of nothing-->something.

Of course it really just rolls the question uphill since you're now faced with 'Where did the Creator come from?'. Happily there are no academic theories on this matter so there's no need to engage with it. Ultimately satisfaction doesn't come from the belief itself, satisfaction comes from the feeling of being Right while others are Wrong.

stuaxo · 2 years ago
How?
scotty79 · 2 years ago
Fantasy always requires less thinking than knowledge.

JackFr · 2 years ago
I have to admit a weakness for reading not-quite-crackpot-but-likely-wrong theories. In particular, I'm a big fan of Julian Jaynes and The Origin of Consciousness in the Breakdown of the Bicameral Mind, and of the aquatic ape hypothesis: https://en.wikipedia.org/wiki/Aquatic_ape_hypothesis

I get that they're probably not true, but I do enjoy reading novel thinking and viewpoints by smart people with a cool hook.

jerf · 2 years ago
I think if you want to start down that sort of road, it's important to read lots of them. Read zero, you're probably fine. Read lots of them, you're probably fine. "One or two" is where the danger is maximized.

And I would agree with "likely" wrong. Some of them probably aren't entirely wrong and may even be more correct than the mainstream; figuring out which is the real trick, though. Related to the original article, I tend to scale my Bayesian updates based on my ability to test a theory. Something like the Breakdown of the Bicameral Mind takes such a penalty from that heuristic that reading it is, for me, almost indistinguishable from reading a science fiction book: fun and entertaining, but it doesn't really impact me much except in a very vague "keep the mind loose and limber" sense.

I have done a lot of heterodox thinking in the world of programming and engineering, though, because there I can test theories very easily. Some of them work. Some of them don't. And precisely because it is so easy to test, the heterodoxy is often milder than in crackpot theories about 10,000 years ago; e.g., "Haskell has some interesting things to say" is made significantly less "crackpot" by the fact that plenty of other people have the ability to test that hypothesis as well, and as such it is upgraded from "crackpot" to merely a "minority" view.

So my particular twist on Scott's point is: if you can safely and cheaply test a bit of a far-out theory, don't be afraid to do so. You can use this to resolve the epistemic learned helplessness in those particular areas. It is good to put a bit down on cheap, low-probability, high-payout events; you can even justify this mathematically via the Kelly Criterion: https://www.techopedia.com/gambling-guides/kelly-criterion-g... If there is one thing that angers me about the way science is taught, it is the idea that science is something other people do, and that it is something special you do with either the full "scientific method" or not at all. In fact it's an incredible tool for everyday life, on all sorts of topics. One must simply adjust for the fact that the less effort put in, the less one should trust the result; but that doesn't mean your total trust must be uselessly low just because your experiment on whether fertilizer A or B worked better on your tomatoes wasn't up to science-journal standards.
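
(For concreteness, a minimal sketch of the Kelly fraction for a simple binary bet; this is the textbook formula, nothing specific to the linked guide:)

    def kelly_fraction(p_win: float, net_odds: float) -> float:
        # Standard Kelly formula for a binary bet: f* = (b*p - q) / b,
        # where p is the win probability, q = 1 - p, and b is the net
        # payout per unit staked. A negative result means no edge: stake nothing.
        q = 1.0 - p_win
        return (net_odds * p_win - q) / net_odds

    # A cheap, low-probability, high-payout bet: 5% chance of a 30x net payout.
    print(kelly_fraction(0.05, 30.0))  # ~0.018, i.e. stake ~1.8% of bankroll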

ineptech · 2 years ago
Same. I found a book back in college claiming (on the basis of some theory about the Egyptian pyramids) that if you made a pyramidal shape with certain dimensions out of cardboard, it would make plants grow faster and keep your razor blades sharp. I didn't believe it, but I did make one for fun. All my physics-major friends made fun of me for being gullible. I was like, isn't testing stuff what we're supposed to be doing here?

(It didn't work)

scotty79 · 2 years ago
Is there solid evidence against the aquatic ape hypothesis? The only argument I've seen is that it's unnecessary, because the multitude of previous explanations for every single feature work just fine, thank you very much.
thriftwy · 2 years ago
I can read a novel idea, get excited by it, remember it and return to it later without being convinced.
noqc · 2 years ago
I think this article is badly argued, but about a topic which interests me greatly.

There are basically three epistemologies: the constructive (mathematical, prescriptive), the empirical (scientific, emotional), and trust. The constructive and empirical epistemologies don't separate as neatly as we would like them to, but a constructive argument basically looks like: "here's a thing that you definitely believe, and here is an implication of that, therefore you believe the implication", a.k.a. modus ponens.

The empirical epistemology goes: "You have made a lot of observations, here's a simple explanation for all of them that you may go check", more or less the scientific method.

The trust-based epistemology is just: "If I can establish facts about you, then I can establish facts about the things that you claim to believe, without having to see the receipts."

Each of these epistemologies has its own definition of argument; they're all similar, but they're distinct, and the author isn't being clear about which one he means. In my estimation, the passages about ancient history reflect him mistaking trust-based arguments for empirical ones. This is a very common mistake.

An empirical argument is of the form "Here is the test that I used to differentiate my explanation from other explanations, and why I think this is a good test", whereas the trust-based argument for the same explanation would be "Here is the data that my theory explains".