I am very skeptical of research about software development, whether it's about tooling (static typing, etc.) or process (pair programming, etc.).
It's really hard (I would go so far as to say impossible) to set up useful metrics and it's really hard to create comparable scenarios. And even if you had those, very few things apply generally to all fields of software, let alone the different types of personalities that developers have, even if the stereotypes have a bit of truth to them.
Software "engineering" (little to no actual engineering in this area) is still in its infancy. Civil and mechanical engineering are millenia old and we still had the Tacoma Narrows Bridge less than a hundred years ago; electrical engineering is roughly a century old and things still self-ignite on the regular, and there are all kinds of weird and poorly understood quantum thingoes at the really small scales; software engineering is just decades old.
Everything that's written on the subject -- every bit of research, fad, and personal anecdote -- is just one more experiment flailing around in the darkness while we try to figure out what it means to "engineer" software.
I don't think that makes research about it valueless. Some is good, some is not, some will get revised over time because science and engineering gradually tack towards discovering natural laws. But also, yes, an enormous amount of human effort is wasted on following practices that are often not well-founded.
> Everything that's written on the subject -- every bit of research, fad, and personal anecdote -- is just one more experiment flailing around in the darkness while we try to figure out what it means to "engineer" software.
I think Ian Sommerville does a pretty good job in the book "Software Engineering" (I have the 10th edition), from the section 1.1.1:
> Software engineering is an engineering discipline that is concerned with all aspects of software production from the early stages of system specification through to maintaining the system after it has gone into use. In this definition, there are two key phrases:
> 1. Engineering discipline: Engineers make things work. They apply theories, methods, and tools where these are appropriate. However, they use them selectively and always try to discover solutions to problems even when there are no applicable theories and methods. Engineers also recognize that they must work within organizational and financial constraints, and they must look for solutions within these constraints.
> 2. All aspects of software production: Software engineering is not just concerned with the technical processes of software development. It also includes activities such as software project management and the development of tools, methods, and theories to support software development.
These quotes don't do it justice. I'd really recommend this book.
Wait, so just because there's fuzziness to the data you're going to instead defer to what? What's the proposed alternative/status quo that you're giving a data-less free pass?
A decision driven by bad data is, IMO, worse than a data-less decision.
Data grants a decision an authority that a gut-feel-driven one doesn't have. It is hard to argue against evidence, as it should be, but that assumes a certain level of quality in the evidence.
Second, if practice doesn't match the expected outcome, the first thing anyone will look at is what the team is doing wrong, not whether the decision itself is working.
That said, the parent is far from unique in his skepticism, so I think the problem is more often the reverse in the industry: having some data, even flawed data, can help your company decide to try something new.
There is more than some fuzziness to the data! At some point, measuring something sufficiently poorly is worse than not measuring it at all, and the empirical efforts I've seen to evaluate things I care about (programming languages, static types, etc) all fall into that category.
The alternative is to rely on rich experience and good taste. If you want to make it a bit more rigorous, you can approach this in terms of qualitative research—which makes sense for academic research, but isn't necessarily the best way to learn for yourself or to design tools.
Expert experience is far more effective at capturing complex, multi-dimensional phenomena than anything we can reduce to a small number of easily-gathered quantitative metrics.
I think "fuzziness" understates by far the wild variation that uncontrolled (and mostly uncontrollable) factors often create.
Very few of these kinds of studies follow the scientific method well enough to be even vaguely useful, let alone generally applicable. How many have you ever read of being successfully reproduced?
>Wait, so just because there's fuzziness to the data you're going to instead defer to what?
I'm constantly amazed at how people are willing to throw away data if it's not perfect. All you need is a bit of signal, and it's, literally, better than nothing.
> Hype: "Formal Verification is a great way to write software. We should prove all of our code correct."
> Shower: Extensive literature review showing that formal methods are hard to learn, extremely expensive to apply, and often miss critical bugs.
Glad you had the caveat "Written in 2000". What was hard and perhaps not worthwhile in 2000 has changed. Computers are a bit faster and software is more pervasive.
A colleague of mine was involved in the formal verification of a really tricky cellular network bug. That was around year 2000. It was hard but was still necessary and successful.
GM has a buggy car. That wouldn't have been an issue in 2000. Now it's an issue that forced them to withdraw a car from the market after about two weeks. I'm not saying that formal methods could have avoided that for sure, but I suspect they very well might have.
I think the author should have expanded a bit upon "extremely expensive to apply", because that phrase is technically true but really hides the actual problems with formal verification and why it is seldom used in practice.
People unfamiliar with formal verification think it means "the code is provably correct", but that's not what it does. Formal verification can prove that your code correctly implements a specified standard. You still have to define and write that standard. And that's where the problems begin:
- Standards themselves can have bugs, or the standard you wrote is not what you actually wanted. Note that this is the same problem that often occurs in code itself! You've just pushed it up a level, and into an even more arcane and less-debuggable language to boot (formal standards are generally much, much harder to write, debug, and reason about than code)
- The standard is constantly changing as feature requests come in, or the world around you changes. Modern software engineering mostly consists of changing what you or somebody else already built to accommodate new features or a new state of the world. This plays very badly with formal verification.
Formal verification can work in areas where you have a long time to do development, or where your problem space is amenable to having a formal standard (like math problems). In most other cases it just runs completely opposite to the actual problems that software engineering is trying to solve.
A lot of the types of formal methods that Hillel Wayne (the author) writes about are the opposite of what you describe—you write a specification which is provably correct, and then it's up to you to translate that specification into code. The error-prone part is the translation more than the specification.
It amounts to the same thing: it takes a long time to develop, the specification can get out of sync with the code, and once out of sync you now have no provable characteristics of the new system.
Though this does make me wonder: If there exist some formal methods that can prove that code implements a standard, and others that can prove that the standard is correct, it seems like there ought to be something in the middle that can prove both.
> Formal verification can work in areas where you have a long time to do development, or where your problem space is amenable to having a formal standard (like math problems). In most other cases it just runs completely opposite to the actual problems that software engineering is trying to solve.
Have you actually tried, or is this just the standard line?
I've heard this from people I've worked with, and when I followed up to ask what kinds of projects they've worked on with Agda, Idris, Coq, Lean, whatever, they had nothing.
I don't have anything either, other than that I've toyed around with some of these systems a little bit. But it seems to me like there's a lot of potential in dependently-typed programming, and we just don't have a lot of experience with it -- which is fine, but that's a very different situation than what seems to be the standard line, "all formal methods take a ton of time and are only viable in a few niche projects" (what are people imagining here? NASA space probes and stacks of binders? really it doesn't seem obvious to me, I'm not trolling).
I'm not sure I agree with your conclusion. There's a large grey area in the formal methods space where general software engineering thrives (in my opinion), especially the "lightweight formal methods" variety. It doesn't go for complete proofs, but instead pursues a style of exhaustive checking (or something that approximates it). Going down this route puts a very positive design pressure on your applications, as well as on your thinking in general.
The big payoff in formal methods is learning to think abstractly (imho). Even if you don't make a full-blown model, sketching out the problem in something like TLA+ can be extremely valuable just by forcing you to think about the model independently of the code. Even in the world of general software engineering, being able to reframe requirements as temporal invariants has felt something like a superpower.
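To make the exhaustive-checking idea concrete, here is a minimal sketch in TypeScript rather than in TLA+ or any real model checker. The little lock/holders model, the next-state function, and the invariant are all invented for illustration; real tools explore far larger state spaces and check temporal properties, not just simple state invariants.

```
// Not TLA+, just the mindset: enumerate every reachable state of a tiny,
// invented model and check an invariant in all of them. All names here
// (State, next, holders, etc.) are made up for this sketch.

type State = { lock: "free" | "held"; holders: number };

const init: State = { lock: "free", holders: 0 };

// A crude next-state relation: acquire when free, release when held.
function next(s: State): State[] {
  const successors: State[] = [];
  if (s.lock === "free") successors.push({ lock: "held", holders: s.holders + 1 });
  if (s.lock === "held") successors.push({ lock: "free", holders: s.holders - 1 });
  return successors;
}

// The requirement, reframed as an invariant: never more than one holder.
const invariant = (s: State): boolean => s.holders >= 0 && s.holders <= 1;

// Breadth-first search over every reachable state, checking the invariant.
function check(): void {
  const seen = new Set<string>();
  const queue: State[] = [init];
  while (queue.length > 0) {
    const s = queue.shift()!;
    const key = JSON.stringify(s);
    if (seen.has(key)) continue;
    seen.add(key);
    if (!invariant(s)) throw new Error(`invariant violated in state ${key}`);
    queue.push(...next(s));
  }
  console.log(`invariant holds in all ${seen.size} reachable states`);
}

check();
```

The point is the mindset the comment above describes: state the requirement as an invariant, then check it against everything reachable instead of a handful of hand-picked cases.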
I like that this calls out caveats to the downsides and mentions places where things really did hold up okay ("Most bugs were at the system boundaries; none were found in the implemented protocols. Formally verified systems, while not perfect, were considerably less buggy than unverified systems."). Good to keep perspective in both directions:)
> Hof's first attempt the day before failed when he began his swim without goggles and his corneas froze solid and blinded him. A rescue diver pulled him to the surface after he passed out.
For anyone wanting to try this, I’d just warn that many extremes that humans haven’t evolved to endure (sitting for long periods, spending no time in sunlight, spending too much time in sunlight, eating no fat, eating too much fat, etc) have been shown repeatedly to shorten lifespan. I’d see daily ice baths as an unnatural extreme and wouldn’t consider doing this, at least not for a long period of time.
Humans weren't optimized to endure the unnatural extreme heat of a 175-degree sauna, yet studies point to frequent sauna use being associated with a reduction in all-cause mortality.
It depends. If you're very diligent, you could use a "smart constructor" (in some languages you can make it so there's only one way to construct a type, and then enforce validation at construction time) and, instead of taking in a float/int for temperature, take in a Temperature or OutsideTemperature type as an argument.
A strict newtype can be very powerful, but needing to "unwrap" your datatype to use it in the normal operations of that datatype can be a little unwieldy, so tagged types are a pretty interesting mechanism that provides type safety and convenience. I don't think it's strictly better, though; it depends heavily on context.
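For concreteness, a minimal TypeScript sketch of the smart-constructor-plus-tag idea from the two comments above. OutsideTemperature, its -100..70 range, and every other name here are made up for the example; this is one way to do it, not the only one.

```
// OutsideTemperature and the -100..70 range are invented for this example.
// The brand exists only at the type level; at runtime the value is a plain
// number, so no unwrapping is needed to do arithmetic with it.

type OutsideTemperature = number & { readonly __brand: "OutsideTemperature" };

// The "smart constructor": the one sanctioned way to make the type,
// with validation at the boundary.
function outsideTemperature(celsius: number): OutsideTemperature {
  if (!Number.isFinite(celsius) || celsius < -100 || celsius > 70) {
    throw new RangeError(`${celsius} is not a plausible outside temperature`);
  }
  return celsius as OutsideTemperature;
}

function describeWeather(t: OutsideTemperature): string {
  return `It is ${t} degrees C outside`;
}

describeWeather(outsideTemperature(21));   // fine
// describeWeather(999);                   // compile error: plain number is not OutsideTemperature
// outsideTemperature(999);                // runtime error: fails validation
```

Because the brand exists only in the type system, the "tagged" value behaves as an ordinary number everywhere else, which is what makes the no-unwrapping ergonomics possible.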
> You could put 999 into a variable that's supposed to hold the outside temperature
I've never done this, but you could also define types for stuff like this. type: PositiveInteger, or type: BoundedTemperature that would only ever hold valid values.
Or one thing where it might come in handy would be when dealing with user/potentially malicious input - SanitizedString etc.
There will obviously be a complexity tradeoff, but instead of using integer to pass this value around you can create an OutsideTemperature type that validates this rule.
Yeah the main benefits are reducing bugs in developer write-run-debug cycles, and acting as machine-readable documentation (including to generate tooltips and autocomplete). The “cold shower” doesn’t check those.
You should be documenting most type info at least at the function/method signature and data structure definition level anyway. May as well do it in the most-useful format possible.
This is what gets me about “even modern relatively-low-boilerplate type systems just slow me down” folks. That means you’re skipping documentation you ought to be doing.
Hype: "Static Typing reduces bugs."
Shower: A review of all the available literature (up to 2014),
showing that the solid research is inconclusive, while the
conclusive research had methodological issues.
Static typing lets you do more complicated things by offloading a subset of complexity-management to robots. The remaining human-managed complexity expands until new development slows to a crawl, and no further human-managed complexity can be admitted to the system, similar to adding more lanes on a freeway.
Even if it doesn't reduce bugs (and how do we even measure this? in terms of bugs per loc? bugs per unit time?), it does make APIs easier to use (not even in terms of correctness, but in terms of time required to grok an API).
"Reduce bugs" is kind of a loaded term anyway. Static typing doesn't reduce bugs in an absolute sense, but I think it does reduce bugs per unit of value delivered. That's a lot harder to measure in a formal study.
> Static typing lets you do more complicated things by offloading a subset of complexity-management to robots
I've read some of the research on this! Yes, static typing improves documentation and helps you navigate code.
It also correlates with code quality and reduces smells. Inconclusive whether that's because of static typing or because more mature teams are likelier to choose static typing.
But all the research agrees: Static typing does not reduce logic bugs. You can build the wrong thing just as easily with dynamic and with static typing. The only type of bug that static typing reduces is the sort of bug you'll find by running the code.
In my experience, static typing is best thought of as a way to reduce the need for manually written unit tests. Instead of writing tests that break when a function signature changes, you write types that break when you call functions wrong.
You still need tests for logic. Static typing doesn't help there.
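As a small illustration of that trade-off, here is a TypeScript sketch with invented names (Cents, applyDiscount): the type checker catches the mis-wired call at compile time, but the logic bug inside the function still needs an actual test.

```
// Cents, applyDiscount, etc. are hypothetical names for this sketch.
type Cents = number & { readonly __brand: "Cents" };
const cents = (n: number): Cents => n as Cents;

// The signature documents and enforces the wiring: which argument is money,
// which is a percentage. Swapping them no longer compiles.
function applyDiscount(price: Cents, percentOff: number): Cents {
  // Logic bug the type checker cannot see: this computes the discount,
  // not the discounted price (should be price * (1 - percentOff / 100)).
  return cents(price * (percentOff / 100));
}

// applyDiscount(20, cents(1000));  // compile error: wrong argument order
const total = applyDiscount(cents(1000), 20);

// A plain old test is still what catches the logic bug:
console.assert(total === cents(800), "20% off 1000 cents should be 800 cents");
```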
This seems like a strong statement to make based on the research. What I've seen falls into several camps:
- research that made some conclusion about logic bugs for complete beginners on small assignments, with languages that have bad type systems
- research that had significant limitations making it impossible to generalize
- research that failed to demonstrate that static typing reduced bugs—which is very different from demonstrating that it didn't!
I haven't done a super thorough review of the literature or anything, but I have looked through a decent number of software engineering papers on the subject. The only strong conclusion I got from the research is that we can't get strong conclusions on the subject through purely empirical means.
Hell, the whole question is meaningless. "Static typing" is not one thing—there's way more difference between Java and Haskell than between Java and Python, even though both Java and Haskell are statically typed and Python isn't. (This is even assuming you completely ignore Python's type annotations and gradual typing!)
> The only type of bug that static typing reduces is the sort of bug you'll find by running the code.
This is a pretty solid argument in favor of static typing, then, unless you somehow have a test suite that exercises every possible code path and type variation in your codebase, and also keeps itself perfectly up to date. Because otherwise you're rarely running all of your code and verifying the result.
If "type bugs are an obvious thing and happen all the time" and "static Typing reduces type related bugs" then it should be easy to demonstrate this empirically. However, "a review of all the available literature (up to 2014), show[s] that the solid research is inconclusive while the conclusive research had methodological issues."
I think the important question is: at what cost? E.g., if it takes me 4x more time to write statically-typed code, and it saves me 10% fewer bugs (completely made up numbers here), is that worthwhile? Maybe, if I'm programming self-driving cars or autopilot software for aircraft. Probably not if I'm programming a web calendar for dog sitters.
Dynamic typing doesn't change/improve this, though, so I'm not sure what the point being made is. I'm also not sure I agree with it at its premise anyway.
A literature review I did in 2020 was actually pretty conclusive about it also reducing bugs. I think we might be missing some of the later literature here.
Stepping back, is this simply pointing out that Rule 34 also applies to scientific research? If a claim can be made, there is a scientific study that "proves" the claim true.
If someone came to me with some outlandish claim, "Studies show that singing in the shower lowers cancer risk," I honestly wouldn't be surprised if they could produce 2-4 white papers published in modern journals in support of it. So much of modern science seems to be reading between the lines and meta-analyses of scientific studies.
Scott Alexander's review of Ivermectin[0] is a great example of this. Bold claim is made, everyone divides into two camps, and in fact both camps have multiple peer reviewed studies backing their side, and to arrive at some semblance of understanding of the topic you need to spend hours diving into the studies and checking off boxes: were they peer reviewed, were there confounders, do the authors have a history of fraud, and on and on and on.
I get what you're saying, but "rule 34 also applies to scientific research" is a bizarre way to word it; the way I'd interpret that phrase out of context is just that it logically follows because scientific research is a subset of "things that exist".
I thought you were taking a different tack when I read "rule 34" - that, I think, is more apropos to the article's intent: if something is interesting, someone is getting excited about it and charging forward into fan art. Papers and news articles are unfortunately just real-world fan art.
To add, if a claim can be made that can be turned into a product, there will be a paid-for study that proves it to be effective.
When it comes to ivermectin though, I'm not convinced that was for profit per se, I think that was a crowd looking for a cure or something to latch onto because ????.
There was certainly profit made, but not so much by manufacturers. Many folks looking for miracle cures were duped into “health consultations” and bilked for $100-200 a pop during the pandemic. Frequently these services were advertised to church mailing lists.
I like the one that considered the additional factor of people just having invasive worms in them already, especially those living in squalor in the heavily Republican southeast.
American healthcare is such that this would never be tested for until it becomes a problem, when your immune system is strained by other things the worms become a burden to your body. COVID is that other thing. So taking Ivermectin kills the worms as it was designed to do and this results in the body being able to focus just on fighting COVID. Given the "improved outcomes" being such low percentages to begin with, that fits pretty well to the distribution of the population that maybe have worms. So they say, "aha ivermectin did it, big pharma doesn't want you to know!" despite ivermectin being big pharma and just not being in on the conspiracy club
so then the other camp tries to do a controlled study of just covid patients and finds no link. their headline says "no link found, not recommended" of course, because they're Johns Hopkins instead of their next door neighbor, this is just a symptom of the conspiracy theory and leftist institutions.
but beyond that, it is a symptom of Republicans feeling accurately underserved by those institutions. There is an obsession with pointing out how dumb someone is, instead of empathizing and doing the additional study. "ah I see how you reached that conclusion, a bunch of people have been in a symbiotic relationship with parasites the whole time!"
On the other side, though, there was a huge financial incentive to discredit ivermectin.
If it turned out that ivermectin was effective at treating covid, the emergency authorization would no longer hold, and it would have ended the extremely lucrative business of covid vaccines. There was incentive to design studies that, on purpose, applied ivermectin incorrectly (not the right time, not the right dosage).
In the media, ivermectin was described as horse dewormer, and a fake story about hospitals being overwhelmed by people overdosing on ivermectin was repeated for weeks in mainstream outlets, even though before covid it was considered a low-risk medication and was described as a wonder drug.
I'm not implying ivermectin successfully treated covid, mind you. I'm just saying that it's hard to trust the people who were saying it doesn't, and who went further and said it was a silly, dangerous idea to even try using it to treat covid under medical supervision.
With that in mind, I can see how some people could come to the conclusion that, if big pharma and its lackeys in media and government want to discredit it so hard, it might even work.
1. Jones, D. (2021, January 17). Software effort estimation is mostly fake research. The Shape of Code. https://shape-of-code.com/2021/01/17/software-effort-estimat...
2. Oram, A., & Wilson, G. (Eds.). (2011). Making software: What really works, and why we believe it (1st ed.). O'Reilly.
3. Bossavit, L. (2015). The leprechauns of software engineering: How folklore turns into fact and what to do about it. Leanpub.
Those problems have been with every EV it has put on the market since the Bolt (which was developed in South Korea, and largely not by GM).
https://insideevs.com/news/702126/ultium-platform-owners-gm/
It's just sheer incompetence and mismanagement in one company.
https://news.ycombinator.com/item?id=31793030 (June 18, 2022 — 629 points, 355 comments)
https://news.ycombinator.com/item?id=23935943 (July 24, 2020 — 588 points, 241 comments)
Cold Showers - https://news.ycombinator.com/item?id=31793030 - June 2022 (355 comments)
Cold Showers: For when people get too hyped up about things - https://news.ycombinator.com/item?id=23935943 - July 2020 (241 comments)
Awesome Cold Showers: for when people get too hyped up about things - https://news.ycombinator.com/item?id=17604379 - July 2018 (3 comments)
Awesome Cold Showers - https://news.ycombinator.com/item?id=16019844 - Dec 2017 (1 comment)
[0] https://en.wikipedia.org/wiki/Wim_Hof
> Hof's first attempt the day before failed when he began his swim without goggles and his corneas froze solid and blinded him. A rescue diver pulled him to the surface after he passed out.
this probably has code analogies too...
Let's stick to bickering about Formal Verification and Static Typing and leave that cold shower stuff to the pros...
I guess a "cold shower" is that it only checks the type. You could put 999 into a variable that's supposed to hold the outside temperature.
An interesting implementation of this idea is outlined here too: https://medium.com/iterators/to-tag-a-type-88dc344bb66c
type OutsideTempC is digits 3 range -100.0 .. 70.0;
(Edit: others mentioned this)
```
type Temperature<min: number, max: number> = { x: number | x >= min && x <= max }
```
[1] - https://arxiv.org/pdf/2010.07763.pdf
Not hype, but an obvious fact: "Static Typing reduces type related bugs."
Are you calling compilers robots?
[0] https://www.astralcodexten.com/p/ivermectin-much-more-than-y...
https://theintercept.com/2021/10/13/intercepted-podcast-covi...
*-scientifically proven.