dgfitz · a year ago
I spent two days trying to get repeatable data from a piece of hardware. Was not able to replicate a dataset once, found out we had a bad high-pressure hose for the brake system and the numbers were all broken.

I have no fucking clue how a soft science study could replicate itself.

stogot · a year ago
If even an author can’t replicate it, shouldn’t that make the results meaningless?
avs733 · a year ago
Not necessarily? It depends on what the claims are. If prior art says 'this is impossible', then a study with one hard-to-replicate result is an interesting insight about the margins. If it contradicts commonly accepted wisdom, we may need to revisit what that wisdom is. If it is your own experiment, a failure to replicate doesn't mean your result is wrong, but it does mean it should be significantly questioned. If I say it's more, that is on me... if I say it is what it is, then the results are the results.

I think it is a growing, and problematic, misconception that science is just about reporting facts. It is about adding knowledge... sometimes that is new facts, but more often it is theories, observations, and questions, increasingly at the margins of randomness and probability as our species grows and learns.

domofutu · a year ago
There's also all the (potentially relevant) factors no one's considering - https://www.thetransmitter.org/methods/mouse-housing-tempera...
aaron695 · a year ago
A bunch of women going on about giving mice air-conditioning or not?

Probably. We've filled science with Imposters. It seems predominantly women, they are the largest change in science.

The worst non-replicable sciences are also female majority.

Busywork on mice conditions is exactly the false science we are talking about. More excuse why it's not reproducible when mice are mice and are great sub-ins for humans.

"In mice" is the meme created to excuse the garbage that's produced. The studies never originally worked on mice, but now they have an excuse for why they don't work on humans.

I wouldn't necessarily blame it 100% on women though. But I see your point.

mapt · a year ago
This guy doesn't get to say shit to us about literally anything after what he did. 2020's John Ioannidis stood on the shoulders of giants (2005's John Ioannidis) and was one of the first "Experts" to proudly proclaim that in his professional judgement, COVID-19 was no big deal and didn't warrant countermeasures.

https://sciencebasedmedicine.org/mistakes/

If you choose to become a policy entrepreneur and your success ends in a Holocaust-scale outcome, you get to shut up, retire, and thank your lucky stars we live in a society that strongly discourages blood debts. That's the deal. No book tour, do not pass go, do not collect $200, and stay the fuck off social media.

ttoinou · a year ago
Seems like he was correct though. This article doesn't debunk him; you can't use official statistics to debunk a claim that the statistics are not computed correctly.

Deleted Comment

decUser3 · a year ago
> in an effort—well intentioned—to control the coronavirus, we may inflict great damage on ourselves.

And you believe he was wrong, and that Covid was a "Holocaust-scale outcome"? He said the rates would be exaggerated, and now with the benefit of hindsight (though some would claim it was common sense after seeing the sensationalism of the media), we can see he was correct. Or do you believe that's not true?

bjourne · a year ago
Did no one watch the talk? :) After 13 minutes it ends and tells you to go to an IAI website to continue watching. Fucking infuriating.
dang · a year ago
Yeah that's bad. And https://iai.tv/video/why-most-published-research-findings-ar... appears to do the same thing after 30 minutes. Does anyone have a link to the whole thing?
moomin · a year ago
I think it’s fair to say that if someone is that determined for you to not hear what they have to say, you should oblige them.
d0mine · a year ago
It is worth pointing out that compared to *everything else* on the internet, science is the epitome of truth, facts, reproducibility, etc.

It might not appear so at first glance due to the Gell-Mann amnesia effect.

7thaccount · a year ago
I don't think this is what the reproducibility issue is about.

The problem is that there is a broken system in place that rewards publishing in high volume, reporting only positive results, and doing research that supports those in charge. That doesn't mean new ideas don't win out when clear evidence exists, but that is the exception.

All of this creates economic incentives to lie with the data, and then you get fraudulent scientific papers that people base decisions on (like where to put research dollars for Alzheimer's). This is not good at all.

llamaimperative · a year ago
There are multiple reproducibility crises. The incentive structure is one of them, the other being the general challenge of reproducing results further "up the stack" in highly chaotic fields (the stack being physics -> chem -> bio -> psychology -> sociology, obviously nonexhaustive)

I actually find the reproducibility crisis you're mentioning to be both more problematic and to receive far less attention than the other. More problematic because it infects every form of science, even the harder/more fundamental sciences, and because it's way less clear how to fix it.

The reproducibility crisis near the top of the stack is just: "do more science and don't believe results until they've been replicated, refined, and matured."

vacuity · a year ago
The practice of the scientific method is the path to empirical truth. This doesn't necessarily mean that a nominal scientific finding is true. It is likely true that, on a given topic, an accredited scientist in that field has fairly correct opinions, but I would not take this too far. Conflicts of interest, personal biases, incentives to obtain results, simple lack of reproduction, corrupt peer review, etc. are clearly issues, and it's all the more unfortunate that we can't even say just how deep these issues run.

Science is timeless and powerful. Scientists are human beings that nominally, preferably, fallibly practice science.

karaterobot · a year ago
Gell-Mann amnesia is when you notice that somebody writing about something you're an expert in has made critical mistakes and should not be trusted, but then you go on trusting people writing about topics you don't know as much about, failing to recognize that they're probably also making critical mistakes about those topics. I'm not sure what the connection is here.

Regarding the comparison between science and folk explanations on the internet, I think it's reasonable to hold science to a higher standard than we hold the general population. If most published research findings are false—as this video claims—and most of what idiots on the internet say is also false, that's neither an equivalence nor a victory for science. On the contrary, it erodes the value of science, both literally and figuratively. Right now we need sources of authority.
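The "most findings are false" claim is ultimately an arithmetic one about positive predictive value. A minimal sketch of that arithmetic, in the spirit of Ioannidis's 2005 framework (the specific numbers below are illustrative assumptions, not figures from the talk):

```python
def ppv(prior_odds_r, alpha=0.05, power=0.8, bias=0.0):
    """Positive predictive value of a claimed positive finding.

    prior_odds_r: R, the ratio of true to false hypotheses tested
    alpha: type-I error rate
    power: 1 - beta, probability of detecting a true effect
    bias: u, fraction of would-be-negative results that get
          reported as positive anyway (p-hacking, selective reporting)
    """
    # True positives: real effects detected, plus real effects
    # "rescued" by bias despite insufficient power.
    true_pos = (power + bias * (1.0 - power)) * prior_odds_r
    # False positives: type-I errors, plus null results flipped by bias.
    false_pos = alpha + bias * (1.0 - alpha)
    return true_pos / (true_pos + false_pos)

# An exploratory field: 1 true hypothesis per 10 tested, modest bias.
print(round(ppv(0.1, bias=0.2), 2))  # roughly a quarter of positives are true
```

Under those assumed inputs, most published positive findings would indeed be false; with better priors and no bias (e.g. `ppv(1.0)`), the picture flips. The point of the sketch is just that "most findings are false" is a statement about priors, power, and bias, not an indictment of any single study.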

d0mine · a year ago
It is not black or white. Yes, there are issues. No, that doesn't mean a random comment on the internet should have as much authority as a scientific paper. There is a strong current of anti-intellectualism right now.

The connection with Gell-Mann amnesia is that people overestimate the truthiness of what they read online. Combined with reading about [real] issues in science, it might create the impression that scientific findings are at roughly the same level as everything else. My point is that even with all its troubles, science is still a head above, despite the perception.

jMyles · a year ago
This paper was life-changing for me as an undergrad (and I didn't discover the rest of his body of work until I ran into it later here on HN in 2010 or so).

We are blessed as a species that John stuck to his principles - and his thirst for empiricism - during the COVID-19 panic, and supported / encouraged his colleagues to do likewise.

This video is only the first 12 minutes of the talk. The rest is here (though it is possibly semi-paywalled? It let me watch it, even though it said it was going to make me sign up for a trial):

https://iai.tv/video/why-most-published-research-findings-ar...

matthewdgreen · a year ago
My understanding was that Ioannidis hugely underestimated the IFR of COVID and he did it mostly by cherry picking a handful of small-sample-size studies that were friendly to his political views. It was very much a “your heroes will absolutely let you down” moment in my scientific life, and to the extent that non-scientists have forgotten the episode, that’s kind of what I expect.

https://www.dailymail.co.uk/news/article-8843927/amp/Just-0-...

nostrebored · a year ago
No, the IFR of Covid was hugely overstated, which is why the projected population level impacts were completely wrong, even in places with limited interventions. Attributing cause of death is not as easy as it might seem.
MrMcCall · a year ago
Thanks. This guy (John Ioannidis) is the real deal. You can tell both by the dense, detailed fact sets, presented one after the other in logical order, and by hearing the honest, intelligent tone of voice, which vouches for his fidelity.

"The voice never lies." --Blind woman speaking to a friend

And his story about his hearing of Theranos is lowkey hilarious. And topical, because he's a Dunning-Kruger true-expert.

sdenton4 · a year ago
During the early pandemic, he underestimated the rate of fatality from COVID (which, remember, was much higher before vaccines and paxlovid were deployed), and forcefully advocated for policy based on his lower fatality rate estimates. It was a stunning display of hubris: working on very limited information, he was pushing incautious policy responses which could have cost millions of lives.

https://en.m.wikipedia.org/wiki/John_Ioannidis

DAGdug · a year ago
As someone who felt the policy reaction to COVID was poor (no balanced assessment of the cost of false positives and negatives in decision-making, poor accounting of uncertainty), I concur that he didn’t apply his usual rigor or his critiques to his own work. He also had, IIRC, a conflict of interest and was funded by the airline industry for this research.
cempaka · a year ago
It's funny that the wildly overestimated IFRs like 3.4%, which were taken as gospel in the early days of the pandemic and drove the actual policy of sweeping shutdowns of schools, preventive medical care, and the economy at large -- a totally experimental and unprecedented measure with no empirical evidence whatsoever to demonstrate the benefits would outweigh the costs -- are not subjected to this same label of "hubris".

Ioannidis's estimates of IFR in the 0.1% to 0.2% range were much closer to the mark.