> When I attended my first scientific conference at the tender age of 20, one of my mentors surprised me with the following bit of advice. Transcribed directly from memory:
> "You should be sure to attend the talk by so-and-so. You can always trust his results."
Perhaps around the same time, a professor said something similar to me, pointedly.
I paid attention to what I thought I heard, and I think I still remember the exact words, but in hindsight I realize they were probably implying more than I understood at the time.
The bit of wisdom, which was actually a warning, whooshed over my head, so it went unheeded.
Though, if they could've known fully the fiasco that I was walking into, and had warned me directly of that in advance, I don't know that I could've believed them. :)
And I can understand why they'd speak indirectly or hesitate to say anything. Even having had this experience, and not wanting others to go through it, I still don't give advice as directly or strongly as I might, about some things. For a variety of reasons: they might be fine and not need the warning/advice, they might put too much weight on my advice, they certainly won't fully understand it, they might repeat something indiscreetly or in a misconstrued form, there could be suppressive action against them and/or me, etc.
Science might have some problems, but they are nothing in comparison to philosophy's. That's what I think after 16 years of working in the discipline. To quote one of my colleagues, who has made a great career: "Who cares if it's wrong, it's another publication!" I'll be happy to leave when my current contract runs out.
This is really more a problem with our university system and how academics gain prestige through publication. The more you publish, the better for your career, and this applies to many fields, not just Philosophy. The reduction of tenure positions also plays into this, as it ramps up the competitive aspect. Actually doing anything that makes a real contribution to your field is of little consequence.
Did you weight your measurement by the magnitude of influence each has in the world, including negative reactions from "non-fans" to science itself, to scientism, and to the behavior of science's fan base?
Some people think science has a proper lane, and that it (and those who speak on its behalf, cheerlead for it, etc) should stick to it in a more disciplined manner.
I would like to see scientific results presented in a standardized form; I'd suggest ISO 10303 (STEP). That could make it easier to see exactly what is needed to reproduce those results.
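For the curious: a minimal sketch of what this might look like, assuming the clear-text Part 21 encoding of ISO 10303. The `to_step_p21` function and the entity names (`EXPERIMENT`, `MEASUREMENT`, and the schema name) are hypothetical illustrations — a real application protocol would define its own schema.

```python
# Hypothetical sketch: serializing an experiment record in an
# ISO 10303-21 ("STEP Part 21") style clear-text file. Entity and
# schema names below are invented for illustration, not from any
# actual STEP application protocol.

def to_step_p21(schema, entities):
    """Render (type_name, attribute_list) pairs as a minimal Part 21
    file: HEADER section, then numbered instances in a DATA section."""
    def fmt(value):
        if isinstance(value, str):
            return f"'{value}'"  # strings are single-quoted in Part 21
        if isinstance(value, (list, tuple)):
            return "(" + ",".join(fmt(v) for v in value) + ")"
        return str(value)

    lines = [
        "ISO-10303-21;",
        "HEADER;",
        "FILE_DESCRIPTION(('experiment record'),'2;1');",
        "FILE_NAME('results.stp','',(''),(''),'','','');",
        f"FILE_SCHEMA(('{schema}'));",
        "ENDSEC;",
        "DATA;",
    ]
    for i, (type_name, attrs) in enumerate(entities, start=1):
        lines.append(f"#{i}={type_name}({','.join(fmt(a) for a in attrs)});")
    lines += ["ENDSEC;", "END-ISO-10303-21;"]
    return "\n".join(lines)

record = to_step_p21(
    "HYPOTHETICAL_RESULTS_SCHEMA",
    [
        ("EXPERIMENT", ["trial-42", "2024-01-15"]),
        ("MEASUREMENT", ["temperature", 293.15, "K"]),
    ],
)
print(record)
```

The appeal of a fixed container like this is that the header names the schema, so a reader (or tool) knows exactly which definitions govern the data that follows.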
It's not a coincidence that the public's trust in "science" is at all-time lows. This was probably always happening, but it's definitely more visible and exposed now.
Is it low? That the issues of quality and rigor are being discussed is a good sign, in my opinion. The expectation of continual progress at some constant, quantifiable rate is what has been most at fault with industrial science, but that expectation is now diminishing.
Also, I think that the wider public now has a greater appreciation that vigorous debate and disagreement are to be expected amongst the scientific community, and that it's a good thing too.
> Americans’ confidence in higher education has fallen to 36%, sharply lower than in two prior readings in 2015 (57%) and 2018 (48%).
Not surprising when all academic institutions are politically captured by one side which suppresses and promotes ideas to further its agenda.
Recently, student support for Hamas after October 7 was a spectacular flower that could only grow in an ideological hothouse like what academia's become, and I don't think that's helped.
They seem to be getting the idea a little with Ivies now doing away with their creepy diversity statement requirements.
I've also heard, "Trust in *the* science." (emphasis mine).
I find this second form even more troubling. IMHO, it implies that there's only one conclusion that (real) science can reach, and the audience has been shown that conclusion.
To doubt that particular conclusion (no matter how badly researched) is to doubt the scientific method itself. It's implicitly a conversation stopper.
---
That said, I'm making a lot of guesses about the intentions of whoever says that stuff. So maybe I'm being unfair.
I love Sabine Hossenfelder's personal take on the social problems in science (https://www.youtube.com/watch?v=LKiBlGDfRU8), which reminded me a lot of my own experience in academic science. She also has some critiques of particle physics that I find compelling - but outside my knowledge domain - where the sociological dynamics sound similar to things I've bumped into in biology.
> Given the public awareness that science can be low-quality or corrupted, that whole fields can be misdirected for decades (see nutrition, on cholesterol and sugar), and that some basic fields must progress in the absence of any prospect of empirical testing (string theory), the naïve realism of previous generations becomes quite Medieval in its irrelevance to present realities.
"basic fields must progress in the absence of any prospect of empirical testing (string theory)" What does string theory have to do with cholesterol, or sugar, or Elsevier, or non-replicability, or fake data?
Perhaps "the naïve realism of previous generations becomes quite Medieval" is just some clods of mud being slung, and string theory's lack of testability is one more clod. His real agenda is "scientists must admit they're no better than liberal arts majors."
Until science admits that pride, greed, and envy have a profound influence, and comes up with a paradigm that thwarts those human drives, I am not sanguine that we can trust the publications to be completely altruistic.