As an academic I will again make the dull but necessary point: figuring out the fundamental laws of nature is hard, stagnation is the norm, and the advances of the mid-20th century were the aftershocks of the great revolutions of quantum mechanics and relativity. You can't just make that happen again on demand. Fundamental laws are in short supply.
Whenever people tell me we're stagnating I ask them to name an alternative. So far, from best to worst case, every answer has fallen into one of the following buckets:
- a subfield that already exists and already has plenty of people working on it
- an idea that was extensively investigated and carefully ruled out over 50 years ago
- something that requires more money than the field will receive in total over the next 50 years
- a mathematical formalism which simply refactors existing laws in a way that makes them unreadable to almost everyone, without a chance of leading to any new predictions
- a complicated, ad hoc model that isn't any more predictive than simpler models, but gives the modeller hundreds of shiny knobs to tune
- a metaphysical suggestion about how to view what nature "truly" is, which amounts at best to rewriting existing laws with bigger, more "profound" words
I can't see how this is going to be fixed by changing how citations work. If anything, in my experience the particularly bad stuff has been correctly punished by low citations.
I see that you work in particle physics. I don't think particle physics is representative of science in general. My impression is that particle physics is seen as sexy and attracts a larger proportion of better researchers than many other fields for that reason.
The stagnation in fluid dynamics (my general field) seems fairly obviously linked to incentives. Pumping out papers is seen as more important than doing a good job as far as I can tell. In the past few years I think I've made a lot of progress in my subfield by simply compiling tons of data from the open literature [1], which apparently no one thought to do on the scale that I've been doing. (It's a lot of work up front, which would lead to a delay in publishing results.) In the process I've found that a lot of what's written in review articles and books on the subject is obviously wrong. This is not a sign of a healthy field! And I don't think you can do this in particle physics, but I think you can in many fields of engineering.
My experience in research comes from starting in very fundamental fluid mechanics (vortex dynamics and instabilities), moving into biomedical (fundamental and applied) and turbulence (fundamental) and now working in applied stuff.
I think the field is stagnating partly because the amazing work done in the last century ticked so many of the boxes. It was really a golden period, ending with Prandtl and Karman - like the end of the ultimate share house. A lot of the tree had been stripped bare by the 70s, and most of the advancements since have been refinements or application-specific. Obviously computational techniques have exploded and experimental methods have improved, and there are still new discoveries, but in terms of impact, outside of microfluidics and the never-ending grant gift of the turbulence closure model, we are entering a dry season where grants are very much application-specific and fundamental physics is simply not rewarded. We used to have a bet on what year the fluid mechanics chair would no longer be a thing at universities. It doesn't help that the fluids community is not super tight-knit (lots of beef!)
I also 100% agree with you on papers vs doing a good job - but that is science in general nowadays. It will always be a tiny percentage that actually progresses the field, a bunch that try and fail (as research is wont to do), and a majority that just try to stay relevant by pumping out rubbish.
Could you elaborate on some of the obviously wrong stuff that you've seen? I'm curious. I have an ME background but strayed away from it into electronics, then oddly enough ended up in a fluids heavy business.
I strongly disagree. You make a great argument for why stagnation in physics is to be expected, but advances in physics were a minority factor in the technological advances of the 20th century. Advances in chemistry were by far the primary driver, followed by biology, and then by physics. (I love physics, I minored in it in undergrad, and didn't minor or major in chemistry, lest you think I'm biased.)
Chemistry brought us:
- the Haber-Bosch process, artificial nitrogen fixation that revolutionized food production
- the Bessemer process, modern steel production, necessary not just for modern buildings but also engines, modern guns, cars/boats/planes, modern supply chain via shipping and trucks, industrial factory and farm equipment, etc
- modern plastics. Seriously, look around your room, wherever you are. How many things don't have some amount of plastic in them? Less than half, probably?
- petroleum extraction and processing, which powers those steel engines, generates most of our electricity, and provides the raw material for plastics
- modern explosives, especially smokeless gunpowder (did you know that people built Gatling-style rotary machine guns in the Victorian era? They were useless because black powder creates too much smoke for you to be able to aim such a gun)
Biology brought us germ theory and modern medicine, and of course these fields are all interrelated because engines required thermodynamics and electricity is mostly physics, but none of that required relativity or quantum mechanics.
Quantum mechanics is the foundation for the physical chemistry subfield of chemistry, is crucial to modern silicon integrated circuit manufacturing, and of course underlies the atomic age. But none of those have had the impact on society that, for example, oil alone has had.
As mentioned, I love physics, but the fact is, details about the fundamental laws of nature just did not have the impact on society that higher-level, less fundamental scientific advancements have had. The Wright brothers created aviation without any quantitative understanding of aerodynamics; and 100% of the societally impactful advancements in aerodynamics since then have had no relation to relativity or quantum mechanics.
Hmm, I agree, the answer to the question posed varies a lot between different fields. That's another aspect of these articles that I don't like: they talk about "problems with science" as a whole as if it were a monolith.
> Advances in chemistry were by far the primary driver
Seems like a pretty biased view of recent progress, to be honest. One could come up with similar lists from any number of other areas (Physics - Nuclear Energy, Biology - DNA & biotech, Computer Science - The Internet, etc.)
In physics we have two (smaller?) areas that are ripe for some real progress: optics and electronics.
Electronics: the memristor. Though it seems the HP breakthrough was a bit premature, a practical solid-state memristor would allow for very energy-efficient computation. Think top-of-the-line modern GPUs that can run off a small solar cell or a watch battery. The actual physics of such circuits may have some fun things hidden in them. I think it'll really change how we use computers.
Optics: As our manufacturing revolution reaches beyond semiconductors and into more difficult materials, we're seeing cheap and interesting optics happen. I'm not sure where it's going, but grad students are 'playing' more in the lab with cheaper equipment, allowing serendipity to happen faster - especially as bio becomes 'thirstier' for optics, since there is a LOT of money there from disease research.
Though these aren't re-writing the fundamental laws, they are looking to have real impact on human life, and not in ways that are just making things more efficient (though there is a lot of that too).
What you wrote makes total sense for the fundamental sciences. However, many if not most scientific fields are removed from that, and the supply of ideas or things to investigate is far larger. In this context, gaming the system is easier. I see it in my field, where a lot of published stuff is junk written for publication count.
1. You have effectively attempted to straw man all possible suggestions and conversation around modifying STEM-based progress by using a few anecdotes as a basis. This is not 'a point'. These are only a handful of observations.
2. You seem to be framing all possible STEM-based progress from the perspective of progress within the physics community. And this comment being at the top (currently) diverts so many other possible enriching discussions around this topic. As another comment pointed out, chemistry during the 20th century contributed as much to human progress as physics did, even though it goes mostly unnoticed by a larger audience.
3. Your argument only mentions how citations have allowed for better filtering out of bad papers/authors (true negatives), but it does not explain away over-cited papers/authors, which are essentially false positives with respect to real progress.
4. It does not take extraordinary logic to find flaws in any system, so I am not sure why so many from an academic background (although not all) adopt a more defensive stance towards the existing system as if it was provably a global optimum. Would you mind engaging in conversation around this?
> It does not take extraordinary logic to find flaws in any system, so I am not sure why so many from an academic background (although not all) adopt a more defensive stance towards the existing system as if it was provably a global optimum. Would you mind engaging in conversation around this?
I have seen similar defenses of various things in academia over the years. It's very easy for someone who is successful in a particular system to think that system must be generally good. I can recall going to a talk nominally about writing good grant proposals where the speaker would add random comments about how the current funding system is the most effective known to man, etc. The speaker was a tenured professor. They're going to be inclined to think that whatever system they succeed in must be fundamentally good. After all, it recognized their brilliance!
Similarly, knzhou is a graduate student at Stanford on an NSF fellowship. He seems like a very smart guy. But his experiences are not representative of science as a whole. It seems obvious that his opinions are going to differ from someone like me. I was rejected by Stanford, MIT, and Caltech, and rejected from every graduate fellowship I applied to. (Don't read too much into the schools: I later decided that those schools would not have been a good fit for me, so I'm glad to have been rejected.) I think I do good-quality research that isn't recognized by the current system. To do my research I've had to take whatever scraps of funding were available or work as a TA. This doesn't strike me as optimal. People like knzhou haven't had these experiences. This is all the more true because I've been in grad school for about 9 years now, while knzhou has only been in grad school for about 3. Maybe in a couple of years knzhou's opinions will sour? After 3 years I didn't have the same opinions I do now.
The opportunity today to do great work is seemingly better than at any other time in history (the internet, the number of people interested in a field, the absence of war/famine), so it does feel strange that we are stagnant despite having many great tools and so much collaborative potential. Of course, in reality the most important work often happens under very difficult conditions - Hamming said that.
The most difficult part of allowing for more novel science is accepting that more 'useless' research will be done. If you read stories about the old days, you'll see how often professors/researchers used the freedom they had to do absolutely useless things, based purely on their own personal preferences. The same freedom, of course, allowed others to do great things.
I think nowadays there is much more awareness about what researchers are doing, and they will be held accountable, not just by the people who fund them, but also by the general public (Why are we funding this research about levitating toads?!?!). Shielding the researchers from such outrage and building acceptance for (seemingly) useless research will be just as important as the article's suggested new metrics for novelty.
People say you need to let scientists do useless exploration in order to promote innovation. I think that is true to a degree, but definitely should not be the spirit of any policy where innovation is the goal.
Many of the scientific innovations that pushed the 20th century forward were in fact purposefully done. The von Neumann architecture was not invented for fun; it was invented to aim artillery and design nukes.
Similarly, much of the tech infrastructure in Silicon Valley descends directly from radio engineers coming out of WWII theater.
Bardeen, Brattain, and Shockley didn't invent the solid-state transistor because it would be interesting; they invented it because it would allow miniaturization of vacuum-tube computers.
The laser was invented for telecommunications purposes. There is now an entire field of "quantum electronics" describing the theory behind them.
Even going back further, the invention of the steam engine preceded the development of thermodynamics, which initially sought to describe the limitations of those engines. If "science leads technology", then one would expect steam engines to have been rationally designed from the results of thermodynamics. The opposite is true.
I think there are enough examples of scientific fields emerging from technological innovation that "uselessness" should not be considered correlated with innovation.
The von Neumann architecture became possible because Church and Turing wrote their highly theoretical, mathematical works.
Lasers became possible because people like Fresnel studied the properties of a then highly impractical phenomenon, coherent light, while others discovered another short-lived curiosity, population inversion of energy levels.
Many such works had to be done decades before any engineering applications, or a prospect thereof.
Science is when you study the literally unknown, including no known practical applications.
When you study small pockets of unknown in a generally understood and practically fertile field, it's engineering.
> I think there are enough examples of scientific fields emerging from technological innovation that "uselessness" should not be considered correlated with innovation.
I was going to cite lots of "useless" stuff that suddenly became really important when some "blocking technology" got removed, but "useless" is the wrong problem and phrasing.
The problem really is: "How do you compensate someone who tackles a big problem but winds up making no progress or even being wrong?"
Right. However, the inventions you mention were not expected to be useful the same year. The problem today is not that there is demand to focus work on potentially useful areas; that's fine as far as it goes. The problem is that there is too much demand that work bear fruit in the short term, whereas in fact we have picked most of the fruit from the branches currently in reach - to mix metaphors - and we need to go back and do some more long-term investment.
My father is a researcher in electrical engineering. A while ago, he and his colleagues envisioned a method of developing solar panels that could have blown past the efficiency of the day. Only problem is it involved creating and working with very small crystalline structures. Their research into said structures was novel for its time. Turned out once they learned more about the actual physics, the theory behind the panel design broke down and they had to abandon the project.
But was the result useless? Hardly. To this day he gets contacted by scientists from time to time trying to do the same thing and he has to tell them why their plan won’t work.
Preventing people from going down dead ends is valuable, like knowing how to look for the solution to an obscure software error. Or, to paraphrase Edison, it’s knowing ahead of time how not to make a lightbulb.
This problem exists in schools as well: education is being increasingly restricted and tuned to focus on where the puck is, not where it will be, and increasing numbers of worthless metrics are demanded to ensure this is the case.
Not that this is a new problem (consider Dickens’ Hard Times) but I have been shocked how my 1970s/80s education (mostly broad, fun stuff with the only “skills” being mathematics and very concrete things like operating a car or camera or making a nutritious meal) contrasts with that offered to my kid or even worse to my gf’s kids in the Palo Alto schools.
This especially astonishes me as I consider one of the best things in the US is its non-specialized approach to undergraduate education.
> This emphasis on citations in the measurement of scientific productivity shifted scientist rewards and behavior on the margin toward incremental science
this sounds like the science equivalent of how search engines lead to the creation of content farms. citation scores are to spam science as pagerank is to wikihow?
Imagine going through life with a high-tech spam filter from the future that can filter out wikihow and boring science. Impossible to build, I suspect, but that would be the life.
> this sounds like the science equivalent of how search engines lead to the creation of content farms. citation scores are to spam science as pagerank is to wikihow?
Or a science equivalent of what money and market competition do to all human ventures. You need to produce more, faster, than your competitors to progress, or you'll fall into obscurity. "First to publish" is isomorphic to "first to market". If you treat research as a video game, citations fit perfectly as an in-game currency.
That is to say, it's an example of a general problem of optimizing short-term metrics, which are a decent proxy of the actual goal up to a point.
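The proxy-metric failure mode described above (Goodhart's law, in effect) can be illustrated with a toy simulation; all the numbers here are my own illustrative assumptions, not anything from the thread:

```python
import random

random.seed(0)

# Each project has a true value; the community only sees a noisy proxy
# (think citation count). Selecting on the proxy works fine until some
# projects are optimized for the proxy itself rather than for value.

def proxy_score(true_value, gaming):
    """Observed metric: true value plus gaming effort plus noise."""
    return true_value + gaming + random.gauss(0, 0.5)

# Honest projects: value uniform in [0, 10], no metric inflation.
honest = [(random.uniform(0, 10), 0.0) for _ in range(1000)]
# Gamed projects: little real value, lots of metric inflation.
gamed = [(random.uniform(0, 2), random.uniform(8, 12)) for _ in range(1000)]

def mean_true_value_of_top(pool, k=100):
    """Average true value of the k projects ranked highest by proxy."""
    ranked = sorted(pool, key=lambda p: proxy_score(*p), reverse=True)
    return sum(v for v, _ in ranked[:k]) / k

print(round(mean_true_value_of_top(honest), 1))          # high: proxy tracks value
print(round(mean_true_value_of_top(honest + gamed), 1))  # low: gamed work crowds out the honest
```

The point of the sketch is only that a proxy can be a decent filter on an honest population and still collapse once the population adapts to it.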
Science could definitely use a spam filter. Journals unfortunately do a poor job of being one; they're thoroughly gamed, the same way Google is via SEO.
At least in the market a junk product doesn't survive despite being first to market. The product needs to be good or at least inspire others to be regarded as a real contribution. This additional requirement is what's missing from the incentive system in science today.
About the search aspect, I wonder if there is any way to create a custom search engine or filter the crap out of Google.
I would love a browser addon that auto-filters out quora, wikihow, w3schools, techcrunch, etc...
None of those garbage sites are what I'm looking for, ever. I think that, amusingly, filtering out the 5% most SEO-optimized sites would make results much better.
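A minimal sketch of such a result filter, assuming a simple domain blocklist (the domain list and function names are just illustrative):

```python
from urllib.parse import urlparse

# Hypothetical blocklist of the content-farm domains mentioned above.
BLOCKLIST = {"quora.com", "wikihow.com", "w3schools.com", "techcrunch.com"}

def is_blocked(url: str) -> bool:
    """Return True if the URL's host is on the blocklist."""
    host = urlparse(url).netloc.lower()
    if host.startswith("www."):
        host = host[4:]  # normalize away the "www." prefix
    return host in BLOCKLIST

def filter_results(urls):
    """Drop blocklisted results, preserving order."""
    return [u for u in urls if not is_blocked(u)]

results = [
    "https://www.wikihow.com/Do-Something",
    "https://developer.mozilla.org/en-US/docs/Web/HTML",
    "https://quora.com/some-question",
]
print(filter_results(results))  # only the MDN link survives
```

A real addon would do this at the content-script level over the search results page, but the core logic is the same lookup.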
w3schools loads faster than MDN but I'm mostly with you
Reevaluating our social attitudes towards 'trust in experts' will be an aftermath of the COVID crisis, and it may bleed into search & social media as well -- is there value in 'peer-reviewed everything'?
It's doubly tragic if it simultaneously incentivized the mass production of boring, low-value science and punished the boring but valuable work of replication.
>Imagine going through life with a high-tech spam filter from the future that can filter out wikihow and boring science.
Happily it's available now and it's called intuition. If something is boring then avoid; if something is exciting then pursue.
Problems being that it's purely anecdotal and you have to know and trust yourself. If you're too attracted to prestige, money or job security then it's going to return a distorted signal. Which is why organised science is now bureaucratic and slow despite the fact that there are more scientists than ever before.
This premise that novelty should be better-rewarded seems odd to me, because I thought a common complaint about scientific journals was overly rewarding novelty instead of robustness? Isn't that considered one of the contributing factors to the replication crisis?
I think it's interesting to think about the incentives in scientific publication by comparison with the incentives in HN (or Reddit) comment posting.
Someone could spend days writing a well-researched HN comment that was exceptionally informative and accurate. There are occasionally comments that represent an hour or two of work... but multiday-effort comments are non-existent: the incentives of the venue don't reward that effort. If you did take the time, the discussion would have moved on before you got it published, and the comment would likely be lost in a sea of other low-effort comments - or, if acknowledged, not much more so than comments that merely took an hour.
You don't just see fewer multiday comments, you see essentially none at all. Nothing technical about HN or Reddit prevents people from writing some epic work of commentary or research in a comment, and there would often be value from such works existing...
The same pattern exists in academic publishing, but with the effort levels shifted up one or two orders of magnitude: the maximum moves from an hour to, say, weeks (the exact threshold varies by field). Works taking more effort than some field-specific cutoff are extremely seldom done. Why make one 10x-effort paper when you will serve your interests much better, and with lower risk, by making ten 1x-effort papers?
This would be fine if all of the science in that field could be done within the window of effort allowed by those incentives, but that isn't the case... especially since a lot of the low-hanging fruit - results that can be obtained below the cutoff - has already been picked.
So this is part of why you see things like immunologists pointing out that there has been relatively little academic investigation of virus seasonality - though it's an apparent, interesting, and seemingly important phenomenon. Studying it in any depth would require experiments spanning years, likely with human subjects, on top of the possibility that your ideas don't turn up anything new... a lot of risk for someone who actually needs to get things published.
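The 10x-vs-1x trade-off above can be made concrete with a toy expected-payoff calculation (all numbers are my own illustrative assumptions, not anything from the comment):

```python
# Assume career credit is roughly linear in paper count: an ambitious paper
# succeeds less often and earns only a modest premium when it does.

def expected_credit(n_papers, p_success, credit_per_paper):
    """Expected career credit from a batch of equal-effort papers."""
    return n_papers * p_success * credit_per_paper

# Ten incremental papers: near-certain to publish, standard credit each.
incremental = expected_credit(n_papers=10, p_success=0.9, credit_per_paper=1.0)

# One ambitious paper using the same total effort: riskier, 3x credit if it lands.
ambitious = expected_credit(n_papers=1, p_success=0.5, credit_per_paper=3.0)

print(incremental)  # 9.0
print(ambitious)    # 1.5
```

Under any numbers in this vicinity, the incremental strategy dominates unless the credit premium for ambitious work is enormous, which is exactly the cutoff effect described above.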
The extent to which this is an issue depends on the field, though. In my field, you can get massively rewarded, in both soft and hard criteria, if you put effort into writing a good review paper. And many of the giants of the field have sunk even more effort into writing textbooks, which yield even greater rewards. People can do this precisely because the incentives are better than for internet comments; a good textbook can be celebrated for decades.
There have been conversations for years about the slowdown in scientific discovery. Often the explanation is that science is getting more expensive, something I've long felt is a bit of a straw-man argument.
Pointing instead at the current incentive structure (which everyone already agrees is broken) makes a lot more sense.
So let's start a conversation about a better incentive structure. What do we want to incentivize? How do we do that? How do we fix science?
I would start by cutting academic salaries at elite schools. These are $150k-$200k for a typical professor, and $1 million at the top. Maybe $100k at the bottom.
I'd place these at $60k-$100k: enough to live on, but not enough to go into it for anything other than love-of-science. A university president might hit $200k.
I'd hire many more academics, and give them much more freedom. Not as much publish-or-perish, and more intellectual exploration. Anyone qualified to do research should have the option to do so in their field of interest.
That's kind of how academia used to work before massive endowments.
I might also do something about tenure. It's not a horrible idea, but as structured right now it's obsolete in a lot of ways. It forces people to put in massive efforts early-career, which, for example, doesn't line up with biological clocks, and it creates many other bizarre incentives. I don't mind someone gaining tenure if they've done fantastic work, mind you, but it shouldn't be a 7-year clock. Perhaps, instead, you're a professor with a 5-year renewable contract; if you do fantastic work, you become a professor-with-tenure, whether that's 4 years in or 40.
I will disagree with every single suggestion you make:
1. Salaries should be -higher-, not lower! Why would any self-respecting smart person throw their intelligence away for a pittance? You want them to be smart with science and stupid with money, is it? Live like Diogenes?
2. There shouldn't be more researchers; there should be fewer. My decade-long experience with academia has been that too many people who aren't exactly scientifically smart (more smart at socializing and grant writing) are too well established. We need to recruit the type of minds that are truly capable of innovation and make sure they don't have to compete with bureaucrats who are there simply because they chose biology in undergrad and kept making the default career choice every time they were presented with one. These new people should also be REALLY smart, not just marginally better than the public - which means there can't be too many of them anyway. They should then be given resources in a way that doesn't inherently convert the entire system into a Ponzi scheme (as the PhD system now does). In the grand scheme of things they can be given fewer resources if they are given structures to manage things well.
3. I'd argue that the tenure system worked quite well despite its flaws. If anything, tenure doesn't give the same guarantees it gave half a century ago, so people are still incentivized to keep running the rat race. If you still want to hold them accountable, maybe a much longer cycle would be okay - perhaps 15 years? A 5-year contract sounds like hell for most fields. Some of the most interesting work I did took more than that to come to fruition, and that's not uncommon.
The only thing I'll agree with you on is that whatever new process is conceived must try to correct perverse incentives for women, given how the current system plays against some common life choices they might want to make (having kids).
I don’t think you really understand what professors make already. Even a CS professor at UCLA is barely breaking $100k, and they would be in dire straits in their housing market if the university didn’t back their housing loan. Sometimes professors have the option of topping up their salary out of their research grants, but that has a few problems of its own.
Wouldn't that pay scale push many talented people to work in private industry? At that pay it would be difficult to retain computer science faculty for example, where universities are already struggling with a shortage.
In my view, $150k-$200k is already low. Anybody with the drive and focus to become a professor, can already earn more as a doctor, or even as a computer programmer. It's already the case that it only attracts people who are in it for the love of science.
In a similar fashion, lowering the incomes of classical musicians won't make classical music any more creative.
> It forces people to put in massive efforts early-career.
From what I understand and have heard, a lot of advancements in math and physics in particular have come from people that are quite young, often in their 20s. That's apparently when brains are at their peak; it seems to me that's the period in which massive efforts are most likely to pay off big.
Einstein was 26 when he published his annus mirabilis papers. I believe Newton was in his mid-20s when he began developing calculus.
Incentives, and the corresponding KPIs, are always a good place to start looking if one is trying to find out why systems and people behave the way they do.
When I read the title I thought the paper was about business ideas and similar things. Having quickly read the paper: citations are a likely reason for what the authors call "me-too science".
Well, yeah, but bold is nearly impossible to tell from crazy, and people really don't like being told that "actually, the most efficient management technique in this case is to throw money randomly at a lightly-filtered set of projects."
People want to believe that blue-sky research can be predicted, managed, and optimized, no matter the staggering mountains of evidence to the contrary.
A lot of things, but this paper points to one issue:
* In order to get an academic position, you need to have letters from a research community.
* In order to get citations, you need your papers to be used by a research community.
This gives a strong advantage to work done within existing lines of research. Virtually anything outside of one of the existing "academic cottage industries" or "mutual adoration societies" is at a huge disadvantage.
You end up with research communities going down long rabbit holes with tunnel vision, and big broad areas of potential research are never explored.
There is so much about papers like this that makes me skeptical.
1) It assumes we've been good at measuring growth, which to me is dubious. Our current system counts the production, deployment, and detonation of a bomb all as positive production, while not counting unpaid domestic work. I know economists don't like to put a value on different types of services and goods because they feel they'd be putting a finger on the scale, but they are doing just as much by using a blind metric that ends up counting some work and not other work.
2) It never cost-adjusts growth. For example, all that wonderful growth in the middle of the 20th century incurred significant externalities. The system we have now tries much harder to make producers internalize their externalities. This curbs growth, which is not necessarily a bad thing.
3) It assumes we can atomize historic growth and single out which things contributed to what. But this seems ridiculous, especially given how these factors interact and are not necessarily separable. For example, we happen to live in a universe in which the movement of electrons can be used to transmit energy and apply it to many tasks. The story of the 20th century's economy is very much the story of how we mastered this one property of nature. Electrification doesn't just provide heat and light. It enables the transportation of water. It enables the construction of more and larger structures. It enables the creation of aluminum and plated metals. Aluminum itself allows many features of our world we take for granted, from airplanes to electronics. But does abundant aluminum lie at the feet of scientific innovation? In its infancy, for sure, but after some basic science it is widespread cheap electrification that allows us to mass-produce aluminum. How much of that economy do we credit to cutting-edge science? It's not easily disentangled from the popular politics that allowed the mass construction of hydroelectric dams throughout the U.S. and other countries. There is a possible future in which, for centuries, growth slows to something above Renaissance levels but well below mid-20th-century levels. In such a future we might look back at this period as the time we mastered the single most useful physical property of our universe, and so of course it was an era of unprecedented growth.
I'm not saying they're wrong. But every time someone from Bill Gates to the NBER talks about this issue of growth - whether it has slowed, why it has slowed, how it can be increased - I just get this feeling that people are failing to realize what a unique time we live in, how unique the 150 years preceding it really were, and how little we know about why it happened.
Economists, policymakers, and scientists studying how to improve science output have themselves been hamstrung for decades because economists have to measure something, and for so long it's been nothing but bibliometrics.
Until we can come up with a way to measure the other outputs of science productivity, we're stuck with this citation-based machine that has all of this institutional and cultural inertia behind it. Which is why I've come to believe this kind of change or new introduction of value isn't going to come from within academia. E.g. novelty is in some fields directly at odds with what makes you a "productive" professional scientist.
Last year I was in the NBER's Science of Science Funding working paper session, and most of the datasets discussed are still heavily focused on patents, citations, and bibliometrics (https://projects.nber.org/drupal/SOSF/data).
I think the answer is that trust should be part of the equation.
"If you can't measure it, you can't manage it" is a description of how to keep track of a low-trust system. Scientists are generally conscientious people, and bean counting is demoralizing.
If there is some organizational solution, I think it should be to keep organization size small enough that it can be governed by personal relationships and trust.
Otherwise social trust takes decades to build up, and I don't think there is a quick fix.
[1] https://github.com/btrettel/pipe-jet-breakup-data
My experience in research comes from starting in very fundamental fluid mechanics (vortex dynamics and instabilities), moving into biomedical (fundamental and applied) and turbulence (fundamental) and now working in applied stuff.
I think part of the problem is that the field is stagnating because the amazing work done in the last century ticked so many of the boxes. It was really a golden period, ending with Prandtl and von Karman - like the end of the ultimate share house. A lot of the tree had been stripped bare by the 70s, and most of the advancements since have been refinements or application-specific. Obviously computational techniques have exploded, experimental methods have improved, and there are still new discoveries. But in terms of impact, outside of microfluidics and the never-ending grant gift of the turbulence closure model, we are entering a dry season where grants are very much application-specific and fundamental physics is simply not rewarded. We used to have a bet on what year the fluid mechanics chair would no longer be a thing at universities. It doesn't help that the fluids community is not super tight-knit (lots of beef!)
I also 100% agree with you on papers vs doing a good job - but that is science in general nowadays. It will always be a tiny percentage that actually progress the field, a bunch that try and fail, as research is wont to do, and a majority that just try to stay relevant by pumping out rubbish.
Chemistry brought us:
- the Haber-Bosch process, artificial nitrogen fixation that revolutionized food production
- the Bessemer process, modern steel production, necessary not just for modern buildings but also engines, modern guns, cars/boats/planes, modern supply chain via shipping and trucks, industrial factory and farm equipment, etc
- modern plastics. Seriously, look around your room, wherever you are. How many things don't have some amount of plastic in them? Less than half, probably?
- petroleum extraction and processing, which powers those steel engines, powers most of our electricity, and is the raw materials for plastic
- modern explosives, especially smokeless gunpowder (did you know that people totally built Gatling-style rotary machine guns in the Victorian Era? They were useless because black powder creates too much smoke for you to be able to aim a black-powder-powered machine gun)
Biology brought us germ theory and modern medicine, and of course these fields are all interrelated because engines required thermodynamics and electricity is mostly physics, but none of that required relativity or quantum mechanics.
Quantum mechanics is the foundation for the physical chemistry subfield of chemistry, is crucial to modern silicon integrated circuit manufacturing, and of course—the atomic age. But none of those have had the impact on society as oil alone, for example.
As mentioned, I love physics, but the fact is, details about the fundamental laws of nature just did not have the impact on society that higher-level, less fundamental scientific advancements have had. The Wright brothers created aviation without any quantitative understanding of aerodynamics; and 100% of the societally impactful advancements in aerodynamics since then have had no relation to relativity or quantum mechanics.
Seems like a pretty biased view of recent progress, to be honest. One could come up with similar lists from any number of other areas (Physics - Nuclear Energy, Biology - DNA & biotech, Computer Science - The Internet, etc.)
Electronics: The memristor. Though it seems that the HP breakthrough was a bit premature, the promise of a solid-state memristor would allow for very energy efficient computation. Think top-of-the-line modern GPUs that can run off a small solar cell or watch battery. The actual physics of such circuits may have some fun things hidden in them. I think it'll really change how we use computers.
Optics: As our manufacturing revolution reaches out of semi-conductors and into more difficult materials, we're seeing cheap and interesting optics happen. I'm not sure where it's going, but grads are 'playing' more in the lab with cheaper stuff, allowing for kismet to happen faster. Especially as bio becomes 'thirstier' for optics, as there is a LOT of money there from disease research.
Though these aren't re-writing the fundamental laws, they are looking to have real impact on human life, and not in ways that are just making things more efficient (though there is a lot of that too).
2. You seem to be framing all possible STEM-based progress from the perspective of progress within the physics community. And this comment being at the top (currently) really diverts so many other possible enriching discussions around this topic. As another comment pointed out, progress in chemistry during the 20th century contributed effectively as much to human progress as physics, even though it goes mostly unnoticed by a larger audience.
3. Your argument only mentions how citations have allowed for better filtering of true negatives (papers/authors correctly ignored), but it does not explain away overly cited papers/authors, which are essentially false positives with respect to making progress.
4. It does not take extraordinary logic to find flaws in any system, so I am not sure why so many from an academic background (although not all) adopt a more defensive stance towards the existing system as if it was provably a global optimum. Would you mind engaging in conversation around this?
While I'm not knzhou, knzhou recognizes this:
https://news.ycombinator.com/item?id=22659861
> It does not take extraordinary logic to find flaws in any system, so I am not sure why so many from an academic background (although not all) adopt a more defensive stance towards the existing system as if it was provably a global optimum. Would you mind engaging in conversation around this?
I have seen similar defenses of various things in academia over the years. It's very easy for someone who is successful in a particular system to think that system must be generally good. I can recall going to a talk nominally about writing good grant proposals where the speaker would add random comments about how the current funding system is the most effective known to man, etc. The speaker was a tenured professor. They're going to be inclined to think that whatever system they succeed in must be fundamentally good. After all, it recognized their brilliance!
Similarly, knzhou is a graduate student at Stanford on a NSF fellowship. He seems like a very smart guy. But his experiences are not representative of science as a whole. It seems obvious that his opinions are going to differ from someone like me. I was rejected by Stanford, MIT, and Caltech and also rejected from every graduate fellowship I applied to. (Don't read too much into the schools: Later I decided that those schools would not have been a good fit for me, so I'm glad to have been rejected from them.) I think I do good quality research that isn't recognized by the current system. To do my research I've had to take whatever scraps of funding were available or work as a TA. This doesn't strike me as optimal. People like knzhou haven't had these experiences. This is triply true because I've been in grad school for about 9 years now, but knzhou has only been in grad school for about 3. Maybe in a couple of years, knzhou's opinions will sour? After 3 years I didn't have the same opinions I do now.
I think nowadays there is much more awareness about what researchers are doing, and they will be held accountable, not just by the people who fund them, but also by the general public (Why are we funding this research about levitating toads?!?!). Shielding the researchers from such outrage and building acceptance for (seemingly) useless research will be just as important as the article's suggested new metrics for novelty.
Many of the scientific innovations that pushed the 20th century forward were in fact purposefully done. The von Neumann architecture was not invented for fun; it was invented to aim artillery and design nukes.
Similarly, much of the tech infrastructure in Silicon Valley descends directly from radio engineers coming out of WWII theater.
Bardeen, Brattain, and Shockley didn't invent the solid-state transistor because it would be interesting; they invented it because it would allow miniaturization of vacuum tube computers.
The laser was invented for telecommunications purposes. There is now an entire field of "quantum electronics" describing the theory behind them.
Even going back further, the invention of the steam engine preceded the development of thermodynamics, which initially sought to describe the limitations of these engines. If "science leads technology", then one would expect steam engines to have been rationally designed from the results of thermodynamics. The opposite is true.
I think there are enough examples of scientific fields emerging from technological innovation that "uselessness" should not be considered correlated with innovation.
Lasers became possible because people like Fresnel studied the properties of a highly impractical phenomenon, coherent light, while other people discovered another short-lived curiosity, population inversion of energy levels.
Many such works had to be done decades before any engineering applications, or a prospect thereof.
Science is when you study the literally unknown, including no known practical applications.
When you study small pockets of unknown in a generally understood and practically fertile field, it's engineering.
I was going to cite lots of "useless" stuff that suddenly became really important when some "blocking technology" got removed, but "useless" is the wrong problem and phrasing.
The problem really is: "How do you compensate someone who tackles a big problem but winds up making no progress or even being wrong?"
Engineering always has a practical goal, and for that field you're right.
https://www.youtube.com/watch?v=y2I4E_UINRo
But was the result useless? Hardly. To this day he gets contacted by scientists from time to time trying to do the same thing and he has to tell them why their plan won’t work.
Preventing people from going down dead ends is valuable, like knowing how to look for the solution to an obscure software error. Or, to paraphrase Edison, it’s knowing ahead of time how not to make a lightbulb.
Not that this is a new problem (consider Dickens’ Hard Times) but I have been shocked how my 1970s/80s education (mostly broad, fun stuff with the only “skills” being mathematics and very concrete things like operating a car or camera or making a nutritious meal) contrasts with that offered to my kid or even worse to my gf’s kids in the Palo Alto schools.
This especially astonishes me as I consider one of the best things in the US is its non-specialized approach to undergraduate education.
Edit: typo
this sounds like the science equivalent of how search engines lead to the creation of content farms. citation scores are to spam science as pagerank is to wikihow?
Imagine going through life with a high-tech spam filter from the future that can filter out wikihow and boring science. Impossible to build, I suspect, but that would be the life.
Or a science equivalent to what money and market competition do with all human ventures. You need to do more, faster, than your competitors to progress, or else you'll fall into obscurity. "First to publish" is isomorphic to "first to market". If you treat research as a videogame, citations fit perfectly as an in-game currency.
That is to say, it's an example of a general problem of optimizing short-term metrics, which are a decent proxy of the actual goal up to a point.
Science could definitely use a spam filter. Journals unfortunately do a poor way of being one; they're thoroughly gamed, the same way Google is via SEO.
http://ilpubs.stanford.edu:8090/422/1/1999-66.pdf
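The analogy is fairly direct: a mutual-citation ring inflates a citation-based score the same way a link farm games PageRank. Here is a minimal power-iteration sketch; the graph, node names, and numbers are entirely invented for illustration and are not taken from the linked paper:

```python
# Toy PageRank on a tiny "citation graph": one genuinely cited paper
# versus a three-paper ring that only cites itself.

def pagerank(links, damping=0.85, iters=100):
    """links: dict mapping each node to the list of nodes it cites."""
    nodes = list(links)
    n = len(nodes)
    rank = {v: 1.0 / n for v in nodes}
    for _ in range(iters):
        new = {v: (1.0 - damping) / n for v in nodes}
        for v, outs in links.items():
            if not outs:
                # Dangling node: spread its rank uniformly over all nodes.
                for u in nodes:
                    new[u] += damping * rank[v] / n
            else:
                for u in outs:
                    new[u] += damping * rank[v] / len(outs)
        rank = new
    return rank

graph = {
    "classic": [],            # cited by independent papers, cites nothing
    "indep1": ["classic"],
    "indep2": ["classic"],
    "ringA": ["ringB"],       # the ring only cites its own members
    "ringB": ["ringC"],
    "ringC": ["ringA"],
}
scores = pagerank(graph)
```

Under these made-up numbers, the ring members end up outranking even the genuinely cited paper, despite receiving no outside endorsement, because they keep recycling rank among themselves. That is the gaming mechanism being alluded to.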
I would love a browser addon that auto-filters out quora, wikihow, w3schools, techcrunch, etc...
None of those garbage sites are what I'm looking for, ever. I think that, amusingly, filtering out the top 5% of best SEO optimized sites would make results much better.
reevaluating our social attitudes towards 'trust in experts' will be an aftermath of the covid crisis, and it may bleed into search & social media as well -- is there value in 'peer-reviewed everything'?
(for some definition of peer)
the distracting stuff is ultimately low value
we've cranked up the gain on spam -- spam is our culture's moai
Happily it's available now and it's called intuition. If something is boring then avoid; if something is exciting then pursue.
Problems being that it's purely anecdotal and you have to know and trust yourself. If you're too attracted to prestige, money or job security then it's going to return a distorted signal. Which is why organised science is now bureaucratic and slow despite the fact that there are more scientists than ever before.
For example:
"Novelty in science – real necessity or distracting obsession?" https://phys.org/news/2018-01-novelty-science-real-necessity...
"Facts Are More Important Than Novelty: Replication in the Education Sciences" https://journals.sagepub.com/stoken/rbtfl/w5mrNxPVD8zSg/full
In the text: "Funding bodies and academic journals that value “novelty” over replication deserve blame too." https://theconversation.com/science-is-in-a-reproducibility-...
Etc.
I haven't made it through the entire 43-page paper yet, but a quick search for "replic" and "repro" suggests they don't address this point at all.
Someone could spend days of time writing a well researched HN comment that was exceptionally informative and accurate. There are occasionally comments that represent an hour or two of work... but multiday-effort comments are non-existent: the incentives of the venue don't reward that effort. And if you do take the time, the discussion will have moved on before you get it published. If you published it anyway, it would likely be lost in a sea of other low effort comments, or, if it were acknowledged, not rewarded much more than comments that merely took an hour.
You don't just see fewer multiday comments, you see essentially none at all. Nothing technical about HN or Reddit prevents people from writing some epic work of commentary or research in a comment, and there would often be value from such works existing...
The same pattern exists in academic publishing but with the effort levels shifted up one or two orders of magnitude: the maximum moves from an hour to (say) weeks (exact threshold varies by field). Works taking more effort than some field specific cutoff are extremely seldom done. Why make one 10x effort paper when you will serve your interests much better and with lower risks making 10 1x effort papers?
This would be fine if all of science in that field could be done in the window of efforts allowed by those incentives, but that isn't the case... especially since a lot of the low hanging fruit-- results that can be obtained below the cutoff-- is already picked.
So this is part of why you see things like immunologists pointing out that there has been relatively little academic investigation of virus seasonality -- though it's an apparent, interesting, and seemingly important phenomenon. Studying it in any depth would require experiments spanning years, and likely dealing with human subjects, on top of the possibility that your ideas don't turn up anything new... a lot of risk for someone who actually needs to get things published.
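The "ten 1x papers vs one 10x paper" tradeoff above can be made concrete with a toy model. Everything here is an invented assumption for illustration only: suppose a paper's expected citations grow sublinearly with the effort behind it (square root, say), and the one big risky project may fail outright:

```python
import math

def payoff(num_papers, effort_per_paper, risk_of_failure=0.0):
    """Return (paper count, expected citations) under two toy assumptions:
    citations scale with sqrt(effort), and each project may simply fail."""
    expected_citations = (num_papers
                          * math.sqrt(effort_per_paper)
                          * (1.0 - risk_of_failure))
    return num_papers, expected_citations

# Same total effort budget of 10 units, split two ways:
small = payoff(num_papers=10, effort_per_paper=1)                  # salami slicing
big = payoff(num_papers=1, effort_per_paper=10, risk_of_failure=0.5)  # one risky bet
```

Under these made-up numbers the salami-slicing strategy wins on both counts a committee is likely to look at -- paper count and expected citations -- and with far less variance, which is exactly the incentive problem the comment describes.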
Instead pointing at the current incentive structure (that everyone already agrees is broken) makes a lot more sense.
So let's start a conversation about a better incentive structure. What do we want to incentivize? How do we do that? How do we fix science?
I'd place these at $60k-$100k: enough to live on, but not enough to go into it for anything other than love-of-science. A university president might hit $200k.
I'd hire many more academics, and give them much more freedom. Not as much publish-or-perish, and more intellectual exploration. Anyone qualified to do research should have the option to do so in their field of interest.
That's kind of how academia used to work before massive endowments.
I might also do something about tenure. It seems like an obsolete idea as structured right now; not a horrible idea, but obsolete in a lot of ways. It forces people to put in massive efforts early-career, which, for example, doesn't line up with biological clocks, and it creates many other bizarre incentives. I don't mind someone gaining tenure if they've done fantastic work, mind you, but it shouldn't be a 7-year clock. For example, perhaps you're a professor with a 5-year renewable contract. If you do fantastic work, you become a professor-with-tenure, whether that's 4 years in or 40.
1. Salaries should be -higher-, not lower! Why would any self-respecting smart person want to throw their intelligence away for a pittance? You want them to be smart with science and stupid with money, is it? Live like Diogenes?
2. There shouldn't be more researchers, there should be fewer. My decade-long experience with academia has been that too many people who aren't exactly scientifically smart (more smart at socializing and grant writing) are too established. We need to recruit the type of minds that are truly capable of innovation and make sure they don't have to compete with bureaucrats who are there simply because they chose biology in undergrad and kept making the default career choice every time they were presented with one. These new people should also be REALLY smart, not just marginally better than the public. Which means there can't be too many of them anyway. They should then be given resources in a way that doesn't inherently convert the entire system into a Ponzi scheme (like the PhD system now does). In the grand scheme of things they can be given fewer resources if they are given structures to manage things well.
3. I'd argue that the tenure system worked quite well despite its flaws. If anything, tenure doesn't give the same guarantees it gave half a century ago, so people are still incentivized to continue running the rat race. If you still want to hold them accountable, maybe a much longer cycle might be okay, perhaps 15 years? A 5-year contract sounds like hell for most fields. Some of the most interesting work I did took more than that time to bear fruit, and that's not uncommon.
Only thing I'll agree with you is that we should make sure that whatever new process is conceived must try to correct perverse incentives for women, given how the current system plays against some common life choices they might want to make (having kids).
Why shouldn't universities be able to compete for labor with corporations?
>A university president might hit $200k.
University presidents aren't scientists, nobody would take that job "for the love of science."
In a similar fashion, lowering the incomes of classical musicians won't make classical music any more creative.
From what I understand and have heard, a lot of advancements in math and physics in particular have come from people that are quite young, often in their 20s. That's apparently when brains are at their peak; it seems to me that's the period in which massive efforts are most likely to pay off big.
Einstein was 26 when he published his annus mirabilis papers. I believe Newton was in his mid-20s when he began developing calculus.
When I read the title I thought the paper was about business ideas and similar things. Having quickly read the paper, citations are a likely reason for what the authors call "me too" science.
Maybe it's time to be more bold.
People want to believe that blue-sky research can be predicted, managed, and optimized, no matter the staggering mountains of evidence to the contrary.
* In order to get an academic position, you need to have letters from a research community.
* In order to get citations, you need your papers to be used by a research community.
This gives a strong advantage to work done within existing lines of research. Virtually anything outside of one of the existing "academic cottage industries" or "mutual adoration societies" is at a huge disadvantage.
You end up with research communities going down long rabbit holes with tunnel vision, and big broad areas of potential research are never explored.
In what field?
1) It assumes we've been good at measuring growth, which to me is dubious. Our current system counts the production, deployment, and detonation of a bomb all as positive production, while not counting unpaid domestic work. I know economists don't like to put a value on different types of services and goods because they feel they'd be putting a finger on the scale. But they are doing just as much by using a blind metric which ends up counting some work and not others.
2) It never cost-adjusts growth. For example, all that wonderful growth in the midst of the 20th century incurred significant externalities. The system we have now tries much harder to make producers internalize their externalities. This curbs growth, which is not necessarily a bad thing.
3) It assumes we can atomize historic growth and single out what contributed to what. But this seems ridiculous, especially given how these factors interact and are not necessarily separable. For example, we happen to live in a universe in which the movement of electrons can be used to transmit energy and apply it to many tasks. The story of the 20th century's economy is very much the story of how we mastered this one property of nature. Electrification doesn't just provide heat and light. It enables the transportation of water. It enables the construction of more and larger structures. It enables the creation of aluminum and plated metals. Aluminum itself allows many features of our world we take for granted, from airplanes to electronics. But does abundant aluminum lie at the feet of scientific innovation? In its infancy, for sure, but after some basic science it is widespread cheap electrification which allows us to mass-produce aluminum. How much of that economy do we attribute to cutting-edge science? It's not easily disentangled from the popular politics which allowed the mass construction of hydroelectric dams throughout the U.S. and other countries. There is a possible future in which, for centuries, growth has slowed to something above Renaissance levels but well below mid-20th-century levels. In such a future we might look back at this period as the time we mastered the single most useful physical property of our universe, and so of course it was an era of unprecedented growth.
I'm not saying they're wrong. But every time someone from Bill Gates to the NBER talks about this issue of growth, whether it has slowed, why it has slowed, how it can be increased, I just get this feeling that people are failing to realize what a unique time we live in, how unique the 150 years preceding it really were, and how little we know about why it happened.