To understand the fundamental problem with this paper and the "CD" measure of "disruptiveness", one need only look at Figure 1a, where three Nobel Prize-winning papers and three patents are ranked on the "CD5" scale. Apparently there are discoveries more disruptive than Watson and Crick's discovery of the double-stranded structure of DNA (CD5 = 0.62 on a scale of -1.0 to 1.0), while Baltimore's discovery of reverse transcription scores -0.55. One might argue the ranks of these two discoveries should be reversed, since DNA had already been shown to be the genetic material, but no one imagined that RNA could be converted to DNA. Likewise, the Wigler patent for transformation of eukaryotic cells (CD5 = 0.70) has certainly had less effect on the world economy than Monsanto's patent on glyphosate-resistant plants (CD5 = -0.85).
It's easy to make up a measure and then develop a story that explains why that measure is useful. Was Newton's theory of gravity disruptive, or consolidating? I'm thinking consolidating, since it allowed a lot of other things to make sense. Likewise, perhaps quantum mechanics was disruptive -- it sounds disruptive -- but it also consolidated a lot of confusing observations.
(And let's not even mention the problem that when 1,000 times as many papers are published in 2010 as in 1940, we might not expect 1,000 times as many disruptions.)
This paper has a measure that sounds useful, and is certainly good for headlines, but it is very unclear that it provides any insight into the progress of science.
Boring title: "Scientific authors follow more citations when reading modern papers".
When I read a modern paper, I skim a lot more of the references. Getting to a cited paper is often just two clicks: a link from the citation to the bibliography, and from there a link to arXiv (or some other public repository). For older papers the bibliography might not link to arXiv, or might not link to a digitized paper at all. Those older papers are more time-consuming to use as a reference. Scientists aren't lazy, but they also don't have infinite time.
Put another way: papers form a graph, and this result suggests that older nodes are traversed less frequently. I think the metric they propose is useful, but we could rephrase it all as a measure of how useful a paper is as a node (where CD = -1 is most useful, CD = 1 is least useful).
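For concreteness, the CD index is itself a computation on that graph. The sketch below is my reading of the Funk & Owen-Smith style definition the paper builds on, not the authors' actual code (the function name and data layout are mine): for a focal paper, look at later papers in the window and count those citing only the focal paper, those citing the focal paper plus at least one of its references, and those citing only its references.

```python
def cd_index(focal, references, citations):
    """Sketch of the CD index for one focal paper.

    focal:      id of the focal paper
    references: set of paper ids the focal paper cites
    citations:  dict mapping each later paper id -> set of ids it cites
                (for CD5, restricted to papers within 5 years of the focal one)
    """
    n_i = n_j = n_k = 0
    for paper, cited in citations.items():
        cites_focal = focal in cited
        cites_refs = bool(cited & references)
        if cites_focal and not cites_refs:
            n_i += 1   # cites the focal paper but none of its references: "disruptive"
        elif cites_focal and cites_refs:
            n_j += 1   # cites both the focal paper and its predecessors: "consolidating"
        elif cites_refs:
            n_k += 1   # cites only the predecessors, bypassing the focal paper
    total = n_i + n_j + n_k
    return (n_i - n_j) / total if total else 0.0
```

Under this convention a paper whose later citers all ignore its references scores +1 ("disruptive"), while one whose citers keep citing its references alongside it trends toward -1 ("consolidating") -- which is exactly the graph-traversal framing above, just with the paper's sign convention.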
In short, this result might just mean paper discovery is getting easier. That's a good thing. We can wring our hands endlessly about how there's less low hanging fruit (there is) or about how academia is rotting away (as though it was ever different), but those conclusions are big jumps from what this paper says.
You misunderstand why W&C was important (although, given your username, I assume you've already sat through the lectures): in elucidating (not discovering) the structure of DNA, they provided a biophysical mechanism that explained heredity (specifically, that it was an antiparallel double helix with complementary bases, which immediately suggests a mechanism for replication).
That recognition immediately led to a flurry of follow-on work confirming the mechanism and birthing molecular biology as we know it, which opened up vast new areas of science.
Yes, I know why it was disruptive. The question (for this paper) is: what other biological papers were more disruptive? (Since, based on the range of the scale, they barely made the first quintile.)
I think the more important question is: how do we validate the usefulness of the CD5 metric? And why is it better to be "disruptive" than "consolidating"?
Based on the examples shown (and others that I can imagine), it is unclear that it measures anything related to scientific significance.
The trend over time could equally be capturing changes in referencing practice: either really important, influential papers/inventions in the past didn't reference other important, influential papers/inventions nearly as often as equally novel work does today, or modern referencing is much more exhaustive about citing minor and tangentially relevant papers/inventions. (There is, of course, also more significant prior art in journals and patent indices to actually cite in later years.) They carry out some robustness checks, but I'm not sure any of them really dent that hypothesis. It's also a conclusion based on proportions (i.e. the effect mostly comes from there being a lot more "non-disruptive" papers published rather than fewer "disruptive" papers), and there's actually a rising trend in the raw number of "most disruptive" academic papers from the 1950s through to the 1990s, at which point the numbers drop so sharply it looks like a discontinuity in the data (though it would probably coincide with lots of journal content becoming easily accessible online...).
When I was in grad school I did measurements of the magnetic anisotropy of amorphous compounds. I found an interesting link between the structure and the magnetic properties, which is surprising: since the material is amorphous, its properties should be isotropic, yet these samples had a preferred magnetization axis. During the discussions of these results with my advisor I was told in very clear terms that he would not let me publish results that contradicted a paper of his from the 1980s.
I didn't push back, because I had no power and I just wanted to get the fuck out of there and start making money after being scammed out of 5 years of my life where I poured a lot of effort and got shit pay. I lost respect for my advisor and I was stunned by how un-scientific his response was when it came to correcting his 1980s results. My measurements were more accurate and were strongly supported by structural measurements he never did and couldn't have done back then. But he was more interested in protecting the "legacy" of his paper with 13 citations.
So petty and small. I dropped those efforts and managed to get out while completely omitting those results; I never even mentioned them in my thesis.
Anyway, my point is that I'm not surprised that scientific results are lame, vanilla, uninspired. When the committees that decide on what gets funded are all 65+ dinosaurs stuck in their ways, very few disruptive ideas will be funded, because you're disrupting the very research that got the committee members their clout and recognition.
Change used to come a funeral at a time, but now that the next generation is forced to build their careers on the mistakes of their advisors, mistakes can easily outlive their creators.
I see this kind of stuff all the time and I'm convinced that Academia is corrupt. While working against a Grad Student doesn't seem particularly evil, rather just a bit backstabby/political, I suggest that it is 'really bad' because it contravenes the basic organizing principle of science itself.
Kind of like a Judge ruling one way or another for personal reasons, or ruling with a heavy conflict of interest.
There need to be much stronger ethical principles around this, and an advisor hinting at obfuscating work for such reasons should be up for review.
That timeline correlates well with the timeline of the corporatization of academic research, which got started in the late 1970s, boomed in the 1980s and has risen steadily ever since. The emphasis in applied science is now on patents and return on investment, aka safe bets. This is certainly the direction that much of NIH funding, as well as DOE funding, has taken. Part of this is that entrenched industries (natural gas and coal power, say) don't want federal funding going to disruptive competition (monocrystalline silicon PV development, say). This is reflected in how federal funds are distributed as well as in what research programs are supported at the university level. For example, count the number of renewable energy departments in the US academic system: minuscule funding -> no programs.
It can't all be blamed on corporatization, however. The control of funding agencies at the federal level by vested interests seems to be a major problem, i.e. innovative lines of research that threaten the status quo could threaten those cabals that control fund disbursement, so such efforts don't get funded. The stagnation of Alzheimer's research and the focus on the amyloid hypothesis seems to fall into this category. This kind of bureaucratic ossification is not new; the classic example is how Trofim Lysenko in the Soviet Union controlled the direction of agricultural research for about three decades, much to the detriment of the understanding of plant genetics and crop development research in the USSR over that period.
So it can be attributed to the rise of monopolistic corporatization on university campuses on one hand, and the growth of entrenched bureaucracies at the federal funding level on the other. Neither group wants the rug pulled out from under them by 'disruptive innovation'. There are many historical antecedents for this kind of situation, incidentally.
> The control of funding agencies at the federal level by vested interests seems to be a major problem
This is a fundamental problem with the nature of science and the nation state: modern science is constituted by the nation state, not merely imposed upon it.
Given that Tech Transfer offices don't make much money at all, and patents are mostly worthless, I'm doubtful of this theory. There simply is no 'return on investment' on most of the work.
I wonder if this has to do with major vs minor breakthroughs.
Take physics. As I read it, modern fundamental physics is mostly some combination of general relativity and quantum mechanics, which are both early 20th century inventions.
They weren't necessarily works of genius in some particular in-depth analysis; rather, they were genius new concepts that created a whole new blank canvas to fill in.
What did the field look like just before them? I always forget the details, but was it Lord Kelvin who said physics just needs to figure out the ultraviolet catastrophe and it's done? I wonder if it looked similarly uninnovative. The canvas they had was full, and they needed a new canvas for true innovation.
Now the thing is, I think true disruption, like GR and quantum physics, is not limited by the number of postdocs, but by truly genius insights. Einstein wasn't even working as an academic when he came up with Special Relativity. So the number of papers written isn't a proxy for pace of revolution.
Not to mention, since number of published papers has become a target, it has become useless as a measure of progress.
> but was it lord Kelvin who said, physics just needs to figure out the ultraviolet catastrophe and it's done?
This is apparently a common misconception regarding a lecture given by Lord Kelvin in 1900 concerning "two clouds" in physics.[1] In fact, Max Planck solved the ultraviolet catastrophe, also in 1900, by assuming electromagnetic radiation can only be emitted or absorbed in discrete packets of energy called quanta. Albert Einstein also solved the ultraviolet catastrophe in a paper he published in 1905, for which he was awarded the Nobel Prize in 1922, by, as always, standing on the shoulders of giants, hypothesizing that Planck's quanta were actual particles. These particles are known today by a word coined by physical chemist Gilbert Newton Lewis in a letter to Nature in 1926, namely, photons.
[1] https://arxiv.org/pdf/2106.16033.pdf
It also seems unlikely that a patent clerk, even one with a PhD in physics, would be allowed to publish in a physics journal today. The gatekeeping has gotten a lot more strict.
This article mentions Einstein being offended that his paper was submitted to peer review by Physical Review, and submitting it elsewhere instead:
"We (Mr. Rosen and I) had sent you our manuscript for publication and had not authorised you to show it to specialists before it is printed. I see no reason to address the – in any case erroneous – comments of your anonymous expert. On the basis of this incident I prefer to publish the paper elsewhere."
I just recently ended up there via this blog (https://experimentalhistory.substack.com/p/the-rise-and-fall...), which argues, not unconvincingly IMO, that peer review is a failed experiment that tries to stop bad science at the price of also stopping brilliant, unconventional science, and that we would profit more from letting the scientific public do the sorting instead of hand-picked specialists. The author phrases it as science being as strong as its strongest links, not as weak as its weakest one, and says we should do everything to get the brilliant crazy ideas.
I disagree, at least in math there have been many instances recently where laymen (with a PhD) found some new and exciting results[0] and got to publish them, and I don't see why it would be any other way in physics.
[0] Most recent example: https://www.quantamagazine.org/long-out-of-math-an-ai-progra...
I agree. I think we're discounting genius insight as the primary driver of innovation and groundbreaking science, and we can't expect genius insight to occur in some predictable time frame, or even within a lifespan.
I think perhaps we've also partially lost the meaning of groundbreaking research, given the constant news articles on "ground breaking science" and the constant overinflation and exaggeration of one's papers and achievements.
> we can't expect genius insight to occur in some predictable time frame
Yes, and we can’t expect it to occur in a socially confined over-structured environment either. Academia has changed a lot since then, with peer review hell, publishing pressure, etc. I mean, are we really pretending to be surprised that a system of incrementalist incentives produces incrementalist results?
A lot of the obvious questions on the table have been answered; even without an answer, we at least have the question. Today, true groundbreaking innovation needs someone to identify the new questions. Science is like a bubble, and you need someone to find a massive new one and merge it into our own. Right now we're not opening wide doors, just refining and adding almost imperceptibly to an already huge bubble. We're grinding in the game because we can't find the boss on the map, or can't beat it yet.
Besides the current structure of research and its known weaknesses (misaligned incentives, funding issues, etc.), science in general today has run out of low-hanging fruit. The last unexplored or unanswered riddles may need prerequisites that are outside the current boundaries, so someone needs to open the door to a vast new bubble that has more relatively low-hanging fruit.
Also, humans probably (don't quote me on that :) ) can't come up with any new questions or answers unless they are already at the edge of that bubble. And each of us sees and understands a smaller patch of that surface as it expands. Stumbling onto new things is exponentially harder.
This is what those revolutionary discoveries did for us in the past: they opened the door to a new world where the hard answers were suddenly a bit easier, where they made sense.
We get around this in part by not understanding (or at least remembering our understanding of) many of the basics. Such as this guy who looked things up in a textbook: https://news.ycombinator.com/item?id=34236889
A big part is that science has become very “sanitized”. Work on the wrong issues and you won’t get funding or will even get ridiculed out of a career.
It’s not even just going against headline consensus issues that will get you in trouble. Any findings that overturn what a large group of scientists have spent their careers working on won’t be well received.
More than that, though. Modern scientific research is a highly hierarchical and managed enterprise. It's led, or at least its funding is controlled, by a risk-averse managerial class who can only think in terms of returns on investment.
9/11 (believe it or not) had a lot to do with that in IT/computer engineering research (and likely other fields) with government funding. After the attacks, the US Federal government moved to stressing development more than blue sky research. That drew all the funding away from fundamental science to projects that could be made into a product in 3-5 years and focused on addressing some issue of the War on Terror.
Alternative theory: we've pretty much nailed the basics and it takes a PhD before people can even understand the frontier of most scientific fields. These frontiers are manifold, highly technical, and extremely boring to the layperson. Number theory is a very clear example of this: go read the Birch and Swinnerton-Dyer conjecture. Come back in a decade after you understand the Tate–Shafarevich group. (hahahaha just kidding, if you understood that you'd have proved the conjecture)
I’ve heard this too about number theory. Perhaps in that field, the paradigm shift lies in reducing incidental complexity? Or even managing it in a way that is better for humans?
> it takes a PhD before people can even understand the frontier of most scientific fields
It’s interesting that we have this implicit view that hyper-specialization is the only way to advance. I always thought so too, but why really? If you think about it, specialization is very close to incrementalism. Groundbreaking paradigm-shift stuff can often be written down on a napkin, and rarely does it require a PhD.
That said, to find treasure you have to go down a LOT of wrong paths before you (or someone else) ends up in the right one.
This was considered in the paper: "Some point to a dearth of ‘low-hanging fruit’ as the readily available productivity-enhancing innovations have already been made19,27. Others emphasize the increasing burden of knowledge; scientists and inventors require ever more training to reach the frontiers of their fields, leaving less time to push those frontiers forward18,28. "
Would you say this is a natural consequence of human emotions (e.g. ego), economic interests, powerful players that have corrupted the game, etc.? I'm genuinely curious. As someone ignorant of science and its processes in academia or markets, I cannot feel anything other than disappointment and loss of hope every time I hear this kind of thing about a field I've always considered quite rational and focused on pursuing the truth over anything else. Naive me, I guess...
It’s always been a problem but I’d say these factors make it worse:
1. It’s harder for individuals or small groups to make breakthroughs. We need larger groups and more expertise as well as expensive equipment to continue making discoveries.
More hands in the pie mean more gatekeepers and less risk-taking.
2. In general our entire society is “circling the wagons”. I’m not sure why but we’re more tribal than normal. Science isn’t immune from the trend.
This is a very natural consequence of the higher difficulty of maintaining one's career in academia, which is itself a natural consequence of the increased competition for those jobs.
> Any findings that overturn what a large group of scientists have spent their careers working on won’t be well received.
Hasn't that always been the case?
It's said science advances one grave at a time.
I suspect that there isn't a single cause but a cluster of them.
One possible issue is that high-level science needs more and more energy (like the LHC) and more and more intelligence to crack (most science prizes now go to teams).
I blame the grant funding process. What gets published is what gets funded when written up as a grant proposal. What gets written up as a grant proposal is what is called for by grant funding agencies for their own research initiatives. If you want disruptive science, fund it then. That's all everyone wants to do in science, but the bills need to be paid somehow, so you play the game the funding agencies want you to play.
Plus, there is a large degree of "orthodoxy" in these grants, even though most of them proclaim that they are "high-risk, high-gain". In the end, it seems that if you are too far toward the "risk" end of the scale, you will not get the funding, since committees prefer "a little more of the same, please" to wilder ideas. I realise that such ideas are of course hard to classify in terms of their feasibility, but that's the main purpose of science, right? Venturing boldly into the unknown and all that; sometimes you come back with the treasure, sometimes you come back with lessons learned.
The current small-independent-grant culture is flat. This means that all projects are small and there is almost no hierarchical 'higher level' science. PIs are happy to publish minor descriptions of the clouds, and journals are happy to publish them, for lack of something more impressive. So much value is lost in creating processes and bureaucracy.
The focus is clearly on "getting the grant". And the grant acts as an entity in itself, as it becomes a line in the CV of scientists, which will help them "get the next grant". The outcome of the grant thus becomes irrelevant. Maybe competition is missing?
It almost seems as if a new model would take the world by storm, but one doesn't seem to arise. Even private research often falls into the same pitfalls, like how DeepMind keeps seeking to publish proprietary research in Nature instead of creating an open ecosystem that will drive reinforcement-learning science forward.
> The focus is clearly on "getting the grant". And the grant acts as an entity in itself, as it becomes a line in the CV of scientists, which will help them "get the next grant". The outcome of the grant becomes thus irrelevant. Maybe competition is missing?
I think there's plenty enough of competition in academia. And competition alone doesn't help, because the metrics over which one loses or wins the race are not correlated with short or long-term worthiness. And while we can describe, even if somewhat vaguely, what makes worthy science, there's no way I know of to have a stable system in which that metric drives funding.
> Deepmind keeps seeking to publish proprietary research in Nature, instead of creating an open ecosystem that will drive reinforcement-learning science forward
This research at least has some feedback: Google funds it because it expects to make money off using its results in practice, so the research has to be at least somewhat correlated with reality. Unfortunately, this structure also means that "open ecosystem" isn't pursued.
I think that the expectation that "ground-breaking" discoveries in science should follow some linear or predictable timetable is not reasonable. Is the time period between "ground-breaking" discoveries supposed to be every 5 years, every 10 years?
The volume of papers has increased significantly, and publish-or-perish kinda stinks, but it's not just an issue of funding: some (many?) researchers exaggerate the importance and difficulty of their research to receive an "I am smart" badge on social media. Although, despite this increase in the volume of crap-tier papers, the article seems to think it's not correlated:
"Declines in disruptiveness are also not attributable to changing publication, citation or authorship practices..."
What you're pointing to is far more insidious than I think is usually recognized.
People can point to incentive structures like grants etc, and to the implications in terms of the meaning of metrics etc but the real harm is in how it shifts mental focus and attention collectively.
It's no longer OK to let minor things, or things you think are obvious, go in order to focus on bigger-picture issues that might be "riskier" but non-obvious. The way that plays out is also incredibly dependent on your institutional social environment, etc.
So much of this is difficult to quantify in ways that can be easily studied. I loved this paper, for example, but it's easy for me to think of papers in my own field that would look "disruptive" in terms of citation networks but are really the same content. There are these weird shifts I've seen happen in reading old literature and during my career, where big changes in who is being cited happen for quixotic social reasons. Usually it's basically politics or ignorance of big parts of a field, and who gets introduced to an idea by a particular author or group. Lots of chaotic citation patterns and feedback loops.
Add in shifts in what's motivating grant and paper topics and it really damages authentic scientific discourse. So much is driven by looking like a brilliant scientist and not by scientific progress. People aren't dumb either and they are very very good at looking like brilliant scientists.
Higgs, who hypothesized the Higgs boson back in the 1960s, has published only 2-3 papers in the decades since. Before it was announced that he had won the Nobel Prize, his university was deciding between forcing him out or gambling on him potentially winning, which would bring more renown to the department. He has commented recently that if he were a young, freshly minted PhD today, he would not have gotten tenure with his publication record. That seems insane to me, and a bitter critique of today's incentive structures, created by a funding system that rewards incremental work over real breakthroughs.
Alan Kay wrote an essay on how large scientific breakthroughs were done [1], further discussed in [2]. The funders and the systems of funding of science are to blame for the decline; for example, they won't fund problem finding anymore.
Pace of publishing and observations have increased dramatically, as you rightly point out.
This article mentions Einstein being offended that his paper was submitted to peer review by Physical Review, and submitting it elsewhere instead:
"We (Mr. Rosen and I) had sent you our manuscript for publication and had not authorised you to show it to specialists before it is printed. I see no reason to address the – in any case erroneous – comments of your anonymous expert. On the basis of this incident I prefer to publish the paper elsewhere."
I just recently ended up there via this blog: https://experimentalhistory.substack.com/p/the-rise-and-fall...
which argues, not unconvincingly IMO, that peer review is a failed experiment which tries to stop bad science at the price of also stopping brilliant, unconventional science, where instead we would profit more from letting the scientific public do the sorting instead of hand-picked specialists. The author phrases it as science being as strong as its strongest links, not as weak as the weakest one, and we should do everything to get the brilliant crazy ideas.
Most recent example: https://www.quantamagazine.org/long-out-of-math-an-ai-progra...
I think perhaps we've also partially lost the meaning of ground breaking research given the constant news articles on "ground breaking science" and the constant overinflation and exaggeration of one's papers and achievements.
Yes, and we can’t expect it to occur in a socially confined over-structured environment either. Academia has changed a lot since then, with peer review hell, publishing pressure, etc. I mean, are we really pretending to be surprised that a system of incrementalist incentives produces incrementalist results?
Besides the current structure of research and its known weaknesses (misaligned incentives, funding issues, etc.), science in general today has run out of low-hanging fruit. The last unexplored or unanswered riddles may need prerequisites that are outside the current boundaries, so someone needs to open the door to a vast new bubble that has more relatively low-hanging fruit.
Also, humans probably (don't quote me on that :) ) can't come up with any new questions or answers unless they are already at the edge of that bubble. And each of them sees and understands a smaller patch of that surface as it expands. Stumbling onto new things is exponentially harder.
This is what those revolutionary discoveries did for us in the past: they opened the door to a new world where the hard answers were suddenly a bit easier, where they made sense.
It’s amazing upright apes can figure out the speed of light or postulate and find the Higgs Boson in the first place.
How much physics do we expect the smartest dog in the world to do? What if we 10x’d its lifespan?
At some point either:
1. There is no more physics
2. We build beings that can do physics that we can’t
3. We’re stopped dead in our tracks at the limit of our ability to comprehend things that apes weren’t built to comprehend
A bunch of blind people could theoretically construct a model of an elephant, if they work together well. https://en.wikipedia.org/wiki/Blind_men_and_an_elephant
It’s not even just going against headline consensus issues that will get you in trouble. Any findings that overturn what a large group of scientists have spent their careers working on won’t be well received.
> it takes a PhD before people can even understand the frontier of most scientific fields
It’s interesting that we have this implicit view that hyper-specialization is the only way to advance. I always thought so too, but why really? If you think about it, specialization is very close to incrementalism. Groundbreaking paradigm-shift stuff can often be written down on a napkin, and rarely does it require a PhD.
That said, to find treasure you have to go down a LOT of wrong paths before you (or someone else) end up on the right one.
It’s always been a problem but I’d say these factors make it worse:
1. It’s harder for individuals or small groups to make breakthroughs. We need larger groups, more expertise, and expensive equipment to continue making discoveries. More hands in the pie mean more gatekeepers and less risk-taking.
2. In general our entire society is “circling the wagons”. I’m not sure why but we’re more tribal than normal. Science isn’t immune from the trend.
I could go on. It’s a big topic.
How to Slow Down Scientific Progress, According to Leo Szilard
https://rootsofprogress.org/szilard-on-slowing-science
Discussed in https://news.ycombinator.com/item?id=34264436
Hasn't that always been the case? It's said science advances one grave at a time.
I suspect that there isn't a single cause but a cluster of them.
One possible issue is that high-level science needs more and more energy (like the LHC) and more and more intelligence to crack (most science prizes now go to teams).
The focus is clearly on "getting the grant". And the grant acts as an entity in itself, as it becomes a line in the CV of scientists, which will help them "get the next grant". The outcome of the grant becomes thus irrelevant. Maybe competition is missing?
It almost seems as if a new model would take the world by storm, but it doesn't seem to arise. Even private research often falls for the same pitfalls, like how Deepmind keeps seeking to publish proprietary research in Nature, instead of creating an open ecosystem that will drive reinforcement-learning science forward.
I think there's plenty enough of competition in academia. And competition alone doesn't help, because the metrics over which one loses or wins the race are not correlated with short or long-term worthiness. And while we can describe, even if somewhat vaguely, what makes worthy science, there's no way I know of to have a stable system in which that metric drives funding.
> Deepmind keeps seeking to publish proprietary research in Nature, instead of creating an open ecosystem that will drive reinforcement-learning science forward
This research at least has some feedback: Google funds it because it expects to make money off using its results in practice, so the research has to be at least somewhat correlated with reality. Unfortunately, this structure also means that "open ecosystem" isn't pursued.
The volume of papers has increased significantly, and publish-or-perish kinda stinks, but it's not just an issue of funding: some (many?) researchers publish and exaggerate the importance and difficulty of their research to receive an "I am smart" badge on social media. Although, despite this increase in the volume of crap-tier papers, the article seems to think it's not correlated:
"Declines in disruptiveness are also not attributable to changing publication, citation or authorship practices..."
People can point to incentive structures like grants, and to the implications for the meaning of metrics, but the real harm is in how it shifts mental focus and attention collectively.
It's not OK anymore to let minor things, or things you think are obvious, go in order to focus on bigger-picture issues that might be "riskier" but non-obvious. The way that plays out is also incredibly dependent on your institutional social environment.
So much of this is difficult to easily quantify in ways that can be easily studied. I loved this paper for example, but it's easy for me to think of papers in my own field that would look "disruptive" in terms of citation networks but are really the same content. There's these weird shifts I've seen happen in reading old literature and during my career, where big shifts in who is being cited will happen for quixotic social reasons. Usually it's basically politics or ignorance of big parts of a field, who will get introduced to an idea by a particular author or group. Lots of chaotic citation patterns and feedback loops.
Add in shifts in what's motivating grant and paper topics and it really damages authentic scientific discourse. So much is driven by looking like a brilliant scientist and not by scientific progress. People aren't dumb either and they are very very good at looking like brilliant scientists.