RobotToaster · 21 hours ago
It kinda skips over how large mainstream journals, with their restrictive and often arbitrary standards, have contributed to this. Most will refuse to publish replications, negative studies, or anything they deem unimportant, even if the study was conducted correctly.
CGMthrowaway · 20 hours ago
So much of this started with the rise of the peer-review journal cartel, beginning with Pergamon Press in 1951 (coincidentally founded by Ghislaine Maxwell's father). "Peer review" didn't exist before then; science papers and discussion were published openly, and scientists focused on quality, not quantity.
leoc · 19 hours ago
I'm not sure that the system was ever that near to perfection: for example, John Maddox of Nature didn't like the advent of pre-publication peer review, but that presumably had something to do with it limiting his discretion to approve and desk-reject whatever he wanted. But in any case it (like other aspects of the cozy interwar and then wartime scientific world) could surely never have survived the huge scaling-up that had already begun in the post-war era and created the pressure to switch to pre-publication peer review in the first place.
canjobear · 16 hours ago
Peer review existed before 1951 in the US at least. See for example Einstein’s reaction to negative reviews when he tried to publish in Physical Review in 1935 https://paeditorial.co.uk/post/albert-einstein-what-did-he-t...
throwaway27448 · 18 hours ago
> coincidentally founded by Ghislaine Maxwell's father

A crazy world we live in where Robert Maxwell's daughter is more notorious than he is.

DoctorOetker · 12 hours ago
100% this.

What is currently called "peer review" didn't exist back then; back then, "peer review" meant the back and forth happening in the open academic literature. Note the inevitable lack of finality in the original concept of peer review: a discussion in the scientific community could go on for hundreds of years before being finally resolved. The current concept of "peer review" is closer to delegating, to some opaque ministry of truth composed of opaquely selected experts (who often truly mean well), the job of settling matters with finality in short order.

Some measurements or experiments or questions to be settled can be very actionable and provide highly accurate results, others require much longer gathering of data to draw a clear picture.

The modern concept of "peer review" tries to sell the idea of almost immediate finality, like an economic transaction. In reality it is selling just the illusion, and creating lots of victims: the truth itself, individuals, departments, institutions, even entire fields (think of the replication crisis in psychology), along with any patients or others they treat.

john_strinlai · 20 hours ago
>Pergamon Press in 1951 (coincidentally founded by Ghislaine Maxwell's father)

perhaps a bit off-topic, but what is coincidental about this and/or what is the relevance of Ghislaine Maxwell here?

jayde2767 · 14 hours ago
I wish you had highlighted or bolded "cartel", which is exactly how those industry players act.
ArchieScrivener · 11 hours ago
Hey man, trust the science.
underlipton · 17 hours ago
Some "fun" reading on the subject of Mr. Maxwell:

https://sarahkendzior.substack.com/p/red-lines

tl;dr He is the bridge that uncomfortably links Biden's former Secretary of State, Antony Blinken, to Jeffrey Epstein and Mossad. Hence, *gestures at the last couple of weeks and years*. Dude was just, like, Fraud Central, apparently.

butILoveLife · 18 hours ago
>scientists focused on quality not quantity.

I know a PhD professor doing a postdoc or something, and he accepted a scientific study just because it was published in Nature.

He didn't look at methodology or data.

From that point forward, I have never really respected Academia. They seem like bottom floor scientists who never truly understood the scientific method.

It helped that a year later the Ivies had their cheating scandals, fake data, and the academia-wide replication crisis.

bonoboTP · 15 hours ago
Plenty will publish it, but those are not as highly regarded by the community. It's not a problem of journals. It's not hard to start your own journal by teaming up with other academics; in machine learning, ICLR is such a venue, for example. The problem is much deeper and more fundamental. You want to publish alongside groundbreaking novel research. Researchers' own ears perk up when they hear about something new. They invite colleagues to talk about their novel discoveries, not to describe all their null results and successful replications of known results. Funding agencies want research with novelty and impact. They want to write reports to the higher-ups and the politicians and the donors that document the innovations their funding brought. The media will republish press releases that have cool new results.

To have research happening, you need someone saying "I want to give money to this researcher". There is an endless queue of people lining up who are ready to take this money and do something with it. The person with money (govt or private) has to use some heuristics to pick. One way is to say "I trust this one, I don't care too much what the project is, I'm sure this person will do something that makes sense". But that is dependent on a track record.

ramraj07 · 20 hours ago
Do you want issues of Nature and Cell to be replication studies? As a reader, even from within the field, I'm not interested in browsing through negative studies. It'll be great if I can look them up when needed, but I'm not looking forward to email ToC alerts filled with them.

Also who's funding you for replication work? Do you know the pressure you have in tenure track to have a consistent thesis on what you work on?

Literally every single knob in the design of academia is tuned to not incentivize what you complain about. It's not just journals being picky.

Also, the people committing fraud aren't the ones who will say "gosh, I will replicate things now!" Replicating work is far more difficult than a lot of original work.

benterix · 19 hours ago
> Do you want issues of Nature and cell to be replication studies?

Of course I do! Not all of them, of course, and taking (subjectively measured) impact into account. "We tried to replicate the study published in the same journal 3 years ago using a larger sample size and failed to achieve similar results..." OR "after successfully replicating the study we can confirm the therapeutic mechanism proposed by X actually works" - these are extremely important results that are taken into account in meta-studies and e.g. form the basis of policies worldwide.

Bratmon · 19 hours ago
> Do you want issues of Nature and cell to be replication studies?

More than anything. That might legitimately be enough to save science on its own.

zhdc1 · 19 hours ago
> Do you want issues of Nature and cell to be replication studies? As a reader even from within the field, im not interested in browsing through negative studies.

Actually, yes, I do. The marginal cost for publishing a study online at this point is essentially nil.

xandrius · 17 hours ago
I know you got a ton of responses already, but not caring about replicability just invalidates science as a method. If we care only about being first to publish, we end up in the current situation where we don't even know whether what we know is even remotely correct.

All because journals prefer novelty over confirmation. It's like a house of cards: looks cool, but not stable or long-term at all.
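To make that concrete: a quick Ioannidis-style back-of-the-envelope (every parameter below is an illustrative assumption, not a measurement) shows how much a single replication changes what a positive result is worth.

```python
# Illustrative positive-predictive-value arithmetic: what fraction of
# published positive findings are true, with and without replication?
# prior, power, and alpha are assumed values for illustration only.

def ppv(prior, power, alpha):
    """P(hypothesis true | positive result) for one round of testing."""
    true_pos = prior * power          # true effects correctly detected
    false_pos = (1 - prior) * alpha   # null effects passing by chance
    return true_pos / (true_pos + false_pos)

prior, power, alpha = 0.1, 0.8, 0.05
single = ppv(prior, power, alpha)

# Requiring one independent replication multiplies the detection
# probabilities: power**2 for true effects, alpha**2 for false ones.
replicated = ppv(prior, power**2, alpha**2)

print(f"single positive study: PPV = {single:.2f}")      # 0.64
print(f"plus one replication:  PPV = {replicated:.2f}")  # 0.97
```

Under these assumed numbers, with 1 in 10 tested hypotheses true, roughly a third of unreplicated positive findings are false; one independent replication pushes the positive predictive value above 95%.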

notRobot · 19 hours ago
"Original research" isn't worth much unless replicated, which is the entire problem being discussed in this thread. Replicating studies are great though because they tell you if the original research actually stands and is valid.

> Replicating work is far more difficult than a lot of original work.

Only if the original work was BS. And what, just because it's harder, we shouldn't do it?

chocochunks · 18 hours ago
Even if that negative study could save you one, two, three+ years of work for the same outcome (which you then also can't really do anything with)? Shouldn't there BE funding for replication studies? Shouldn't that count towards tenure? Part of the problem is that publications play such a heavy role in getting tenure in the first place.

I'm sure you can more narrowly tune your email alerts FFS.

carlosjobim · 16 hours ago
If you're a reader within the field, then you are the one person in the world who should be most interested in negative replication studies.
peyton · 18 hours ago
> Do you want issues of Nature and cell to be replication studies?

Hell yeah. We’re all trying to get that Nature paper. Imagine if you could accomplish that by setting the record straight.

renewiltord · 18 hours ago
Realistically, everyone will say “yes” to the “do you want” question because if you’re not a reader or a subscriber you benefit from the readers reading replication studies.

I believe people will enthusiastically say yes but that they do not routinely read that journal.

paganel · 19 hours ago
>Also who's funding you for replication work? Do you know the pressure you have in tenure track to have a consistent thesis on what you work on?

This is partly why much of today's science is bs, pure and simple.

lovich · 17 hours ago
> Replicating work is far more difficult than a lot of original work.

I don’t regularly read scientific studies but I’ve read a few of them.

How is it possible that a serious study is harder to replicate than it was to do originally? Are papers no longer including their process? Are we at the point where they are just saying “trust me bro” for how they achieved their results?

> Do you want issues of Nature and cell to be replication studies?

Not issues of Nature but I’ve long thought that universities or the government should fund a department of “I don’t believe you” entirely focused on reproducing scientific results and seeing if they are real

baxtr · 3 hours ago
One major contributing factor, in my opinion, is that almost no one in the community was taught the scientific method / epistemology itself.

The simple fact that theories should be falsified and not verified is something that most scientists don’t know.

charlieyu1 · 2 hours ago
The publish-or-perish culture leads to this. Businessmen and politicians should never be allowed to decide academic funding.
tppiotrowski · 20 hours ago
Maybe we need a journal completely dedicated to replication studies? It would attract a lot of attention I think.
MichaelDickens · 20 hours ago
Economics has the Journal of Comments and Replications in Economics: https://jcr-econ.org/
pfdietz · 20 hours ago
And funding dedicated to replication studies.
fc417fc802 · 16 hours ago
We already have archival journals. What's missing is funding and any prospect of career advancement.
LargeWu · 19 hours ago
Is there a viable career path for researchers who choose to focus on replication instead of novel discoveries? I assume replications are perceived as less prestigious, but it's also important work.
obviouslynotme · 15 hours ago
I have worked in this particular sausage factory. Multiple funded random replications are the only thing that will save science from this crisis. The scientific method works. We need to actually do it.

Replications don't have to be in the journals either. As long as money flows, someone will do them, and that is what matters. The randomization will help prevent coordination between authors and replicators.

In a better world, negative studies and replications would count towards tenure, but that is unlikely to occur. At least half of the problem is the pressure to continuously publish positive results.

bonoboTP · 13 hours ago
Regardless of what gets taught in school about science being objective and without ego, or having a culture of adversarial checks on each other etc., the reality is that scientists are humans and have egos and have petty feuds.

Publishing a failed replication of the work of a colleague will not earn you many brownie points. I'm stating this as an observation of what is the case, not as something that I think should be the case. If you attack other researchers like this and damage their reputation - even if for valid scientific reason - you'll have a hard time when those colleagues sit on committees deciding about your next grant etc.

Of course if you discover something truly monumental that will override this. But simply sniping down the mediocre research published by other run-of-the-mill researchers will get you more trouble than good. Yes it's directly in contradiction to the textbook-ideal of what science should be, as described to high school students, but there are many things in life this way.

Of course it can be laudable to go on such a crusade despite all this, and to relentlessly pursue scientific truth, etc. but that just won't scale.

canjobear · 16 hours ago
This isn’t about honest researchers resorting to fraud to publish their null results because they were blocked by big bad Nature. It’s about journals and authors churning out pure junk papers whose only goal is to game metrics like citation count.
leoc · 20 hours ago
Right, it seems that many of the weaknesses in the system exist because they serve the interests of journal publishers or of normal, legitimate-ish researchers, but in the process open the door to full-time system-hackers and pure fraudsters.
bonoboTP · 13 hours ago
Any system that grows too fast has these kinds of problems. When it's a small intimate circle where everyone knows everyone, reputation alone can keep people in check. Once it's larger you need to invent rules and bureaucracies and structures and you will have loopholes that bad actors can more easily exploit, hiding in the crowd, than in the small version. It's the same with the Internet or computing. Security was much less of a topic when it was mostly honest academic nerds using the Internet, and the protocol designs often didn't even assume adversarial participants. Science also still runs on this assumed honesty system that worked well when it was small.
dheera · 17 hours ago
Mainstream journals are complicit, but are not the biggest problem.

The biggest problem by far is modern society: tenure, getting paid a livable wage as a researcher, and not getting stack-ranked and eliminated from your organization all over-index on positive, marketable research results. This "loss function" encourages scientific fraud of sorts.

bonoboTP · 15 hours ago
When, in those mythical non-"modern" times, was it easy to get tenure or a livable wage as a researcher? How open were the doors to this and what proportion of society got a realistic chance to pursue such a career? More people getting a chance means more fierce competition.
godelski · 15 hours ago

  > Most will refuse to publish replications, negative studies, or anything they deem unimportant, even if the study was conducted correctly.
I think this was really caused by the rise of bureaucracy in academia. A bureaucrat's favorite thing is a measurement, especially when they don't understand its meaning. There's always been a drive for novelty in academia; it's just at the very core of the game. But we placed far too much focus on this, despite the foundation of science being replication. We made a trade: foundation for (the illusion of) progress. It's like trying to build a skyscraper higher and higher without concern for the ground it stands on. It doesn't take a genius to tell you that building is going to come crashing down. But proponents say "it hasn't yet! If it was going to fall it would have already", while critics are actually saying "we can't tell you when it'll fall, but there are some concerning cracks, and we're worried it'll collapse and we won't even be able to tell we're in a pile of rubble."

I don't know what the solution is, but I do know that our fear of people wasting money and creating fraudulent studies has only resulted in wasting money and fraudulent studies. We've removed the verification system while creating strong incentives to cheat (punish or perish, right?).

I think one thing we do need to recognize is that in the grand scheme of things, academia isn't very expensive. A small percentage of a large number is still a large number. Even if half of academics were frauds it would be a small percentage of waste, and pale in comparison to more common waste, fraud, and abuse of government funds.

From what I can tell, the US spent $60bn on university R&D in 2023 [0] (less than 1% of US federal expenditures). But in that same period there was $400bn in waste and fraud through Covid relief funds [1], with $280bn being straight-up fraud. That alone is more than 4x all academic research funding!!!

I'm unconvinced most in academia are motivated by money or prestige, as it's a terrible way to achieve those things. But I am convinced people are likely to commit fraud when their livelihoods are at stake or when they can believe that a small lie now will allow them to continue doing their work. So as I see it, the publish or perish paradigm only promotes the former. The lack of replication only allows, and even normalizes, the latter. The stress for novelty only makes academics try to write more like business people, trying to sell their product in some perverse rat race.

So I think we have to be a bit honest here. Even if we were to naively make this space essentially unregulated, it couldn't be the pinnacle of waste, fraud, and abuse that many claim it is. I doubt that even freeing scientists entirely from publication requirements would produce much waste, fraud, and abuse. Science has a naturally self-regulating structure; it was literally created to be that way! We got to where we are through that self-regulating system, because scientists love to argue about who is right and the process of science is designed to do exactly that. Was there waste and fraud in the past? Yes. I don't think it's entirely avoidable; it'll never be $0 of wasted money. But the system was undoubtedly successful. And those who took advantage of it were better at fooling the public than they were their fellow scientists, which is something I think we've still failed to catch on to.

[0] https://usafacts.org/articles/what-do-universities-do-with-t...

[1] https://apnews.com/article/pandemic-fraud-waste-billions-sma...

mike_hearn · 2 hours ago
> But in that same time there was $400bn in waste and fraud through Covid relief funds [1].

The cost of academic fraud should also include the indirect costs of bad decision making.

The Covid relief funds were only needed because politicians implemented extremely aggressive policies based on unproven epidemiological models built on fraudulent practices. I investigated all this extensively at the time and it was really sad/shocking how non-existent intellectual standards are in the field of epidemiology. The models were trash RNGs that couldn't have been validated even if they'd tried, which they never had because the field doesn't consider validation to be necessary to get a paper published. So the models made wildly wrong predictions based on untested, buggy, non-replicable models, which then led to lockdowns, which led to economic catastrophe, which led to the relief programme. All of the fraud in that programme - really the entire cost of it - should be laid at the feet of academic fraud.

bonoboTP · 15 hours ago
You either have something documented and quantified and measured and objective criteria tickboxes and deal with this style of failure mode, or you rely on subjective judgment and assessment and accept the failure mode of bias, nepotism, old boy's clubs etc. Of course the ideal case is to rely on the unbureaucratic informal wise and impartial judgment of some hypothetical perfect humans you can fully trust and rely on, and they always decide fully on merits etc. without having to follow any rigid criteria and checkboxes and numbers on hiring and promotion etc. But people are not perfect and society largely decided to go the bureaucratic way to ensure equal opportunities and to reduce bias through this kind of transparency.
pixl97 · 21 hours ago
This is Goodhart's law at scale. Number of released papers/number of citations is a target. Correctness of those papers/citations is much more difficult so is not being used as a measure.
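A toy model makes the Goodhart dynamic visible (every number below is an illustrative assumption): split a fixed effort budget across papers, score only the count, and charge each unsound paper a cleanup cost equal to a sound paper's contribution.

```python
# Toy Goodhart's-law sketch; all numbers are illustrative assumptions.
# A fixed effort budget is spread across n papers; the chance a paper
# is sound rises with effort per paper, but the metric only counts
# papers.

def evaluate(n_papers, effort_budget=10.0):
    effort_per_paper = effort_budget / n_papers
    p_sound = effort_per_paper / (effort_per_paper + 1.0)
    metric_score = n_papers                   # what gets counted
    net_value = n_papers * (2 * p_sound - 1)  # sound minus cleanup cost
    return metric_score, net_value

for n in (1, 2, 5, 10, 20):
    score, value = evaluate(n)
    print(f"{n:2d} papers -> metric {score:2d}, net value {value:+.2f}")
```

In this sketch the metric climbs monotonically while the net scientific value peaks at a handful of papers, hits zero at 10, and goes negative at 20: exactly the divergence between target and goal that Goodhart's law describes.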

With that said, due to the apparent sizes of the fraud networks I'm not sure this will be easy to address. Having some kind of kill flag for individuals found to have committed fraud will be needed, but with nation state backing and the size of the groups this may quickly turn into a tit for tat where fraud accusations may not end up being an accurate signal.

May you live in interesting times.

bwfan123 · 20 hours ago
> This is Goodhart's law at scale.

Also, Brandolini's law. And Adam Smith's law of supply and demand. When the ability to produce overwhelms the ability to review or refute, it cheapens the product.

otherme123 · 19 hours ago
> Number of released papers/number of citations is a target

There was this guy, well connected in the science world, who managed to publish a poor study quite high up (PNAS level). It was not fraud, just bad science. There were dozens of papers and letters refuting his claims, highlighting mistakes, and so on. Guess what? Going by the metrics (citations; it doesn't matter if they are citing you to say you were wrong and should retract the paper!), the original paper looked even more stellar in the eyes of grant agencies and the journal itself.

It was rage bait before Facebook even existed.

armchairhacker · 21 hours ago
There’s an accurate way to confirm fraud: look for inconsistencies and replicate experiments.

If the fraudsters “fail to replicate” legitimate experiments, ask them for details/proof, and replicate the experiment yourself while providing more details/proof. Either they’re running a different experiment, their details have inconsistencies, or they have unreasonable omissions.

mike_hearn · an hour ago
That only confirms a very small subset of fraud. There are many ways to do scientific fraud that will yield internally consistent papers that pass replication as practiced today.

An example is papers which make claims of the form "We proved X by doing Y", where Y is a methodology that isn't derived from and can't prove X. This sort of paper will replicate every time: if you re-derive a correct methodology, the original authors say you didn't really replicate their study and your work should be ignored; if you use their broken methodology, you just give an intellectually fraudulent paper the stamp of replication approval.

This kind of problem is actually much more widespread than work that looks scientific but in which the data is faked.

pixl97 · 20 hours ago
Of course this is slightly messy too. Fraudsters are probably always incorrect (unless they stole the data), but being incorrect doesn't mean you're intentionally committing fraud.
ertgbnm · 17 hours ago
That would be great if journals bothered publishing replication studies. But since they don't, researchers can't get adequate funding to perform them, and since they can't perform them, they don't exist.

We can't look for failed replication experiments if none exist.

john_strinlai · 20 hours ago
that approach is accurate, but not scalable.

the effort to publish a fraudulent study is less (sometimes much less) than the effort to replicate a study.

wswope · 20 hours ago
Yeah, but this happens all the time.

>>95% of the time, the fraudsters get off scot-free. Look at Dan Ariely: Caught red-handed faking data in Excel using the stupidest approach imaginable, and outed as a sex pest in the Epstein files. Duke is still giving him their full backing.

It’s easy to find fraud, but what’s the point if our institutions have rotted all the way through and don’t care, even when there’s a smoking gun?

awesome_dude · 19 hours ago
Is it that easy?

Machine Learning papers, for example, used to have a terrible reputation for being inconsistent and impossible to replicate.

That didn't make them (all) fraudulent, because that requires intent to deceive.

bonoboTP · 15 hours ago
> Number of released papers/number of citations is a target

Only in universities with stupid leadership is that truly what gets you hired or promoted; in general it's simply not true. Junior researchers in fact believe it more strongly than the facts support. Yes, you have to have a solid amount of publications, but doing a ridiculous amount of low-impact, salami-sliced stuff, or getting your name on a ton of papers where you did no real work, is not going to win you a job. People who evaluate applications also live in this world and know that these metrics are being gamed. It's a cat-and-mouse game, but the cats are also paying attention. You can only play this against really dumb government bureaucracies that mechanically give points for publications and have hard numerical criteria. Good institutions don't do that.

Good evaluators actually read the papers themselves. Of course you can't read the papers of every single applicant if there are many. But once an applicant makes it onto a somewhat filtered-down list, reading the paper(s), having an interview about them, or having the candidate give a talk is much more informative than the number of papers. Still not perfect, because some people can't communicate well, but communicating is part of the job, so maybe that's not super bad, just somewhat bad.

Evaluators will also use other evidence, such as recommendation letters (being informally aware of the reputation of the recommender), previous fellowships or grants obtained, etc.

None of these are foolproof in themselves. But someone who has very few publications relative to their career stage will need some other piece of evidence in their favor.

In machine learning and AI, peer reviews are known to be quite random. If you have a good Arxiv-only paper that makes sense and you can give a good talk on it and answer questions, that will get you further than having a rubberstamp on some paper that's "meh, so what".

There are some players in this game (which includes funding agencies, journals, university administration, hiring committees, conference organizers, students, etc) that are more ossified and slow-moving than others.

And it's also true that double-blind peer review and the rubberstamp of a top-tier conference mostly benefited small, not-well-connected research groups, as it puts their papers on an equal footing with the big labs'. The more this system erodes, the more we fall back on the reputation and branding of big labs and famous researchers. Again, because there is no infinite time and infinite wisdom available for picking among applicants, and there never will be. There are only tradeoffs.

canjobear · 16 hours ago
I ran into an interesting incident of this recently. I got a Google Scholar alert about a paper with some experiments related to a paper I had published a while ago, by one "N. Tvlg". I read the paper with interest, but I started noticing that although the arguments sounded good, they didn't really make sense, and the descriptions of the results didn't really match the figures. Eventually I came across a cluster of citations to completely unrelated papers: my field is computational linguistics, and these were citations to, like, studies of battery technologies for electric cars. I looked up "N. Tvlg" on Google Scholar and they had "published" several articles very recently in totally divergent fields, and upon inspection, all of them had citations back to this materials-science research buried deep somewhere. Clearly these were LLM-generated papers trying to build up citation count and h-index for someone's career.
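The tell in that episode, a cluster of citations to a completely unrelated field, is mechanically checkable. A hypothetical sketch (the field labels and the 0.3 threshold are assumptions for illustration, not any real database's API):

```python
# Hypothetical sketch of the red flag described above: flag a paper
# when too many of its references fall outside the paper's own field.
# Field labels and the 0.3 threshold are illustrative assumptions.

def out_of_field_fraction(paper_field, reference_fields):
    """Fraction of a paper's references tagged with a different field."""
    if not reference_fields:
        return 0.0
    strays = sum(1 for f in reference_fields if f != paper_field)
    return strays / len(reference_fields)

def looks_like_citation_farming(paper_field, reference_fields, threshold=0.3):
    return out_of_field_fraction(paper_field, reference_fields) > threshold

# The "N. Tvlg" pattern: a computational-linguistics paper citing
# battery-materials work.
refs = ["comp-linguistics"] * 6 + ["battery-materials"] * 4
print(looks_like_citation_farming("comp-linguistics", refs))  # True
```

A real screen would need a field taxonomy and a tuned threshold, and interdisciplinary work would generate false positives, but the point stands: this pattern is cheap to detect at scale if anyone cares to look.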
reactordev · 16 hours ago
Where there’s a ranking, there’s someone out there trying to cheat at it. Citation count is a joke.
matthewdgreen · 14 hours ago
The purpose of scientific publication used to be to deliver useful scientific results to one's peers. This meant that everyone ran their own personal filter of which peers were working on interesting things, and which collections (journals) were reproducing the most interesting ones. This system still works relatively well for most conscientious researchers. The idea that we should also use publication metrics to rank researchers was never part of this system, and it obviously leads to all sorts of spam (that most scientists just work around) but that seems to really upset non-scientists.
pjdesno · 19 hours ago
Perhaps relevant to this - if you go to this global ranking of publications:

  https://traditional.leidenranking.com/ranking/2025/list
and select "Mathematics and Computer Science", you'll find the top-ranked university is the University of Electronic Science and Technology of China.

My Chinese colleagues have heard of it, but never considered it a top-ranked school, and a quick inspection of their CS faculty pages shows a distinct lack of PhDs from top-ranked Chinese or US schools. It's possible their math faculty is amazing, but I think it's more likely that something underhanded is going on...

zahlman · 18 hours ago
It's strange to me that in places full of smart people, it seems to be well understood that this happens and there are lots of anecdotes relating to it; yet the same people will be confused that their political adversaries don't trust "the science" on one issue or another.

Maybe it's the scientists they don't trust?

Hendrikto · 18 hours ago
That’s the beautiful thing about science: you do not have to (and should not) trust any individual. And even if you don’t trust “the consensus” of “the scientific community”, you can verify claims empirically yourself.
tbrownaw · 16 hours ago
Once you move from abstract to practical - like say having legislators or regulators make rules based on The Science, or relying personally on more facts than you have time to independently verify - yes you do need to have trustworthy people.
zahlman · 17 hours ago
Can ordinary civilians feasibly measure, for example, global trends in mean temperature without relying on the data of others?
dekhn · 16 hours ago
Are you going to build a competitor to CERN?

There are many things that cannot be feasibly verified empirically without access to rare resources.

bonoboTP · 15 hours ago
I think it's difficult to convey to the public that a lot of this noise in "scientific publications" is not in the same category as real research by reputable institutions. Yes, in certain cases the line can be blurry, and fraudsters are sometimes caught at big-name institutions, maybe more in some fields than others, but serious researchers in the field know very well which publication venues and research groups are the real deal and which are bullshit. Overwhelmingly, these fraud papers and nonsense LLM-generated fake stuff are not published in serious journals or conferences.

It's a bit like how can we trust online shopping if I get all these emails trying to sell me aphrodisiac pills?

alansaber · an hour ago
There has always been a lot of bad science. I would suggest that percentage has only marginally increased.
fastaguy88 · 20 hours ago
It is useful to distinguish between "effective" scientific fraud, where some set of fraudulent papers are published that drive a discipline in an unproductive direction, and "administrative" scientific fraud, where individuals use pseudo-scientific measures (H-index, rankings, etc) to make allocation decisions (grants, tenure, etc). This article suggests that administrative scientific fraud has become more accessible, but it is very unclear whether this is having a major impact on science as it is practiced.

Non-scientists often seem to think that if a paper is published, it is likely to be true. Most practicing scientists are much more skeptical. When I read a paper that sounds interesting in a high-impact journal, I am constantly trying to figure out whether I should believe it. If it goes against a vast amount of science (e.g. bacteria that use arsenic rather than phosphorus in their DNA), I don't believe it (and can think of lots of ways to show that it is wrong). In lower-impact journals, papers make claims that are not very surprising, so if they are fraudulent in some way, I don't care.

Science has to be reproducible, but more importantly, it must be possible to build on a set of results to extend them. Some results are hard to reproduce because the methods are technically challenging. But if results cannot be extended, they have little effect. Science really is self-correcting, and correction happens faster for results that matter. Not all fraud has the same impact. Most fraud is unfortunate, and should be reduced, but has a short lived impact.

perfmode · 18 hours ago
The distinction between effective and administrative fraud is useful and I think underappreciated. A lot of the conversation in these threads conflates the two, which makes it hard to reason about what actually needs fixing.

I want to push back a little on "science is self-correcting" though. It's true in the limit, but correction has a latency, and that latency has real costs. In fields like nutrition, psychology, or pharmacology, a fraudulent or deeply flawed result can shape clinical guidelines, public policy, and drug development pipelines for a decade or more before the correction lands. The people harmed during that window don't get made whole by the eventual retraction.

The comparison I keep coming back to is fault tolerance in distributed systems. You can build a system that's "eventually consistent" and still have it be practically broken if convergence takes too long or if bad state propagates faster than corrections do. The fraud networks described in TFA are basically an adversarial workload against a system (peer review) that was designed for a much lower rate of bad input. Saying the system self-corrects is accurate, but it's not the same as saying the system is healthy or that the current correction rate is adequate.

I think the practical question isn't whether science corrects itself in theory but whether the feedback loops are fast enough relative to the rate of fraud production, and right now the answer seems pretty clearly no.

qsera · 20 hours ago
>methods are technically challenging.

And financially too..

>Science really is self-correcting..

When the economy allows it....

temporallobe · 21 hours ago
My wife completed her PhD two years ago and she put a LOT of work into it. Many sleepless nights, and it almost destroyed our marriage. It took her about 6 years of non-stop madness and she didn’t even work during that time. She said that many of her colleagues engaged in fraudulent data generation and sometimes just complete forgery of anything and everything. It was obvious some people were barely capable of putting together coherent sentences in posts, but somehow they generated a perfect dissertation in the end. It was common knowledge that candidates often hired writers and even experts like statisticians to do most of the heavy lifting. I don’t know if this is the norm now, but I simultaneously have more respect and less respect for those doctoral degrees, knowing that some poured their heart and soul into it, while others essentially cheated their way through. OTOH, I also understand that there may be a lot of grey area.

My eyes have been opened!

titzer · 20 hours ago
I found the article and your third-hand anecdotes troubling. The good news is that it does not match any of my years of experience in my field. Fraud is just not that rampant. At PhD-granting institutions, the level of fraud you describe here is very seriously punished. It's career-ending. The violations you describe are serious enough that any institution would expel said students (or harshly punish faculty--probably firing them). She did no one any favors by not reporting them.

Unfortunately I don't think a dialogue around vague anecdotes is going to be particularly enlightening. What matters is culture, but also process--mechanisms and checks--plus consequences. Consequences don't happen if everyone is hush-hush about it and no one wants to be a "rat".

qsera · 20 hours ago
>It's career-ending..

That is where being good at politics comes into play. And if you are good at it, instead of being career-ending, fraud will put you in the highest of positions!

No one wants a "plant" who cannot navigate scrutiny!

delichon · 19 hours ago
> The good news is that it does not match any of the years of experience in my field.

I worked for exactly one academic, and he indulged in impossible-to-detect research fraud. So in my own limited experience research fraud was 100%.

It was a biology lab, and this was an extremely hard working man. 18 hours per day in the lab was the norm. But the data wasn't coming out the way he wanted, and his career was at stake, so he put his thumb on the scale in various ways to get the data he needed. E.g. he didn't like one neural recording, so he repeated it until he got what he wanted and ignored the others. You would have to be right in the middle of the experiment to notice anything, and he just waved me off when I did.

This same professor was the loudest voice in the department when it came to critiquing experimental designs and championing rigor. I knew what he did was wrong, because he taught me that. And he really appeared to mean it, but when push came to shove, he fiddled, and was probably even lying to himself.

So I came away feeling that academic fraud is probably rampant, because the incentives all align that way. Anyone with the extraordinary integrity to resist was generally self-selected out of the job.

suddenlybananas · 17 hours ago
What field? I am aware this kind of stuff happens, but I don't really see it among any of my colleagues.
mistrial9 · 20 hours ago
yeah - skeptical here. Among certain departments, at large schools, under certain leaders.. The combination of "my marriage almost crumbled" for motivated reasoning, and "I have never seen any of this before" total inexperience with actual process.. the post shows itself to be biased and unreliable.

However, among certain departments, at large schools, under certain leaders.. yes, and growing

$0.02

russdill · 19 hours ago
Fucking hilarious to me when people claim academics are motivated by the "money", e.g., when claimed by climate deniers.
1234letshaveatw · 17 hours ago
Undoubtedly climate science is the exception and immune from fraudulent data generation and sometimes complete forgery
renewiltord · 11 hours ago
That's approximately 1 million people. Even a religious cult that size would have difficulty controlling motivations. As an example:

> Petitioners also formed a variety of organizations to create what they termed "marketable science." Pet. App. 1687a. For example, through the Council for Tobacco Research (CTR) and Lawyers' Special Accounts, petitioners jointly financed research programs that were directed by company lawyers and calculated to yield favorable results. Id. at 240a-275a. Petitioners regularly cited the conclusions of the scientists funded through these programs as if they were the objective results of disinterested research, without revealing that the scientists had, in fact, been funded by the industry. Id. at 195a.

That comes from here: https://www.justice.gov/osg/brief/philip-morris-usa-inc-v-un...

It's possible all the science was good but people were upset about who funded it.