Patient can of course bring whoever they want into the circle. The problem is the intrusions that neither healthcare provider nor patient want.
Casting the patient-doctor relationship in terms of power dynamics is a bogus sociological construct divorced from reality. The true division of power is between those who fund healthcare and those who receive it; I would start there if you think things need to be improved.
I'm discounting it for this discussion because your argument is that "meta-analyses are somewhat irrelevant" and that "meta-analyses are mostly there for trainees to notch up a paper," which is completely false.
Note that a single clinical trial is still only considered "good quality", while multiple trials or meta-analyses are considered "high quality".
To address this new point you raised: when something has very promising early results, we start using it in treatment (e.g., 3rd-gen TKIs in adjuvant NSCLC), but until this weekend we had no 5-year overall survival (OS) data for adjuvant use.
It's entirely possible that something one thinks is "most effective" is later proven not to be (gen 1-2 TKIs, HIPEC, etc.).
> That paragraph from NCCN is quite interesting. It is describing medicine in general really, and belies the fact that oncology has probably one of the strongest evidence bases across all medical fields.
> Take for example how many stents cardiologists have inserted long after contradictory evidence was available, or how many pointless back operations have been done, or how many people have sat through fruitless psychoanalysis.
I'm not sure what point you are trying to make by addressing other specialties.
The National Comprehensive Cancer Network, comprised of multidisciplinary experts from 33 of the leading cancer centers in the country, is unequivocally the authority in oncology and is incredibly well respected. I'm going to defer to their opinion on the quality of evidence available and the hierarchy of evidence.
> Also one of your citations is seemingly casting doubt on the value of meta-analyses in oncology, so somewhat confused about your point.
The JAMA article states that the methodology in many studies does not meet NCCN/PRISMA criteria, which is well known; this says nothing about the relative value of good-quality meta-analyses (which are far more common now with the PRISMA update).
I'm really not sure why you think systematic reviews are irrelevant; this is a very radical viewpoint for which I've seen no evidence. A good meta-analysis > a good RCT. The reality is that good-quality studies of both types are uncommon in medicine, but the goal is still to use good SRs.
But to give a concrete example, the problem with meta-analyses is well illustrated by the recent EBCTCG meta-analysis published in the Lancet, a top-tier journal. This involved over 100,000 patients and explored concurrent chemotherapy regimens in breast cancer. The problem is that such regimens are not used anymore. The authors acknowledge in their own conclusion that this massive meta-analysis contradicts their own previous meta-analysis showing the superiority of sequential therapy. What exactly does one do with this? How does this help a patient get the right therapy? The treatment of various breast cancer subtypes has also evolved so much that the trials they meta-analyse are mostly obsolete. Hence my point: meta-analyses are just not that useful in oncology, even truly massive, well-conducted ones published in prestigious journals. So it is not as simple as meta-analysis > RCT; that is merely lazy dogma. I find it hard to believe that anyone actually treating cancer patients would hold this view.
Of course most meta-analyses in oncology are not 100,000-patient behemoths conducted by consortia. They are much smaller studies, which usually don't bother to obtain patient-level data, and just copy numbers from tables in the original papers while running through the Cochrane systematic review template.
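To make concrete what that kind of literature-based meta-analysis actually computes: the standard move is fixed-effect inverse-variance pooling of the summary numbers copied from each paper. A minimal sketch, with trial effects invented purely for illustration:

```python
import math

# Fixed-effect inverse-variance pooling of published log hazard ratios.
# All numbers below are invented for illustration; a literature-based
# meta-analysis works from exactly this kind of summary data copied
# out of the original papers' tables.
trials = [
    # (log hazard ratio, standard error) as reported in each paper
    (-0.22, 0.10),
    (-0.05, 0.15),
    (-0.30, 0.20),
]

weights = [1 / se ** 2 for _, se in trials]  # inverse-variance weights
pooled = sum(w * lhr for (lhr, _), w in zip(trials, weights)) / sum(weights)
pooled_se = math.sqrt(1 / sum(weights))

# 95% confidence interval back on the hazard-ratio scale
hr = math.exp(pooled)
lo = math.exp(pooled - 1.96 * pooled_se)
hi = math.exp(pooled + 1.96 * pooled_se)
print(f"pooled HR = {hr:.2f} (95% CI {lo:.2f} to {hi:.2f})")
```

In practice a serious review would use a random-effects model and dedicated tooling (e.g., RevMan or R's metafor), but the point stands: without patient-level data, the whole exercise is a weighted average of whatever the original papers chose to report.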
And yet, here I am dubbed 'radical' at the bottom of a comment thread on Hacker News. Unfortunately, the dogma around systematic reviews and EBM has exceeded its usefulness by quite some margin. The meta-analytic method was developed by psychologists trying to compile evidence about extrasensory perception, of all things - an inauspicious beginning if there ever was one for the supposed cornerstone of medicine.
But oncology is a very big field…
For example, the majority of diagnostic testing guidelines are based on meta-analyses.
FYI since you asked to pick one, let's take a look at NCCN. Not sure where you're drawing your conclusion that most evidence is from phase III RCTs:
"[Q] Quality and quantity of evidence refers to the number and types of clinical trials relevant to a particular intervention. To determine a score, panel members may weigh the depth of the evidence, i.e., the numbers of trials that address this issue and their design. The scale used to measure quality of evidence is:
5 (High quality): Multiple well-designed randomized trials and/or meta-analyses
4 (Good quality): One or more well-designed randomized trials
3 (Average quality): Low quality randomized trial(s) or well-designed non-randomized trial(s)
2 (Low quality): Case reports or extensive clinical experience
1 (Poor quality): Little or no evidence"
"The overall quality of the clinical data and evidence that exist within the field of cancer research is highly variable, both within and across cancer types. Large, well designed, randomized controlled trials (RCTs) may provide high-quality clinical evidence in some tumor types and clinical situations. However, much of the clinical evidence available to clinicians is primarily based on data from indirect comparisons among randomized trials, phase II or non-randomized trials, or in many cases, on limited data from multiple smaller trials, retrospective studies, or clinical observations. In some clinical situations, no meaningful clinical data exist and patient care must be based upon clinical experience alone. Thus, in the field of oncology, it becomes critical and necessary (where the evidence landscape remains sparse or suboptimal) to include input from the experience and expertise of cancer specialists and other clinical experts."
From an article:
"We identified 1124 potential systematic reviews from our survey of the 49 NCCN guidelines for the treatment of cancer by site. Five NCCN guidelines did not cite any systematic reviews."
https://www.nccn.org/guidelines/guidelines-process/developme...
https://jamanetwork.com/journals/jamaoncology/fullarticle/27...
https://www.nccn.org/guidelines/guidelines-with-evidence-blo...
I didn't say most evidence is from phase III RCTs, particularly if you include everything that happens in oncology as the denominator, only that meta-analyses were not that relevant. Most of the critical patient facing interventions have the backing of good quality trials, at least where it is reasonable and possible to do a trial. Also one of your citations is seemingly casting doubt on the value of meta-analyses in oncology, so somewhat confused about your point.
That paragraph from NCCN is quite interesting. It is describing medicine in general really, and belies the fact that oncology has probably one of the strongest evidence bases across all medical fields. Take for example how many stents cardiologists have inserted long after contradictory evidence was available, or how many pointless back operations have been done, or how many people have sat through fruitless psychoanalysis.
Another fine example of the author's point is the ivermectin-in-covid meta-analytic nonsense (which I cannot even bring myself to link), where a bunch of small rubbish trials are meta-analysed into a 'flawless' body of evidence while double-blind randomised trials are impugned.
(I’m sure this varies by jurisdiction too; I have only heard bad things about US IRBs)
Some recent examples of problems with clinical trials:
1. Using a harmful or deliberately inadequate placebo. https://academic.oup.com/jnci/article/104/4/273/979399 https://www.ahajournals.org/doi/10.1161/CIRCULATIONAHA.122.0... https://pubmed.ncbi.nlm.nih.gov/34688044/
2. Starting a bunch of clinical trials using a biomarker which turned out to be invalid. https://www.nejm.org/doi/10.1056/NEJMoa1214271
3. Endless 'me-too' trials driven by pharma, with a low chance of success. https://www.statnews.com/2019/09/04/me-too-drugs-cancer-clin...
Yes, absolutely, medicine should be evidence based. Yes, large randomized, double blind, placebo controlled studies provide a lot of information.
However, there are limitations with these kinds of studies.
First, it may not be ethical or practical to study some things in this manner. For example, antibiotics for bacterial pneumonia have never been tested in a randomized, double-blind, placebo-controlled study.
Famously, there was an article discussing how parachute use when jumping out of airplanes had never been subjected to a randomized, double-blind, placebo-controlled study. https://www.ncbi.nlm.nih.gov/pmc/articles/PMC300808/
Later, somebody did that study: https://www.bmj.com/content/363/bmj.k5094 and found that parachutes made no difference (participants jumped from stationary aircraft on the ground), so the result is not applicable to any real-world case where you would actually use a parachute.
Which illustrates the second issue with evidence-based medicine. Often, the endpoint a large trial measures is different from what you really want to know, or the studied population differs significantly from the patient who is right in front of you. How to apply the results of a large study to the individual patient in front of you is still more of an art than a science.
Finally, I think there is a lesson from machine learning. It has turned out that, instead of creating more and more rules, feeding lots and lots of data to a neural network performs better in many machine learning cases. In a similar way, an experienced physician who has treated thousands of patients over decades has a lot of implicit knowledge and patterns stored in their (human) neural network. Yes, these decisions should be informed by the results of trials, but they should not be discounted, which I think Evidence-Based Medicine did, at least in small part. During my residency, I worked with physicians who would examine and talk with a patient, tell me that something was not right, and order more extensive tests that would end up unearthing a hidden infection or other problem we were able to treat before it caused major damage. They were seeing subtle patterns from decades of experience that might not even be fully captured in the patient's chart, much less in a clinical trial with thousands of participants.
So yes, these clinical trials are a very important base for knowledge. But so is physician judgment and experience.
Source: I am a physician.
EBM deals with this by saying ‘there is no viable alternative’, a remarkable statement of epistemological nihilism that enables much low-quality and pointless research.
>Upon approval of the Protocol, a systematic review of the medical literature is conducted. ASCO staff use the information entered into the Protocol, including the clinical questions, inclusion/exclusion criteria for qualified studies, search terms/phrases, and range of study dates, to perform the systematic review. Literature searches of selected databases, including The Cochrane Library and Medline (via PubMed) are performed.
>After the systematic review is completed, a GRADE evidence profile and summary of findings table is developed to provide the guideline panels with the information about the body of evidence, judgments about the quality of evidence, statistical results, and certainty of the evidence ratings for each pre-specified included outcome.
Rapid criteria:
The criteria for a rapid recommendation update are:
1. the identified evidence is of high methodological quality;
2. there is high certainty among experts that results are clinically meaningful to practice;
3. the identified evidence represents a significant shift in clinical practice from a recommendation in an existing ASCO guideline (e.g., a change from recommending against the use of a particular therapy to recommending the use of that therapy, or a reversal of a recommendation), such that it should not wait for a scheduled guideline update.
A systematic literature review focused on the updated recommendation will be conducted by ASCO staff. Specifically, the immediate past guideline literature search strategy will be updated and filtered by search criteria specific to evidence informing the recommendation under review. All identified evidence will be quality-appraised using the GRADE methodology as outlined in Section 10 of this ASCO Guideline Methods Manual. The procedures used to draft the rapid recommendation update and deliberations by the expert panel will follow routine methods for all guidance products as outlined in this ASCO Guideline Methods Manual.
ASCO position on meta-analyses:
All these reasons can be used to make the excuse that a systematic review and meta-analysis should not be done, especially if resources aren’t available to hand-search all journals in all languages, etc.
The solution is not to avoid doing a systematic review or meta-analysis, but to reveal to the reader what shortcuts were taken (e.g., we included only peer-reviewed published studies, or restricted our eligibility to studies published in English). This shows transparency, and then readers can decide how important this problem is in applying the results of your meta-analysis to their situation.
Myths about meta-analyses:
• "A literature-based meta-analysis is not worth doing"
So systematic reviews and meta-analyses are no longer useful?
That you're arguing with anecdotes is proof itself of why EBM is important.
>There are 2 such updates this year already. If the primacy of meta-analyses were so great, why would they bother to issue rapid updates of what you class as low quality evidence?
Both March 2023 updates are identical.
https://old-prod.asco.org/sites/new-www.asco.org/files/conte...
https://old-prod.asco.org/sites/new-www.asco.org/files/conte...
https://old-prod.asco.org/sites/new-www.asco.org/files/conte...
"A targeted electronic literature search was conducted to identify any additional phase III randomized controlled trials in this patient population. No additional randomized controlled trials were identified. The original guideline Expert Panels reconvened to review evidence from EMERALD and to review and approve the revised recommendations."
Where is the meta-analysis? Where is the funnel plot? What are you even arguing about? They issued an update because of one trial.
Here is another one from June 2022, a major change to how one type of breast cancer is managed. From the methods:
"A targeted electronic literature search was conducted to identify phase III clinical trials pertaining to the recommendation on immune checkpoint inhibitors in this patient population. No additional randomized trials were identified. The original Expert Panel was reconvened to review the key evidence from KEYNOTE-522 and to review and approve the revision to the recommendation."
Where is the meta-analysis? Again, what are you trying to argue? They issued an update because of one trial.
There are two updates this year, one about HER2 testing, and one about ESR1.
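On the funnel plot question raised earlier: a funnel plot is simply each trial's effect estimate plotted against its standard error, and a crude numerical companion to eyeballing it is Egger's regression test for small-study asymmetry. A minimal pure-Python sketch, with effects and standard errors invented for illustration:

```python
# Egger's regression test for funnel-plot asymmetry, in pure Python.
# Effects and standard errors are invented for illustration; in a real
# funnel plot each point is one trial's effect estimate vs its SE.
effects = [-0.80, -0.60, -0.45, -0.30, -0.10]   # log odds ratios
ses     = [ 0.40,  0.35,  0.25,  0.15,  0.08]   # smaller trials -> bigger SE

# Regress standardized effect (effect/SE) on precision (1/SE);
# an intercept far from zero suggests small-study asymmetry.
y = [e / s for e, s in zip(effects, ses)]
x = [1 / s for s in ses]
n = len(x)
mx, my = sum(x) / n, sum(y) / n
slope = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y)) \
        / sum((xi - mx) ** 2 for xi in x)
intercept = my - slope * mx
print(f"Egger intercept = {intercept:.2f}")  # far from 0 -> suspicious funnel
```

Here the invented small trials report larger effects than the big one, so the intercept lands well away from zero, which is exactly the pattern the ivermectin meta-analyses failed to take seriously.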