Some thoughts on this as someone who has worked on circulating tumor DNA (ctDNA) for the last decade or so:
- Sure, cancer can develop years before diagnosis. Pre-cancerous clones harboring somatic mutations can exist for decades before transformation into malignant disease.
- The eternal challenge in ctDNA is achieving a "useful" sensitivity and specificity. For example, imagine you take some of your blood, extract the DNA floating in the plasma, hybrid-capture enrich for DNA in cancer driver genes, sequence super deep, call variants, do some filtering to remove noise and whatnot, and then you find some low allelic fraction mutations in TP53. What can you do about this? I don't know. Many of us have background somatic mutations speckled throughout our bodies as we age. Over age ~50, most of us are liable to have some kind of pre-cancerous clones in the esophagus, prostate, or blood (due to CHIP). Many of the popular MCED tests (e.g. Grail's Galleri) use signals other than mutations (e.g. methylation status) to improve this sensitivity/specificity profile, but I'm not convinced it's actually good enough to be useful at the population level. (A toy sketch of the low-allelic-fraction filtering problem follows this list.)
- Most follow-on screening is not cost-effective given the sensitivity-specificity profile of MCED assays (Grail would disagree). To change that, we would need things like downstream screening becoming drastically cheaper, or a tiered non-invasive screening strategy with increasing specificity (e.g. Harbinger Health).
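To make the noise-filtering point in the second bullet concrete, here is a minimal sketch of the statistical call a pipeline ends up making; all numbers (depth, error rate, read counts) are hypothetical, and real pipelines add UMIs, duplex consensus, and CHIP filters on top:

    # Toy version of low-allelic-fraction variant filtering.
    from scipy.stats import binomtest

    depth = 25_000        # deduplicated reads covering a TP53 position (assumed)
    alt_reads = 40        # reads supporting the candidate mutation (assumed)
    bg_error = 1e-3       # assumed per-base background error rate

    # One-sided test: could the alt reads be explained by error alone?
    result = binomtest(alt_reads, depth, bg_error, alternative="greater")
    print(f"allelic fraction ~{alt_reads / depth:.2%}, p = {result.pvalue:.2e}")
    # Even a confident statistical call doesn't tell you whether the clone
    # is a tumor, a pre-cancerous lesion, or CHIP -- the specificity problem.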
This sort of thing is exactly like preventative whole body MRI scans. It's very noisy, very overwhelming data that is only statistically useful in cases we're not even sure about yet. To use it in a treatment program is witchcraft at this moment, probably doing more harm than good.
It COULD be used to craft a pipeline that dramatically improved everyone's health. It would take probably a decade or two of testing (an annual MRI, an annual sequencing effort, an annual very wide blood panel) in a longitudinal study with >10^6 people to start to show significant reductions in overall cancer mortality and improvements in diagnostics of serious illnesses. The diagnostic merit is almost certainly hiding in the data at high N.
The odds are that most of the useful things we would find from this are serendipitous; we wouldn't even know what we were looking at right now. First we need tons of training data thrown into a machine learning algorithm. We need to watch somebody who's going to be diagnosed with cancer 14 years from now, see what their markers and imaging are like right now, and form a predictive model that differentiates between them and other people who don't end up with cancer 14 years from now. We now have the technology for picking through complex multidimensional data looking for signals exactly like this.
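Mechanically, "form a predictive model" reduces to supervised learning on baseline measurements against a long-horizon label. A minimal sketch on synthetic data (feature counts and signal strength invented; a real study would need survival models and careful censoring):

    # Predict "cancer diagnosis within 14 years" from baseline markers.
    import numpy as np
    from sklearn.linear_model import LogisticRegression
    from sklearn.model_selection import cross_val_score

    rng = np.random.default_rng(0)
    n = 5_000                                 # cohort size (toy; real N > 10^6)
    X = rng.normal(size=(n, 200))             # imaging + blood + ctDNA features
    latent = X[:, :5].sum(axis=1)             # pretend a few features carry signal
    y = latent + rng.normal(scale=3, size=n) > 3.5   # 14-year outcome label

    model = LogisticRegression(max_iter=1000)
    auc = cross_val_score(model, X, y, cv=5, scoring="roc_auc").mean()
    print(f"cross-validated AUC: {auc:.2f}")  # signal recoverable at high N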
In the meantime, though, you have to deal with the fact that the system is set up exclusively for profitable care of well-progressed illnesses. It would be very expensive to run such a trial, over a long period of time, and the administrators would feel ethically bound to unblind and then report on every tiny incidentaloma, which completely fucks the training process.
The US is institutionally unable to run this study. The UK or China might, though.
> This sort of thing is exactly like preventative whole body MRI scans. It's very noisy, very overwhelming data that is only statistically useful in cases we're not even sure about yet. To use it in a treatment program is witchcraft at this moment, probably doing more harm than good.
The child of a friend of mine has PTEN hamartoma tumor syndrome, a tendency to develop tumors throughout life due to a mutation in the PTEN gene. The poor child gets whole body MRIs and other check-ups every half year. As someone in biological data science, I always tell the parents how difficult it will be to prevent false positives, because we don't have a lot of data on routine full body check-ups on healthy people. We just don't know the huge spectrum of what healthy/ok tissue can look like.
> It would be very expensive to run such a trial, over a long period of time, and the administrators would feel ethically bound to unblind and then report on every tiny incidentaloma, which completely fucks the training process.
I wonder if our current research process is only considered the gold standard because doing things in a probabilistic way is the only way we can manage the complexity of the human body to date.
It’s like me running an application many, many times with many different configurations and datasets, while scanning some memory addresses at runtime before and after the test runs, to figure out whether a specific bug exists in a specific feature.
Wouldn’t it be a lot easier if I could look at the relevant function in the source code and understand its implementation to determine whether it was logically possible based on the implementation?
We currently don’t have the ability to decompile the human body, or understand the way it’s “implemented”, but tech is rapidly developing tools that could be used for such a thing: either a way to aggregate and reason about more information about the human body than any person can hold in mind in one lifetime, or a way to simulate it with enough granularity to be meaningful.
Alternatively, the double-blindedness of a study might not be as necessary if you can continually objectively quantify the agreement of the results with the hypothesis.
I.e. if your AI model is reporting low agreement while the researchers are reporting high agreement, that could be a signal that external investigation is warranted, or prompt the researchers to question their own biases where they would’ve previously succumbed to confirmation bias.
All of this is fuzzy anyway - we likely will not ever understand everything at 100% or have perfect outcomes, but if you can cut the overhead of each study down by an order of magnitude, you can run more studies to fine-tune the results.
Alternatively, you can have an AI passively running studies to verify reproducibility and flag cases where it fails, whereas now the way the system values contributions makes it far less worthwhile for a human author to invest the time, effort, and money. I.e. recover from a bad study a lot quicker, rather than improve the accuracy.
EDIT: These are probably all ideas other people have had before, so sorry to anyone who reaches the end of my brainstorming and didn’t come out with anything new. :)
I guess the problem is a mismatch between detection capability and treatment capability? We seem to be getting increasingly good at detecting precancerous states but we don't have corresponding precancer treatments, just the regular cancer treatments like chemo or surgery which are a big hit to quality of life, expensive, harmful etc.
Like if we had some kind of prophylactic cancer treatment that was easy/cheap/safe enough to recommend even on mild suspicion of cancer, false positives included, we could offer it to everyone who tests positive. Maybe even just lifestyle interventions, if those are proven to work. That's probably very difficult though, just dreaming out loud.
>I guess the problem is a mismatch between detection capability and treatment capability?
The problem is you do the test for 7 billion people, say, 30 times over their lives... 210,000,000,000 tests. Imagine how many false negatives and false positives: the cost of follow-up testing only to find... false positive. The cost of telling someone they have cancer when they don't. The anger of telling someone they are free of cancer, only to find out they had it all along.
This tech isn't that good, nowhere near it; more like a 1-in-100 or 10-in-100 rate of "being wrong". Those numbers can get cheesed towards more false positives or more false negatives.
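Putting rough numbers on that (the error rates are the comment's hypothetical 1-in-100 and 10-in-100; assume for simplicity that every test result can be wrong independently):

    # Back-of-envelope: error counts for population-scale screening.
    tests = 7_000_000_000 * 30        # ~30 lifetime tests each = 2.1e11 tests
    for wrong_rate in (0.01, 0.10):   # hypothetical "being wrong" rates
        print(f"{wrong_rate:.0%} wrong -> ~{tests * wrong_rate:.1e} bad results")
    # 1%  -> ~2.1e9 follow-up workups or false reassurances
    # 10% -> ~2.1e10, several per person screened over a lifetime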
As for Grail, they tried to achieve this and printed OK numbers... but their test set was their training set, so the performance metrics went to shit when they rolled it out to production.
Would you say ctDNA tools are sensitive and specific enough now to be able to make a decision about post-op adjuvant therapies? “Now that I’ve had surgery, did the R0 resection get it all, or do I need to do chemo and challenging medication like mitotane?”
I’ve seen it most commonly thought of as using ctDNA to detect relapse earlier.
So, more like — did the tumor come back? And if that does happen, with ctDNA, can you detect that there is a relapse before you would otherwise find it with standard imaging. Most studies I’ve seen have shown that this happens and ctDNA is a good biomarker for early detection of relapse.
The case for proactively looking for circulating tumor DNA without an initial diagnosis or underlying genetic condition is a bit dicier IMHO. For example, what I’d really like to know (I haven’t read this article, but I’m pretty familiar with the field) is how many people had a detectable cancer in their plasma (ctDNA), but didn’t receive a cancer diagnosis. It’s been known for a while that you can detect precancerous lesions well before a formal cancer diagnosis. But what’s still an open question, AFAIK, is how many people have precancerous lesions or positive ctDNA hits that don’t form a tumor?
This seems like yet another place where the base rate is going to fuck us: intuitively (and you've actually thought about this problem and I haven't) I'd expect that even with remarkably good tests, most people who come up positive will not go on to develop related disease.
Ideally, you'd want a test (or two sequential ones) that's both very sensitive (rules candidates in) and specific (rules healthy peeps out). But that's only the first step, because there's no point knowing you're sick (from the population-level and economic POV) if you can't do something useful about it. So you also have to include downstream tests and treatments in your assessment, and all this suddenly becomes a very intricate probability network needing lots of data and thinking before decisions are made. And then, there's politics...
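The base-rate arithmetic behind this is standard Bayes. A sketch with made-up numbers (sensitivity 80%, specificity 95%, 0.5% prevalence), including the optimistic assumption that a second test's errors are independent of the first's:

    # Positive predictive value under a low base rate.
    def ppv(prevalence, sensitivity, specificity):
        true_pos = prevalence * sensitivity
        false_pos = (1 - prevalence) * (1 - specificity)
        return true_pos / (true_pos + false_pos)

    prev = 0.005                    # 0.5% of screened people have the cancer
    p1 = ppv(prev, 0.80, 0.95)      # single test: PPV ~7%
    p2 = ppv(p1, 0.80, 0.95)        # second independent test on positives: ~56%
    print(f"one test: {p1:.1%}, two sequential tests: {p2:.1%}")
    # Most single-test positives are false, which is why tiered strategies
    # (and their downstream costs) dominate the assessment.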
You might be able to target and preemptively treat some aggressive cancers!
I lost my wife to melanoma that metastasized to her brain after a cancerous mole and margin were removed 4 years earlier. They did due diligence, and by all signs there was no evidence of recurrence, until there was. They think the tumor appeared 2-3 months before symptoms (headaches) appeared, so it was unlikely you’d have discovered it otherwise.
With something like this, maybe you could get lower dose immunotherapy that would help your body eradicate the cancer?
How about tracking deltas between blood draws, starting at a youngish age when things are on average presumed to be in a good state? When a new feature turns up in a subsequent blood draw, could it then be something more concerning?
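That's essentially per-person baselining. A minimal sketch of the idea (analyte values and the 3-sigma threshold are arbitrary):

    # Flag analytes that drift from a person's own historical baseline.
    import numpy as np

    def flag_deltas(history, current, z_threshold=3.0):
        """history: draws x analytes; current: the latest draw."""
        mean = history.mean(axis=0)
        std = history.std(axis=0, ddof=1) + 1e-9   # avoid divide-by-zero
        return np.abs((current - mean) / std) > z_threshold

    history = np.random.default_rng(1).normal(100, 5, size=(10, 3))
    current = np.array([101.0, 99.0, 140.0])       # third analyte jumped
    print(flag_deltas(history, current))           # [False False  True]
    # Caveat: across hundreds of analytes and yearly draws, some 3-sigma
    # excursions happen by chance -- the same base-rate problem as above.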
The sensitivity challenge is compounded by the signal-to-noise ratio problem at ultra-low allelic fractions (<0.1%), where technical artifacts from library preparation and sequencing can mask true variants.
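A quick worked example of that (depth and error rate assumed for illustration):

    # At 0.05% allelic fraction and ~0.1% raw error, artifacts outnumber
    # true mutant reads even at very deep coverage.
    depth = 50_000
    true_af, error_rate = 0.0005, 0.001
    print(depth * true_af)      # ~25 true mutant reads
    print(depth * error_rate)   # ~50 error reads at the same position
    # Hence UMI / duplex-consensus error suppression before variant calling.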
Long term, the goal should be to find a treatment that is safe enough, and with side effects small enough, that it can be used for any suspicious mutations, even ones that may be decades away from killing you.
Yes... as I read the OP post I was thinking about how many weak natural poisons (e.g. bloodroot) have been shown to be effective at dispersing through the body, and how they might be a good treatment, e.g. a 1-2 month course of pills.
Here's what may seem like an unrelated question in response: how can we get 10^7+ bits of information out of the human body every day?
There are a lot of companies right now trying to apply AI to health, but what they are ignoring is that there are orders of magnitude less health data per person than there are cat pictures. (My phone probably contains 10^10 bits of cat pictures and my health record probably 10^3 bits, if that). But it's not wrong to try to apply AI, because we know that all processes leak information, including biological ones; and ML is a generic tool for extracting signal from noise, given sufficient data.
But our health information gathering systems are engineered to deal with individual, very specific hypotheses generated by experts, which require high quality measurements of specific individual metrics that some expert, such as yourself, has figured may be relevant. So we get high quality data, in very small quantities: a few bits per measurement.
Suppose you invent a new cheap sensor for extracting large (10^7+ bits/day) quantities of information about human biochemistry, perhaps from excretions, or blood. You run a longitudinal study collecting this information from a cohort and start training a model to predict every health outcome.
What are the properties of the bits collected by such a sensor that would make such a process likely to work out? The bits need to be "sufficiently heterogeneous" (but not necessarily independent) and their indexes need to be sufficiently stable (in some sense). What is not required is for specific individual data items to be measured with high quality, because some information about the original signal we're interested in (even though we don't know exactly what it is) will leak into the other measurements.
I predict that designs for such sensors, which cheaply perform large numbers of low quality measurements, would result in breakthroughs in detection and treatment, by allowing ML to be applied to the problem effectively.
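A small simulation of that claim, with every parameter invented: individually near-useless binary measurements, each leaking a faint amount of a latent health state, become informative in aggregate.

    # Many cheap, low-quality bits vs. a latent health signal.
    import numpy as np
    from sklearn.metrics import roc_auc_score

    rng = np.random.default_rng(0)
    n_people, n_bits = 2_000, 2_000
    latent = rng.normal(size=n_people)            # what we'd like to know
    leakage = 0.05                                # faint per-bit signal
    bits = (rng.normal(size=(n_people, n_bits))
            + leakage * latent[:, None]) > 0      # binarized = "low quality"
    outcome = latent > 1.0                        # eventual health outcome

    print(roc_auc_score(outcome, bits[:, 0]))         # one bit: ~0.5, useless
    print(roc_auc_score(outcome, bits.mean(axis=1)))  # 2,000 bits: well above 0.5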
I think it's a very interesting approach and I highly support such an initiative. The easiest way to get a lot of data out of the body is probably to tap the body's own monitoring system - the sensory nerves.
A chemosensor also sounds like a useful thing; it should give concentration over time. The minimally invasive option would be to monitor breath; there's a better signal in blood.
Or perhaps even routine bloodwork could incorporate some form of sequencing and longitudinal data banking. Deep sequencing, which may still be too expensive, generates tons of data that can be useful for things we don't even know to look for today; capturing this data could let us retroactively identify meaningful biomarkers or early signals once we have better techniques. That way, each time models/methods improve, prior data becomes newly valuable. Perhaps the same could be said of raw data/readings from instruments running standard tests (as opposed to just the final results).
I'd be really curious to see how longitudinal results of sequencing + data banking, plus other routine bloodwork, could lead to early detection and better health outcomes.
Last time someone tried to inject chips into the bloodstream, public opinion didn't handle it too well. It's the same as how we would learn a lot by being more cruel to research animals, but most people have other priorities. Good or bad? Who knows? Research meets social constructs.
Someone should add a sensor to all those diabetes sensors people have in their arms all day and collect general info. It would obviously bias towards diabetics but that's like half the US population anyways so maybe it wouldn't matter that much.
Sadly, health insurance in the US is unlikely to pay for most preventative care, because of the follow-up costs of false positives, and because they are betting that someone else (like the government) will pick up the tab when you get sick decades later.
It's kind of why I'm in favor of a universal option, to align financial incentives. Given how sick the US population is, it probably makes sense to put a lot more people on GLP-1s and invest in improving their efficacy and permanence, with nationalize-the-patent, COVID-Operation-Warp-Speed-level urgency. There are over 100M Americans who are pre-diabetic, and the cost of treating a diabetic is about $20k/yr. So roughly $2 trillion a year in potential new costs, on top of the misery and human suffering.
I have a friend nearing his mid-60s. Retired military, so now covered by Medicare, then TRICARE. Having prostate issues; PSA went from 12 to 19. Desperate to get a PET scan to determine whether this is benign BPH or cancer. He cannot get the scan approved, since both insurances will not approve a PET as an early diagnostic tool (the scan is about $7,500). Cannot imagine what will happen if everyone getting a cancer DNA signal of this type tries to get clarification via additional tests. USA health care really does not work that way. HTH, RF
A PSA of 12 is pretty far past the threshold for an MRI (don’t know about a PET scan) and an MRI would be pretty determinative about whether or not a biopsy is warranted. A biopsy would be pretty good at identifying cancer.
Sounds like either there are complicating factors or an absence of standard protocol adherence.
US health insurance is a mess, but that doesn’t sound like the entire story. I suspect urologists see a fair amount of friction for routine procedures related to prostate health.
They care about prevention but only if it's very cheap. I get emails all the time from my insurance company about joining their program that is supposed to help you live a healthier lifestyle.
US private healthcare insurance is required to pay for “medically necessary” treatments and generally does pay for medicine where they are unlikely to see the benefits (see statins).
How do you convince those pre-diabetic people to use a GLP-1? There was quite a bit of backlash about the one-time injection COVID vaccine when it was mandated.
You don’t need to mandate it; heaps of people who are obese or overweight are eager to take it, because they are sick of being this way, worried about the long-term health risks, feel the societal stigma, etc. For many such people who currently don’t, the big reason is not that they don’t want to, it is that their insurance doesn’t cover it and they can’t afford the $$$ of paying for it uninsured. But as patents expire the price is going to come down. Other people don’t like injecting themselves, but oral formulations are becoming available.
COVID was different because being a transmissible disease, there was a strong motivation to try to maximise the percent of the population immunised. With GLP-1 agonists, if you made them freely available, likely over >50% of eligible patients would take them voluntarily, which would result in massive long-term cost savings from lifestyle diseases, even considering the continued costs from the other 50% who will refuse. And insurers may even give discounts to those who take GLP-1s (if permitted by regulators)
GLP-1s are probably going to have the unintended side effect of increasing weight stigma - already obesity skews poor, once most of the well-off obese people cure their obesity with GLP-1s it is going to skew even more poor. I can foresee a cycle in which GLP-1s increase weight stigma which pushes more people into taking them which then increases weight stigma even more, which could drive up their adoption even further
Any bureaucratic system is going to be inefficient. You see it in countries with universal healthcare. In Canada, some provinces now have wait times of over a year from referral to treatment. Many European countries face similar access issues, though France and the Netherlands perform somewhat better.
The U.S. is a different kind of mess. It’s a patchwork of heavy government restrictions, large public programs like Medicare, and for-profit corporations, all thrown together without a coherent design. It’s no surprise it’s expensive. In 2023, healthcare spending was nearly 18% of GDP. Another factor could simply be wealth: higher per capita GDP tends to correlate with higher healthcare spending. To be fair to the U.S. healthcare system, it is highly capitalized, with much higher concentrations of diagnostic equipment like MRI machines than other OECD countries, and it does have some of the highest five-year survival rates for cancer and heart disease.
Even so, all of these healthcare systems are heavily dysfunctional in many ways.
In contrast to all of this, cosmetic surgery and laser eye surgery are the only fields of medicine where prices have actually fallen in inflation-adjusted terms, which is extraordinary, as prices in healthcare in general have increased much faster than inflation. The superior performance of these fields is because of basic market dynamics. People pay out of pocket, so they’re price conscious, and providers compete. There are also fewer regulatory restrictions since these fields aren’t tied up in government programs like Medicare.
Innovation is the only thing that reliably drives prices down. But in most of healthcare, it moves slowly. Devices often take 10-30 years to cycle out. Compare that to consumer electronics, where turnover happens every 1-2 years.
If it were up to me, I’d make restrictions on medical providers much lighter. Anyone could offer medical procedures as long as they disclose they’re uncertified and include a government-mandated warning. That kind of freedom is necessary to solve hard problems. You can’t regiment innovation and industry development. Gatekeeping in the name of consumer safety is the worst thing that can be done to any industry, and unfortunately, there is heavy gatekeeping in healthcare.
That is not to say that I am opposed to government intervention in general. I think it can play a critical role in advancing healthcare. Where government intervention creates the most value is in funding research for the public domain: drug designs, medical procedures, and open datasets. These investments have enormous returns and are best handled by governments. If the private sector focused on delivery and innovation, with governments making strategic contributions in foundational research, healthcare would see revolutionary improvements generation after generation.
The big secret is that they could detect cancer very early in most people, but the health care companies don't want to pay for the screening. You can pay out of pocket for these procedures. I was told this by a cancer researcher.
EDIT:
Adding these caveats:
1. There is a ton of nuance in the diagnosis, since most people have a small amount of cancer in their blood at all times
2. The screenings are $5-10k, plus follow-up appointments to actually see if it's real cancer
3. All in cost then could be much higher per person
4. These tests aren't currently produced to be used at mass scale
The not so big secret is that we can detect cancer early in a lot of people, but we also would detect a lot of not-cancer. We don't currently know the cost/benefit of that tradeoff for all these new types of screening, and therefore insurers and health systems are reluctant to pay the cost of the both screening and the subsequent workup. This is not just a financial consideration, though the financial part is a big part -- the workup for those that end up as not-cancer has non-negligible risks for the patients as well (I have had patients of mine suffer severe injury and even die from otherwise routine biopsies), and on top of that, some actual cancers may not really benefit from early discovery in the first place.
This is not to downplay the potential benefit of early cancer detection... which is huge. And in the US/UK anyway, there are ongoing large trials to try to figure some of this stuff out in the space of blood-based cancer screening, as part of the path to convincing regulatory bodies and eventual reimbursement for certain tests. As mentioned, you can currently at least get the Galleri test out of pocket (<$1k, not cheap, but not exorbitant either), as well as whole body MRIs (a bit more expensive, ~$2-5k).
Many prostate cancers, for instance, are slow growing and won't kill you before something else does. If you try to take that kind of cancer out surgically or zap it with radiation or chemo the side effects could be severe.
Yeah, after a detection there is a lot of work to determine if what they detected should be worried about. But this doesn't take away from the fact that cancer can be detected very early, and these screenings could easily save your life.
Most healthy, active people who eat decently, get enough rest, and avoid drinking and smoking, will be able to eliminate cancer as it comes up. The only people who would benefit from these screenings are already unhealthy and cancer might be just one of many potential conditions they could experience—the goal of healthcare is not to dedicate an inordinate amount of resources for procedures that may amount to not much of any long term benefit.
People talk about the “immune system” but they are really referring to a number of systems the body uses to regulate itself, more or less successfully, around environmental pressures. The body is a system under tension, sometimes extreme tension leads to extreme success (success here being growth of power), sometimes it breaks the body, and sometimes the systems have been slowly failing for a while, and most treatments will not help. Medicine is only useful in the specific case where the power of the body would be promoted if not for one thing, that the body would be healthy, at least manageably so, without that issue.
The usual story is that you’re just better off not knowing because you’ll end up doing more harm than good chasing every little suspicious diagnosis. Cancer happens all the time, but many times doesn’t lead to anything.
Yeah but then you'd go through life having biopsies all the time. If all people did a full body MRI almost everyone would have weird lumps that doctors would have to biopsy to be really sure, and then what do you do? Do you biopsy yourself every time some weird tissue appears? Most of those will be nothing and you'll be going through the complications of surgeries and anesthesia all the time just to always make sure.
Doing this could be actively worse for you and society based on the false positive rate. Testing and accidental unneeded treatment carry very real risks that could lead to net suffering and more death or damage if enough people are tested.
I actually know a little about this through my work. Cell-free DNA (cfDNA) has been known about for a few decades, but has become more of a focus in recent years because of the advent of immunotherapies, which are often highly targeted drugs. cfDNA has also been used in "liquid biopsies", i.e. a simple blood draw, because it can help you profile the tumor and the location of the cancer.
In my field, we all think that cfDNA testing will eventually become a standard thing that goes along with your annual physical's blood test, because it has predictive/preventative abilities.
How actionable is the result? Let's say you do detect trace amounts of tumoral DNA in your blood; what can you do? Can you prevent it from developing into a full-on tumor if you don't even know where it is?
Seems this newfound ability to detect cancers earlier than we thought possible could be used to develop better treatments that boost the body's innate ability to eliminate cancerous cells before they turn malignant: 1) Identify thousands of people with traces of cancerous DNA that are too weak to merit immediate action and who are willing to participate in a trial. 2) Divide them into two groups; one group gets, for one month, a daily dose of auricularia auricula fungus extract or whatever is believed to possibly prevent cancers from developing, while the other group gets a placebo. 3) Run the early-detection test again at the end of the month to see whether there is a difference in cancerous DNA signal strength between the two groups.
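The first sanity check on a design like that is a power calculation. A hedged sketch with made-up clearance rates (30% vs 20% of participants whose signal drops), using statsmodels:

    # Sample size to detect a difference in "ctDNA signal cleared" rates.
    from statsmodels.stats.power import NormalIndPower
    from statsmodels.stats.proportion import proportion_effectsize

    effect = proportion_effectsize(0.30, 0.20)   # hypothetical arm outcomes
    n_per_arm = NormalIndPower().solve_power(
        effect_size=effect, alpha=0.05, power=0.80, alternative="two-sided")
    print(f"~{n_per_arm:.0f} participants per arm")   # roughly 290
    # Smaller true effects (more likely for a fungus extract) push this
    # into the thousands, consistent with "identify thousands of people".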
A relative had this or a similar test come back positive. This sounds like a helpful signal on paper, but in reality it's not always actionable.
They assumed their previous cancer had survived and metastasized. Doctors couldn't find the source. It turned into a waiting game, where they lived with a sword of Damocles over their head. They were retested every few months and monitored. Then, after a year of tests, the levels dropped off. And the end result was that nothing has come of it so far.
It's normal to have some amount of pre-cancerous cells get naturally removed by your immune system. And this catches those too.
Have you seen Function Health? It’s now a unicorn.
As far as I can tell, they did a wholesale deal with Quest Diagnostics, run your results through ChatGPT, and give you supplement/diet recs via a pretty web portal twice a year for $499.
The claim is it’s 100 biomarkers that would cost the average person $15k retail.
> Maybe even just lifestyle interventions, if those are proven to work.
It gives people the agency to alter their lifestyle trajectory.
I personally suspect that people get and cure cancer all the time.
I wonder if cancer is just damage to your body - either a lot of direct damage or interfering with the body's ability to manage/heal itself.
If someone were pre-cancerous, would it help to exercise, cut out sugar, use the sauna, stop overeating? I'll bet it might make a difference.
> Most studies I’ve seen have shown that this happens and ctDNA is a good biomarker for early detection of relapse.
(I’ve done a little work in this area)
And the question would be: do I believe the test when it tells me the cancer is gone, knowing it’s not 100% accurate?
Or do you always do the adjuvant treatment, considering that even a very small chance of the test being wrong carries a very high cost (death)?
What is CHIP?
CHIP is clonal hematopoiesis of indeterminate potential: bone marrow cells acquire mutations and expand to make up a noticeable proportion of all your bone marrow cells, but they’re not fully malignant, not expanding out of control.
> Many European countries face similar access issues, though France and the Netherlands perform somewhat better.
It's all private in the Netherlands, just with non-discriminatory mandatory private insurance, so that makes sense.
Thanks for adding the caveats; they suggest that there are good reasons why it isn't clear-cut that health care companies should pay.
It's the same reason they pay for annual physicals in the first place.
> In my field, we all think that cfDNA testing will eventually become a standard thing that goes along with your annual physical's blood test.
Doing this same idea but with inflammation monitoring would be enormously valuable as well.
> Have you seen Function Health? It’s now a unicorn.
I’m a member and love it.
Then I remembered seeing this a month or so ago, and not knowing what to make of it: https://siphoxhealth.com/