I don’t like relative risk and relative risk reduction because they tend to overstate the effectiveness of the intervention.
In this case, the absolute risks of death in the GIM pre-intervention and post-intervention groups are 0.0215 (2.15%) and 0.0146 (1.46%), giving an absolute risk reduction of 0.0069 (0.69%).
While the relative risk reduction across the pre- and post-intervention periods is 26%, the absolute risk reduction is only 0.69%, with an NNT (number needed to treat) of 156, meaning that 1 patient in 156 was helped by this intervention.
In addition, they had 2 false alarms for each true alarm, which could suggest that interventions were performed on patients who did not require them — more tests, more medications, and possibly increased risk from said interventions.
This suggests that the CHARTwatch ML/AI is not helping all that much clinically.
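A quick sketch of that arithmetic, for anyone who wants to plug in their own numbers. It uses the rounded rates quoted above, so the outputs differ slightly from the study's own figures (the quoted 26% relative reduction and NNT of 156 presumably come from unrounded or adjusted numbers):

    # Risk arithmetic from the rounded rates quoted above; the study's
    # unrounded/adjusted figures (26% relative reduction, NNT of 156)
    # will differ slightly from these.
    pre_risk = 0.0215   # death rate, GIM pre-intervention
    post_risk = 0.0146  # death rate, GIM post-intervention

    arr = pre_risk - post_risk   # absolute risk reduction
    rrr = arr / pre_risk         # relative risk reduction
    nnt = 1 / arr                # number needed to treat

    print(f"ARR {arr:.2%}, RRR {rrr:.0%}, NNT {nnt:.0f}")
    # -> ARR 0.69%, RRR 32%, NNT 145 (from these rounded inputs)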
I like this analysis, although I come to a different conclusion: if AI can give early warning to nursing staff, telling them to 'look closer', and over 1/3 of the time it was right, that seems great. Right now in a 30 bed unit, nurses have to keep track of 30 sets of data. With this, they could focus in on 3 sets when an alarm goes off. I believe these systems will get better over time as well. But, as a patient, I'd 100% take a ward with that early AI warning and a 66% chance of false positives over one with no such tech. Wouldn't you?
I would not. High false alarm rates are a problem in all sorts of industries when it comes to warnings and alerts. Too many alerts, or too many false-positive alerts, cause operators (or nurses, in this example) to start ignoring such warnings.
No, many people working in clinical units wouldn't. Because of what might happen on false alarms. What GP said: more meds, more interventions. It's not clear at all whether such systems would help with current workflows and current technology. One of the most famous books about medicine says that good medicine is doing nothing as much as possible. It's still very true in 2024, and probably for a long time still.
I like this analysis, although I come to a different conclusion: if AI can allow nurses to manage 10x as many beds (30 vs 3), a hospital can now let go 90% of its nursing staff. Wouldn’t you?
Would you suffer serious non-lethal complications from a false alarm to (maybe) save a room neighbour you've never met before? That wouldn't be captured.
You can't automate it. You have to look at the data and charts to figure out the specifics you want and then you plug and chug. I haven't looked deeply at this though but whenever researchers use relative risk and it shows a profound effect, I always calculate the absolute risk to make sure that the intervention is effective.
Many researchers reach for relative risk because it makes the results look better!
That’s a good point, a similar conversation was had around the Covid jabs, with some research re vaccine mandates concluding “heads of governments, schools, healthcare facilities, and private businesses (were) misled by the vaccines’ reported 95% relative risk reduction”
In an ideal world, the nurse to patient ratio would be high enough that patients could be seen on regular rotation frequently. I've never been in a hospital where this was the case. So a system that can correctly prioritize resources for critical cases even if it's pulling resources away from non-critical cases will probably result in a net improved outcome.
Fun bit of trivia (though depressing) from the wiki
The term "MARS" is trademarked and licensed to Salford Systems. In order to avoid trademark infringements, many open-source implementations of MARS are called "Earth".
The question is will this lead to better care or a reduction in resources? Technology allows companies to become 'just good enough'. Any better than 'just good enough' and resources are withdrawn. If there is a 26% improvement in x and x was 'just good enough' before then the only 'rational' move by administration is to reduce other resources until x hits 'just good enough' again. That being said, I think the improvements are coming so rapidly in healthcare that we have a real chance of causing the entire system to shift into a new dynamic so maybe we will actually capture some of these gains for patients.
This takes place in Canada. There are no for-profit hospital complexes like the USA. All of our major hospitals are non-profit, reimbursed by the single-payer healthcare system and philanthropists getting stuff named after them. The profit-motive isn't as significant of a factor here.
That being said, I'm fine with a reduction of resources if additional resources don't increase the quality of my care. In Canada, doctors don't really like to prescribe antibiotics for minor infections.
Americans find this bizarre, but for a minor infection antibiotics are going to screw up your stomach bacteria and long-term health to maybe treat a disease that your body can easily handle on its own.
There's no magic value that comes from allocating resources to a problem. Oftentimes spending money has zero or negative impact beyond virtue-signalling that you care about the problem.
Canadian hospitals have largely the same cost cutting and "efficiency" measures as their US equivalents. Departments have budgets that they have to fight for, fiefdoms compete for scraps, and there is an enormous and perpetually growing admin/executive side that is taking more and more of the budget. Couple this with governments such as Ontario's that "starve the beast", so to speak, forcing hospitals to squeeze further.
I don't think we should ever take any sort of superior position on this. The same motivations and outcomes occur.
Having said that, efficiency is good, especially with an aging population that will require more and more care. Resources are limited, so applying them in the most effective, efficient way possible is always a win.
I totally agree. The tie to whole patient outcome is stronger in that system. Still not perfect, but a lot more direct for sure. It may be an odd thing to say, but because of that there is an argument that the Canadian system is closer to a true free market healthcare system with the patient as the consumer than the US system.
I think you're missing an important part of the equation: it's outcome quality per amount paid. If you could have gotten 20% better results but it would mean tripling the costs of healthcare because we'd need to hire a lot more staff, perhaps we felt that was a bad deal.
If you can get 20% by paying... what, presumably <5% more for some ML tool that double-checks stuff and flags risky stuff... perhaps it's something we want to do.
No, my argument isn't that this wouldn't be used, it is that by using it there will be overage in quality of care above 'good enough' for the same or similar cost. That will result in the most expensive resources being reduced until quality of care is back to 'good enough' at less cost. It isn't a stretch to imagine that a tool like this would lead to a reduction in nursing staff since they can make rounds more effective and now don't need as many people to get the same level of quality job done.
Outcome is not one thing. The patient wants better health. The provider has an interest in profits. The government has an interest in optics…well anyone using “AI” does.
I think in this case it's unlikely because I don't think the problems the tool solves correspond 1-1 with reduced staffing or other resources. The tool mostly seems to provide ongoing diagnosis at a level of detail the clinical team doesn't have regular bandwidth for (they might make one diagnosis of the patient per time they visit the patient rather than on an ongoing basis). It doesn't really reduce the amount of time staff can spend with patients. They can't get rid of doctor diagnosis entirely so they can't really reduce time per patient in any effective way.
Starving the beast is an ongoing program, the budget will be cut (or fixed, hence silently cut through inflation) either way. My hope is that improvements like this will stave off the harmful effects of the budget cuts.
You realistically can’t starve the beast that is healthcare. The costs will go up disproportionately, and they do, in basically every advanced economy: https://en.m.wikipedia.org/wiki/Baumol_effect
This question has an implicit assumption that you're talking about a US-style health system and the incentives that exist in a system of that structure.
This is exactly why a structure like the UK NHS which is going for "what's the most healthcare I can get for the country with a fixed pot of money" is a better setup.
For instance, in the UK the female contraceptive pill is free to whoever wants it. Because that is a whole lot cheaper than extra (unwanted) pregnancies. Similarly the NHS has spent money on reducing smoking because that's cheaper than dealing with the health effects.
The early death of smokers tends to save a long, expensive period of end-of-life care. I believe smoking deaths reduce health care costs, ironically enough.
This scenario exists only in progressives' and HNers' heads.
Companies make money, and capitalism works by offering more services, not fewer. Are there companies that do short-term thinking? Yes.
But overall, our standard of living and quality of services have always improved.
That was a rational capitalist argument. If a company has an opportunity to make money, they will. Any better than 'good enough' isn't rational and the people running that company should be fired. In the long term the entire industry will slowly adopt this and the standard of care may rise slightly as these gains are used for competitive advantage instead of pure profit but that will take a while at best and relies on a true free market, which healthcare definitely isn't.
IME anything that looks vaguely like a cost center often has something vaguely resembling an acceptable service/quality level and people typically aim to achieve that with the lowest cost. It’s not at all uncommon to cut budgets/headcount when that goal is exceeded noticeably.
Unclear what "AI" brings to the table here. Sounds like traditional automation & monitoring could do the job here. No mention of how the model works, or what kind of training is involved.
> white blood cell count was "really, really high"
You don't need AI for this.
I wish they would provide a more compelling example.
It’s a regression model. You don’t “need” AI for anything. But using ML to identify thresholds for decision making is extremely useful.
I don’t like calling everything AI, but I’m even more irritated by people that don’t understand the value of simple ML models for low hanging fruit decisions like the one shown here
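To make that concrete, here is a minimal sketch (not the CHARTwatch model, and with made-up features and synthetic data) of what "using ML to identify thresholds for decision making" can look like: fit a simple classifier on historical observations, then pick the alert cutoff that stays within a false-alarm budget, such as the roughly 2 false alerts per true alert discussed elsewhere in the thread.

    # Minimal sketch, not the CHARTwatch model: train a simple classifier
    # on historical observations, then choose the score cutoff that keeps
    # within a false-alarm budget of 2 false alerts per true alert
    # (i.e. precision >= 1/3). Features and labels here are synthetic.
    import numpy as np
    from sklearn.linear_model import LogisticRegression
    from sklearn.metrics import precision_recall_curve

    rng = np.random.default_rng(0)
    n = 5000
    deteriorated = rng.random(n) < 0.05              # label: 1 = patient deteriorated
    heart_rate = rng.normal(80, 12, n) + 15 * deteriorated
    wbc = rng.normal(8, 2, n) + 6 * deteriorated     # white blood cell count
    X = np.column_stack([heart_rate, wbc])
    y = deteriorated.astype(int)

    scores = LogisticRegression().fit(X, y).predict_proba(X)[:, 1]
    precision, recall, thresholds = precision_recall_curve(y, scores)

    # lowest cutoff whose precision clears the 1-true-per-2-false budget
    cutoff = thresholds[precision[:-1] >= 1 / 3][0]
    print(f"alert when risk score >= {cutoff:.3f}")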
In AI applications, especially those involving predictive modeling, MARS can be used to improve the accuracy of predictions. For example, MARS models are used in time series forecasting, financial predictions, environmental modeling, and other domains where relationships between inputs and outputs are complex and non-linear. By adding time-awareness, the model can handle time-based data more effectively.
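For the curious, a minimal sketch of MARS on lagged time-series inputs, assuming the open-source py-earth package (one of the "Earth"-named implementations mentioned above) is installed; the series here is synthetic.

    # MARS ("Earth") on lagged inputs, a common way to make it time-aware:
    # predict the next value of a series from its three preceding values.
    # Assumes the py-earth package; the data is synthetic.
    import numpy as np
    from pyearth import Earth

    rng = np.random.default_rng(1)
    series = np.sin(np.arange(500) / 20) + 0.1 * rng.standard_normal(500)

    lags = 3
    X = np.column_stack([series[i:len(series) - lags + i] for i in range(lags)])
    y = series[lags:]

    model = Earth(max_degree=2)   # allow simple interactions between lags
    model.fit(X, y)
    print(model.summary())
    print("one-step forecast:", model.predict(series[-lags:].reshape(1, -1))[0])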
It's the difference between "give the programmer this medical report and have them parse out the white blood cell count" versus s/programmer/AI. And the same every time the report changes in any way.
I've been that programmer more times than I can count. I'm much happier about being able to work on better problems instead than I am worried about AI taking away my rice bowl.
I think there is some element of "technology laundering" here that I saw during the blockchain hype. Even if plain ol' monitoring and automation could solve your problem no executives want to back that. If you say it's adding AI, blockchain, etc. they get to feel like a visionary so they'll fund your project
I am tired of people redefining AI to exclude fully viable and useful technologies in favour of the latest hype. AI should be a functional concept, not defined by technological choices.
Most modern AI does even less. It simply flows values through a graph. No decision is ever made. The consumer of the network interprets the result and makes a decision.
I think the real question is why this is being reported on. There are always medical advancements, but this one gets chosen as a news story because "AI" in the headline gets clicks.
Because healthcare (and banking and ....) are horribly behind on tech. We have life-saving devices in hospitals still running Windows 95 as an OS. Also, the main problem in healthcare is misaligned incentives. As said elsewhere in this thread, this kind of tech will get adopted when it enables cost reductions larger than its costs.
Because tech people don't understand how healthcare systems work, and reciprocally healthcare workers have neither the education nor the time to understand new tech. The result is what you get today: people from both sides shouting at deaf ears on the internet. Also, the usual corporate culture issues.
Machine learning is extremely good at recognising patterns and I'd much rather trust an LLM's spotting accuracy for an early warning system than the regex code of hospital IT workers
Machine learning is indeed extremely good at pattern recognition, but I wouldn't trust an LLM to reliably identify patterns, especially in a medical context. As other commenters have said, this article is evidence of classical methods continuing to be useful.
This doesn't make sense on many levels. "Hospital IT" does not code the hospital EHR systems, just like the airport doesn't code flight management systems.
These are life-long software engineers, just like others reading this comment, using the best tools at their disposal to engineer lifesaving software. They're not using "regex" to develop algorithms for monitoring patients (???), and frankly that suggestion is so wild that one has to assume you don't know anything about algorithm design at all.
An LLM literally hallucinates incorrect answers by design and struggles to get extremely basic math and spelling correct.
You're welcome to put your literal life in the hands of a hallucinating English generator, but when it comes to healthcare, I want a "0% LLM" policy. LLMs will be the cheap things that offer substandard care to poor people, while the wealthy and elite enjoy personalized and human-centered care.
Knowing what I know about workplace dynamics in hospitals I'm gonna go out on a limb and say that the "new hotness" factor of the term "AI" probably does a lot of heavy lifting here when it comes to getting buy in from management and users.
Forgoing a decade of income to get some letters beside your name selects for people who don't take orders from Clippy unless you market it well.
This is a great example of "classic AI" being more than good enough.
Using AI to find patterns in patients and intervene was something I worked on in my last job in Specialty Pharma. There are many red flags on patients long before they even start treatment; sadly, income is one of the largest red flags here in the States.
We were able to perform interventions earlier and improve outcomes with a simple regression model that tried to determine the number of missed doses.
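Not the poster's actual model, but a sketch of the kind of thing described, with hypothetical intake features, a synthetic target, and a made-up outreach cutoff:

    # Generic sketch: predict how many doses a patient is likely to miss
    # from intake data, then flag high-risk patients for earlier outreach.
    # Feature names, data, and the cutoff are all hypothetical.
    import numpy as np
    from sklearn.linear_model import PoissonRegressor

    rng = np.random.default_rng(2)
    n = 2000
    income_decile = rng.integers(1, 11, n)
    distance_km = rng.uniform(0, 30, n)      # distance to pharmacy
    prior_gaps = rng.poisson(1.0, n)         # past gaps in refills
    X = np.column_stack([income_decile, distance_km, prior_gaps])

    # synthetic target: missed doses per month, loosely tied to the features
    y = rng.poisson(np.exp(0.8 - 0.08 * income_decile + 0.02 * distance_km + 0.3 * prior_gaps))

    predicted_missed = PoissonRegressor().fit(X, y).predict(X)
    flagged = predicted_missed > 3           # hypothetical intervention cutoff
    print(f"{flagged.sum()} of {n} patients flagged for early outreach")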
Medicine isn't science because science is not as advanced as many would think. The lack of workplace integration is also a big factor.
I don't think we discourage second opinions, except maybe in some for-profit structures. The bad idea is to have multiple people making decisions in parallel. I'm not in the US, though.
Regarding advocacy, I don't think it's so crazy. It's very good to have a valid interlocutor when the patient is diminished. Also, hospitals are big systems with limited personalization. If someone's there to call out the system when it's trying to shoehorn too hard, it's also very good.
Enjoy the privilege of seeing multiple doctors as long as you still can. With steady cost reduction (AI, automation, less effort per patient) and increase in medical authoritarianism ("expert said so") that privilege is on thin ice. In the UK it's already normal to have a single area-designated doctor you're allowed to go to, and that doctor is also a gatekeeper to refer you to specialists. Hope he likes you!
Beyond that, AI diagnosis would likely require an extensive medical online profile of you. Such e-med profiles obviously already exist in various countries, as opt-out features. In the name of cost reduction through automation, I'll be so free and call it: These profiles will become mandatory over the next ten years. Either way, good luck getting a second opinion once a false diagnosis ended up in your file or once AI continuously misidentifies a pattern present there.
I was semi-retired two years ago and decided to do an LPN program to work part time, do something physical, something that felt like a moral win and good for society.
I would have had no problem intellectually getting through the program but quit after the first night in a hospital.
Anyone sitting at a desk cannot understand how tough and miserable a nursing job is. Everyone is basically miserable and stressed out. The work is completely thankless, disgusting and dangerous, with personal liability on the line if you make a mistake. Everything that we take for granted in an office setting just doesn't apply in a medical setting.
I eventually just went back to a bullshit project management job, for more money than a nurse of course. This is obviously part of the problem.
It is easy to complain about the system when it is someone else who has to help grandma to the bathroom. There is no easy solution for any of this given the demographics. It is basically a disaster.
We have the term “GOFAI” to distinguish “modern” AI from the older stuff (big bag of if statements, behavior trees, etc.), but do we need a new term now to distinguish pre-LLM/diffusion models (neural networks and tree-based models)? Everyone thinks “ChatGPT” when they hear AI now, but surely this is something more like XGBoost or a neural network under the hood.
No, we don't use GOFAI, we call it machine learning. LLMs are a subset of the field, and if you want to refer to them just use the term LLM. We don't need new terms when we already have easy to use precise language.
Marketing will abuse any term they get their hands on, and certainly "AI" has been abused, but in the field it is usually the umbrella term for all areas of research into making "intelligent" behaviour, be it expert systems, logic systems, machine learning, statistical machine learning, or otherwise.
Important to note that the timing of this means that it's dedicated, specific AI, not "throw a wrapper and a specific prompt in front of ChatGPT" AI. Of course it's all muddied now.
Test results are already reported from testing equipment as a value and an expected range (to account for a specific machine/reagent's calibration). Notifying when a value is out of range hardly seems like AI, but it certainly might be marketed as such.
Maybe there is some nuance for things like a patient in for liver issues, where their liver enzymes are expected to be abnormal, and the system identifies when a value is abnormal for them.
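A tiny sketch of that "abnormal for them" idea: compare a new result against the patient's own recent baseline instead of only the printed reference range (the values and cutoff here are hypothetical).

    # Flag a lab value that deviates strongly from this patient's own
    # baseline, rather than from the population reference range.
    # Values and the z-score cutoff are hypothetical.
    import numpy as np

    def abnormal_for_patient(history, new_value, z_cutoff=3.0):
        history = np.asarray(history, dtype=float)
        baseline, spread = history.mean(), history.std(ddof=1)
        if spread == 0:
            return new_value != baseline
        return abs(new_value - baseline) > z_cutoff * spread

    # A liver patient whose ALT runs high but stable: 180 is not alarming
    # for them, while a jump to 400 is, even though both are far outside
    # the usual population reference range.
    alt_history = [150, 160, 170, 155, 165]
    print(abnormal_for_patient(alt_history, 180))   # False
    print(abnormal_for_patient(alt_history, 400))   # True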
Yeah, I'm not sure how this qualifies as AI outside of marketing, but wanted to get ahead of the people whose opinions would be biased by the current en vogue LLMs.
the headline says we're talking about death: does that mean 1 life was saved for every 156 patients?
>In addition, they had 2 false alarms for each true alarm and ... and possibly increased risk from said interventions
but wouldn't this study have captured any deaths from those interventions, so the 1 out of 156 life-savings was net?
ChatGPT came to roughly the same conclusion as the gp comment when provided with the study PDF.
https://chatgpt.com/share/66eb09e3-7a74-8008-afa8-3b60161d24...
(Though obviously this approach still requires you to go and look at the PDF yourself to make sure it isn't making anything up)
Happy to talk about it.
Are you in the Healthcare industry?
https://www.sciencedirect.com/science/article/pii/S277265332...
https://doi.org/10.1503/cmaj.240132
I found this interesting:
> 1 truly alerted patient for every 2 falsely alerted patients was deemed an acceptable number of false alarms
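Restated as a positive predictive value, that acceptability bar works out to about one in three:

    # 1 true alert per 2 false alerts ~= 33% positive predictive value
    true_alerts, false_alerts = 1, 2
    ppv = true_alerts / (true_alerts + false_alerts)
    print(f"PPV: {ppv:.0%}")   # 33%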
https://en.m.wikipedia.org/wiki/Multivariate_adaptive_regres...
Abundant contraception encourages and promotes promiscuity
> the NHS has spent money on reducing smoking because that's cheaper than dealing with the health effects.
Reducing tobacco usage makes more room for OTC nicotine and vaping to replace it, among other stimulants.
From a nation which should know better after being so very thoroughly roasted by Mr. Swift some few years ago: https://www.gutenberg.org/files/1080/1080-h/1080-h.htm
Even declaring that is the case doesn’t change that it’s still clearly a personal judgement depending on the individual.
"Depending on the individual" here means, depending if you're a share holder, or the patient dying on the cot.
A 26% reduction in unexpected deaths, apparently.
And you don’t need Dropbox for file sync. Machine learning makes integrating automation easier.
"A difference-in-differences comparison between GIM and subspecialty units demonstrated no statistically significant difference in outcomes"
LLMs and accuracy in one sentence, in the context of quantifying thresholds, is stunning.
LLMs don't have a concept of numerical accuracy.
If a loved one is in the hospital, stay with them as long as the hospital will allow you to.
Medical professionals, mostly nurses, are spread extremely thin. They are so busy and/or jaded that they often neglect to show any compassion or empathy until they see somebody else doing it. Having a family member nearby also keeps them accountable.
I have seen it personally too many times.
Medicine isn't science, and it's frightening.
The weirdest thing I've experienced as a patient is that physicians will urge you against second opinions or having multiple doctors.
Hope telemedicine becomes more mainstream, I'd like to avoid US physicians as much as possible.
If you want the details, call it a regression model. If not, why insist on communicating the details?