I'm an interventional radiologist with a master's in computer science. People outside radiology don't get why AI hasn't taken over.
Can AI read diagnostic images better than a radiologist? Almost certainly the answer is (or will be) yes.
Will radiologists be replaced? Almost certainly the answer is no.
Why not? Medical risk. Unless the law changes, a radiologist will have to sign off on each imaging report. So say you have an AI that reads images primarily and writes pristine reports. The bottleneck will still be the time it takes for the radiologist to look at the images and validate the automated report. Today, radiologists read very quickly, with a private practice rad averaging maybe 60-100 studies per day (XRs, ultrasounds, MRIs, CTs, nuclear medicine studies, mammograms, etc). This is near the limit of what a human being can reasonably do. Yes, there will be slight gains from not having to dictate anything, but validating everything still takes nearly as much time.
Now, I'm sure there's a cavalier radiologist out there who would just click "sign, sign, sign..." but you know there's a malpractice attorney just waiting for that lawsuit.
This is like saying that self-driving cars won't ever become a thing because someone behind the wheel needs to be there to take the blame. The article cites AI systems that the FDA has already cleared to operate without a physician's validation.
> This is like saying that self-driving cars won't ever become a thing because someone behind the wheel needs to be there to take the blame.
Which is literally the case so far. No manufacturer has shown any willingness to take on the liability of self-driving at any scale to date. Waymo has what? 700 cars on the road, with the finances and lawyers of Google backing it.
Let me know when the bean counters sign off on fleets in the millions of vehicles.
I'm curious how many people would want a second opinion (from a human) if they're presented with a bad discovery from a radiological exam and are then told it was fully automated.
I have to admit if my life were on the line I might be that Karen.
The FDA can clear whatever they want. A malpractice lawyer WILL sue and WILL win whenever an AI mistake slips through and no human was in the loop to fix the issue.
It's the same way that we can save time and money if we just don't wash our hands when cooking food. Sure, it's true. But someone WILL get sick and we WILL get in trouble for it.
This is essentially what's happened with airliners.
Planes can land themselves with zero human intervention in all kinds of weather conditions and operating environments. In fact, there was a documentary where the plane landed so precisely that you could hear the tires hitting the center lane marker as it landed and then taxied.
Yet we STILL have pilots as a "last line of defense" in case something goes wrong.
At the end of the day, there's a decision that needs to be made, and decisions have consequences. And in our current society, there is only one way we know of to make sure that the decision is taken with sufficient humanity: by making a human responsible for that decision.
Medicine does not work like traffic. There is no reason for a human to care whether the other car is being driven by a machine.
Medicine is existential. The job of a doctor is not to look at data, give a diagnosis and leave. A crucial function of practicing doctors is communication and human interaction with their patients.
When your life is on the line (and frankly, even if it isn't), you do not want to talk to an LLM. At minimum you expect that another human can explain to you what is wrong with you and what options there are for you.
I'm moderately amused that as an interventional radiologist, you didn't bother to mention that IRs do actual procedures and don't just WFH. When I was doing my DxMP residency there was a joke among the radiology residents that IRs had slotted into the cushiest field of medicine and then flopped the landing by choosing the only subfield that requires physical work.
Well I do enjoy procedures. As for diagnostics, it’s very different when you come from a CS background.
On a basic level, software exists to expedite repetitive human tasks. Diagnostic radiology is an extremely repetitive human task. When I read diagnostics, there’s a voice in the back of my head saying, “I should be writing code to automate this rather than dictating it myself.”
I don't know. Doesn't sound like a very big obstacle to me. But I don't think AI will replace radiologists even if there were a law that said, "blah blah blah, automated reports, can't be sued, blah blah." I personally think the consulting work they do is really valuable and very difficult to automate; we would have to be in an AGI world for radiologists to get replaced, which seems unlikely.
The bigger picture is that we are pretty much obligated to treat people medically, which is a good thing, so there is a lot more interest in automating healthcare than say, law, where spending isn't really compulsory.
> I don't know. Doesn't sound like a very big obstacle to me
A lot of things are one law amendment away from happening and they aren't happening. This could well become another mask mandate, which while being reasonable in itself, rubs people the wrong way just enough to become a sacred issue.
I'm not going to comment on whether AI is better than human radiologists or not, but if it is, what will happen is this:
Radiologists will validate the results but either find themselves clicking "approve, approve, approve" all day, or disagree and find they were wrong (since our hypothesis is that the AI is better than a human). Eventually, this will be common knowledge in the field, hospitals will decide to save on costs and just skip the humans altogether, lobby, and get the law changed.
I don't think the legal framework even allows the patient to make that trade off. Can a patient choose 99.9% accuracy instead of 99.95% accuracy and also waive the right to a malpractice lawsuit?
You know the crazy thing about this? For this application I think it’s similar to spam. AI can easily be trained to be better than a human.
And it’s definitely not a 0.05 percent difference. AI will perform better by a long shot.
Two reasons for this.
1. The AI is trained on better data. If the radiologist makes a mistake, that mistake is identified later, and the training data can be flagged.
2. No human indeterminism. AI doesn’t get stressed or tired. This alone even without 1. above will make AI beat humans.
Let's say 1. was applied, but that only covers the consistent mistakes that humans make. Consistent mistakes are eventually flagged and show up as a pattern in the training data, and the AI can learn the pattern even though humans themselves never actually notice it. Humans just know that the radiologist's opinion was wrong because a different outcome happened; we don't even have to know why it was wrong, and many times we can't know… just flagging the data is enough for the AI to ingest the pattern.
Inconsistent mistakes come from number 2. If humans make mistakes due to stress, the training data reflecting those mistakes will be minuscule in size and also random, without pattern. The majority of the training data will smooth these issues out and the model will remain consistent. Right? A marker that follows a certain pattern shows up 60 times in the data but one time it's marked incorrectly because of human error… this will be smoothed out.
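A toy sketch of that smoothing claim (synthetic data, purely illustrative, not a medical model): flip roughly 1 in 60 labels at random and check that a simple classifier still recovers the underlying pattern.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
X = rng.normal(size=(6000, 20))      # synthetic "findings"
w_true = rng.normal(size=20)
y = (X @ w_true > 0).astype(int)     # the consistent underlying pattern

# Corrupt ~1 in 60 labels, mimicking rare, random human error
flip = rng.random(len(y)) < 1 / 60
y_noisy = np.where(flip, 1 - y, y)

model = LogisticRegression(max_iter=1000).fit(X, y_noisy)
# Scored against the *clean* labels, accuracy stays high: the rare random
# errors are smoothed out by the consistent majority of the data.
print(model.score(X, y))
```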
Overall it will be a statistical anomaly that defies intuition. Similar to how flying in planes is safer than driving. ML models in radiology and spam will beat humans.
I think we are under this delusion that all humans are better than ML but this is simply not true. You can thank LLMs for spreading this wrong intuition.
I think it's the other way around: AI would certainly have better accuracy than a human, since AI can see things pixel by pixel.
You can take a 4k photo of anything, change one pixel to pure white and a human wouldn't be able to find this pixel by looking at the picture with their eyes. A machine on the other hand would be able to do it immediately and effortlessly.
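That single-pixel claim is trivially true for exact comparison, assuming you have both the original and the modified image; a minimal sketch:

```python
import numpy as np

# Two "4K" RGB frames that differ in exactly one pixel
a = np.zeros((2160, 3840, 3), dtype=np.uint8)
b = a.copy()
b[1234, 2345] = [255, 255, 255]   # one pure-white pixel

# A human scanning ~8.3 million pixels would likely never spot it;
# an exact diff finds it immediately.
ys, xs = np.nonzero(np.any(a != b, axis=2))
print(list(zip(ys, xs)))          # the changed pixel's coordinates: (1234, 2345)
```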
Machine vision is literally superhuman. For example, military camo can easily fool human eyes, but a machine can see through it clear as day, because it can tell the difference between pixel values that look identical to us.
I'm a diagnostic radiologist with 20 years clinical experience, and I have been programming computers since 1979. I need to challenge one of your core assumptions.
> Can AI read diagnostic images better than a radiologist? Almost certainly the answer is (or will be) yes.
I'm sorry, but I disagree, and I think you are making a wild assumption here. I am up to date on the latest AI products in radiology, use several of them, and none of them are even in the ballpark on this. The vast majority are non-contributory.
It is my strong belief that there is an almost infinite variation in both human anatomy and pathology. Given this variation, I believe that in order for your above assumption to be correct, the development of "AGI" will need to happen.
When I interpret a study I am not just matching patterns of pixels on the screen with my memory. I am thinking, puzzling, gathering and synthesizing new information. Every day I see something I have never seen before, and maybe no one has ever seen before. Things that can't and don't exist in a training data set.
I'm on the back end of my career now and I am financially secure. I mention that because people will assume I'm a greedy and ignorant Luddite doctor trying to protect my way of life. On the contrary, if someone developed a good replacement for what I do, I would gladly lay down my microphone and move on.
But I don't think we are there yet, in fact I don't think we're even close.
Can a human reliably and carefully study imaging from screening tests for hours on end (think of a future world where whole-body MRI scanning for asymptomatic people becomes affordable and routine thanks to AI processing) and not miss subtle anomalies?
I can easily imagine that humans are better at really digging deeply and reasoning carefully about anomalies that they notice.
I doubt they're nearly as good as computers at detecting subtle changes on screens where 99% of images have nothing worrisome and the priors are "nothing is suspicious".
I don't want to equate radiologists with TSA screeners, but the false negative rate for TSA screening of carryon bags is incredibly high. I think there's an analog here about the ability of humans to maintain sustained focus on tedious tasks.
> When I interpret a study I am not just matching patterns of pixels on the screen with my memory.
Seems like an oversimplification, but let's say it's just true. Wouldn't you rather spend your time on novel problems that you haven't seen before? Some ML system identifies the easy/common ones that it has high confidence in, leaving the interesting ones for you?
Your belief is held by many, many radiologists. One thing I like to highlight is that LLMs and LVMs are much more advanced than any model in the past. In particular, they do not require specific training data to contain a diagnosis. They don't even require specific modality data to make inferences.
Think about how you learned anatomy. You probably looked at Netter drawings or Grey's long before you ever saw a CT or MRI. You probably knew the English word "laceration" before you saw a liver lac. You probably knew what a ground glass bathroom window looked like before the term was used to describe lung findings.
LLMs/LVMs ingest a huge amount of training data, more than humans can appreciate, and learn connections between that data. I can ask these models to render an elephant in outer space with a hematoma on its snout in the style of a CT scan. Surely, there is no such image in the training set, yet the model knows what I want from the enormous number of associations in its network.
Also, the word "finite" has a very specific definition in mathematics. It's a natural human fallacy to equate very large with infinite. And the variation in images is finite. Given a 16-bit, 512 x 512 x 100-slice CT scan, you're looking at (2^16)^26,214,400 possible images (65,536 values for each of the 26,214,400 voxels). Astronomically large, but still finite.
Of course, the realistic space is way, way smaller. As a human, you can't even look at the entire grayscale spectrum. We just say < -500 Hounsfield units (HU), that's air, -200 < fat < 0, bone/metal > 100, etc. A gifted radiologist can maybe distinguish 100 different tissue types based on the HU. So, instead of 2^16 pixel values, you effectively have 100, and the number of clinically distinct scans, the ones that would actually change a report, is a minuscule fraction of even that reduced space. In principle you could imagine pre-drafting the set of plausible reports and just picking the one that fits best at inference time; the amount you'd have to change would be minuscule.
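As a rough sketch of that HU-windowing reduction (thresholds are the approximate values from the paragraph above, not a clinical lookup table, and the in-between band labels are filler of my own):

```python
import numpy as np

def quantize_hu(ct_volume: np.ndarray) -> np.ndarray:
    """Collapse raw Hounsfield units into a handful of coarse tissue
    classes, mirroring how a reader "windows" a CT rather than
    distinguishing all 2^16 gray levels."""
    labels = np.empty(ct_volume.shape, dtype="U10")
    labels[ct_volume < -500] = "air"
    labels[(ct_volume >= -500) & (ct_volume < -200)] = "lung/other"
    labels[(ct_volume >= -200) & (ct_volume < 0)] = "fat"
    labels[(ct_volume >= 0) & (ct_volume <= 100)] = "soft"
    labels[ct_volume > 100] = "bone/metal"
    return labels

# A small fake volume (a real CT would be 512 x 512 x 100)
volume = np.random.randint(-1024, 3000, size=(64, 64, 8))
print(np.unique(quantize_hu(volume), return_counts=True))
```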
Johns Hopkins has an in-house AI unit where they train their own AIs to do imaging analysis. In fact, this center made the rounds a few months ago in an NYT story about AI in radiology.
What was left out was that these "cutting edge" AI imaging models were old-school CNNs from the mid-2010s, running on local computers. It seems only now is the idea of using transformers (the architecture behind LLMs) being explored.
In that sense, we still do not know what a purpose-built "ChatGPT of radiology" would be capable of, but if we use the data point of comparing AI from 2015 to AI of 2025, the step up in ability is enormous.
"Latest products" and "state of the art" are two very, very different classes of systems. If anything medical has reached the state of a "product", you can safely assume that it's somewhere between 5 and 50 years behind what's being attempted in the labs.
And in AI tech, even "5 years ago" is a different era.
In the year 2025, we have those massive multimodal reasoning LLMs that can cross-reference data from different images, text, and more. If the kind of effort and expertise that went into the general-purpose GPT-5 went into a more specialized medical AI, where would its capabilities top out?
> Every day I see something I have never seen before, and maybe no one has ever seen before.
Do you have any typical examples of this you could try to explain to us laymen, so we get a feel for what this looks like? I feel like it's hard for laymen to imagine how you could be seeing new things outside a pattern every day (or week).
Doesn't most of the stuff a radiologist does get double-checked anyway by the doctor who ordered the scan in the first place? I guess not for a more typical screening scan like a mammogram. However, for anything else like a CT, MRI, X-ray, etc., I expect the doctor/NP who ordered it in the first place will want to take a look at the image itself and not just the report on the image.
A primary physician (or NP) isn't in a position to validate the judgement of a specialist. Even if they had the training and skill (doubtful), responsibility goes up, not down. It's all a question of who is liable when things go wrong.
As an ER doc I look at a lot of my own studies, because I'm often using my interpretation to guide real-time management (making decisions that can't wait for a radiologist). I've gotten much better over time, and I would speculate that I'm one of the better doctors in my small hospital at reading my own X-rays, CTs, and ultrasounds.
I am nowhere near as good as our worst radiologist (who is, frankly... not great).
It's not even close.
>People outside radiology don't get why AI hasn't taken over
AI will probably never take over; what we really need is AI working in tandem with radiologists and complementing their work, to help with their busy schedules (or the limited number of radiologists).
The OP title could also be changed to "Demand for human cardiologists is at an all-time high", and it would still be true.
For example, in CVD detection the cardiologist needs to diagnose the patient properly, and if the patient is not happy with the diagnosis he can get a second opinion from another cardiologist. But the number of cardiologists is very limited, even more so than radiologists.
For most countries in the world, there are only several hundred to several thousand registered cardiologists per country, making the cardiologist-to-population ratio about 1:100,000.
People expect cardiologists to go through their ECG readings, but reading an ECG is very cumbersome. Let's say you have 5 minutes of ECG signal, the minimum requirement for AFib detection per the guideline. The standard ECG is 12-lead, resulting in 12 x 5 x 60 = 3,600 beats even for the minimum 5-minute duration (assuming 60 beats per minute). Then of course we have Holter ECG with typical 24-hour readings that increase the duration considerably, which is why almost all Holter reading is now automated. But current automated ECG detection has very low accuracy, because the accuracy of the detection methods (statistics/AI/ML) is bounded by the beat detection algorithm, for example the venerable Pan-Tompkins for the fiducial time-domain approach [1].
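For a sense of what that Pan-Tompkins bound looks like, here is a minimal sketch of the classic pipeline (bandpass, derivative, squaring, moving-window integration); the thresholding is deliberately simplified, whereas real implementations use adaptive thresholds and search-back logic.

```python
import numpy as np
from scipy.signal import butter, filtfilt

def pan_tompkins_qrs(ecg: np.ndarray, fs: int = 360) -> list:
    """Return sample indices of detected QRS complexes (simplified)."""
    # 1. Bandpass (~5-15 Hz) to isolate QRS energy
    b, a = butter(2, [5 / (fs / 2), 15 / (fs / 2)], btype="band")
    filtered = filtfilt(b, a, ecg)
    # 2. Differentiate to emphasize steep QRS slopes
    diff = np.diff(filtered)
    # 3. Square to rectify and amplify large slopes
    squared = diff ** 2
    # 4. Moving-window integration (~150 ms window)
    win = int(0.150 * fs)
    integrated = np.convolve(squared, np.ones(win) / win, mode="same")
    # 5. Fixed threshold + 200 ms refractory period (simplified)
    threshold = 0.5 * integrated.max()
    peaks, last = [], -fs
    for i in range(1, len(integrated) - 1):
        if (integrated[i] > threshold
                and integrated[i] >= integrated[i - 1]
                and integrated[i] >= integrated[i + 1]
                and i - last > 0.200 * fs):
            peaks.append(i)
            last = i
    return peaks
```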
Cardiologists would rather spend their time on more interesting activities like teaching future cardiologists, performing expensive procedures like ICD or pacemaker implantation, or taking their once-in-a-blue-moon holidays, instead of reading monotonous patients' ECGs.
I think this is why ECG reading automation with AI/ML is necessary to complement cardiologists, but the trick is to push the sensitivity component of accuracy very high, preferably toward 100%, so that missed potential patients are minimized for the expert-and-cardiologist-in-the-loop exercise.
I used to think that too. AI can already do better at screening mammography than a radiologist with a lower miss rate. Given that, insurance rates to cover AI should be even lower than for a radiologist, lawsuits will happen, but with a smaller number of missed cases the number should go down.
Paul Kedrosky had an interesting analogy when the automobile entered the scene. Teamsters (the men who drove teams of horses) benefited from rising salaries, even as new people declined to enter the "dead end" profession. We may well be seeing a similar phenomenon with Radiologists.
Finally, I'd like to point out that rising salaries mean there are greater incentives to find alternative solutions to this rising cost. Given the erratic political situation, I will not be surprised to see a relatively sudden transition to AI interpretation for at least a minority of cases.
But this indicates a lack of incentives to reduce healthcare costs through optimisation. If AI can do something well enough, and AI + humans surpass humans, leading to cost reductions and increased throughput, this should be reflected in the workflows.
I feel that human processes have inertia and that, for lack of a better word, gatekeepers feel new, novel approaches should be adopted slowly, which is why we are not seeing the impact yet. Once a country with the right incentive structure (e.g. China) can show that it can outperform and help improve the overall experience, I am sure things will change.
While 10 years of progress is a lot in ML/AI, in more traditional fields it is probably a blip against institutional inertia, which changes generation by generation. All that is needed is an external actor to take the risk and show a step-change improvement. Having experienced how healthcare works in the US, I feel people are simply scared to take on bold challenges.
Three things explain this. First, while models beat humans on benchmarks, the standardized tests designed to measure AI performance, they struggle to replicate this performance in hospital conditions. Most tools can only diagnose abnormalities that are common in training data, and models often don't work as well outside of their test conditions. Second, attempts to give models more tasks have run into legal hurdles: regulators and medical insurers so far are reluctant to approve or cover fully autonomous radiology models. Third, even when they do diagnose accurately, models replace only a small share of a radiologist's job. Human radiologists spend a minority of their time on diagnostics and the majority on other activities, like talking to patients and fellow clinicians.
Remember, the AI doesn’t create anything, so you add risk potentially to the patient outcome and perhaps make advancement more difficult.
My late wife had to have a stent placed in a vein in her brain to relieve cranial pressure. We had to travel to New York for an interventional radiologist and team to fish a 7-inch stent and balloon from her thigh up.
At the time, we had to travel to NYC, and the doctor was one of a half dozen who could do the procedure in the US. Who's going to teach future physicians the skills needed to develop the procedure?
For stuff like this, I feel like AI is potentially going to erase certain human knowledge.
The assumption is that more productive AI + humans leads to cost reductions.
But if everyone involved has a profit motive, you end up cutting at those cost reductions. "We'll save you 100 bucks, so give us 50", done at the AI model level, the AI model repackager, the software suite that the hospital is using, the system integrators that manage the software suite installation for the hospital, the reseller of the integrator's services through some consultancy firm, etc etc.
There are so many layers involved, and each layer is so used to trying to take a slice, and we're talking about a good level of individualization in places that aren't fully public a la NHS, that the "vultures" (so to speak) are all there ready to take their cut.
Maybe anathema to say on this site, but de-agglomeration really seems to have killed just trying to make things better for the love of the game.
Speaking from real-world experience as a patient who has had a lot go wrong over the last decade: the problem isn't lack of automation, it's structural issues affecting cost.
Just as one example, a chest CT would've cost $450 if done cash. It cost an insurer over $1,200 done via insurance, and that was after multiple appeals and reviews involving time from people at the insurance company and the provider's office, including the doctor himself. The low-hanging fruit in American healthcare costs is stuff like that.
Risks in traditional medicine are standardized by standardized training and credentialing. We haven't established ways to evaluate the risks of transferring diagnostic responsibility to AIs.
> All that is needed is an external actor to take the risk and show a step change improvement
Who's going to benefit? Doctors might prioritize the security of their livelihood over access to care. Capital will certainly prioritize the bottom line over life and death[0].
The cynical take is that for the time being, doctors will hold back progress, until capital finds a way to pay them off. Then capital will control AI and control diagnosis, letting them decide who is sick and what kind of care they need.
The optimistic take is that doctors maintain control but embrace AI and use it to increase the standard of care, but like you point out, the pace of that might be generational instead of keeping pace with technological progress.
Part of the challenge is that machines are significantly different. The radiologist’s statement that an object measured from two different machines is the same and has not changed in size is in large part judgement. Building a model which can replicate this judgement likely involves building a model which can solve all common computer vision tasks, has the full medical knowledge of an expert radiologist, and has been painstakingly calibrated against thousands of real radiologists in hospital conditions.
> If AI can do something well enough , and AI + humans surpass humans leading to costs reductions/ increased throughput this should be reflected in the workflows.
But it doesn't lead to increased throughput because there needs to be human validation when people's lives are on the line.
Planes fly themselves these days; it doesn't increase the "throughput" or eliminate the need for a qualified pilot (and even a copilot!).
The article points out that the AI + humans approach gives poorer results. Humans end up deferring to or just accepting the AI output without double checking. So corner cases, and situations where the AI doesn't do well just end up going through the system.
Or maybe prices are justified less by the artifacts produced than by the number of human souls bothered. Robotic medical diagnosis could save costs, but it could suppress customers' appetite too, in which case, like you said, commercial healthcare providers would not be incentivized to offer it.
I think the one thing we will find out with the AI/Chatbot/LLM boom is: Most economic activity is already reasonably close to a local optimum. Either you find a way to change the whole process (and thereby eliminate steps completely) or you won't gain much.
That's true for AI-slop-in-the-media (most of the internet was already lowest effort garbage, which just got that tiny bit cheaper) and probably also in medicine (a slight increase in false negatives will be much, much more expensive than speeding up doctors by 50% for image interpretation). Once you get to the point where some other doctor is willing (and able) to take on the responsibility of that radiologist, then you can eliminate that kind of doctor (but still not her work. Just the additional human-human communication)
I mean the company providing the AI is free to assume malpractice insurance. If that happens then there is definitely a chance.
If statistically their error rate is better or around what a human does then their insurance is a factor of how many radiologists they intend to replace.
The article mentions a system for diabetic retinopathy diagnosis that is certified and has liability coverage. It sounds like it's the only one where that occurs. For everything else, malpractice insurance explicitly excludes any AI assisted diagnosis.
But the equipment is operated by a person, and the diagnostic report has to be signed off by a person, who has a malpractice insurance policy for personal injury attorneys to go after.
The system is designed in a nanny-state fashion: there's no way to release practitioners from liability in exchange for less expensive treatment. I doubt this will change until healthcare pricing hits an extremely expensive breaking point.
Did the need rise through the use of silicon X-ray detectors, which improved the handling of images and reduced the time needed to get imaging done, making it faster, cheaper, and less cumbersome and thereby increasing the number of requests for X-ray imaging?
For real though, how close are we to a product that takes an order for an ED or inpatient CT A/P, protocols it, then reads the images, reads the chart, and spits out a dictated report without any human intervention that ends up usable as-is even 90% of the time?
So you're telling me the reason an extremely expensive yet totally redundant cost in the healthcare infrastructure will remain in place is because of regulatory capture?
However, it is also because, in matters of life or death, as a diagnosis from a radiologist can be, we often seek a second opinion, perhaps even a third.
But we don't ask a second opinion to an "algorithm", we want a person, in front of us, telling us what is going on.
AI is and will be used in the foreseeable future as a tool by radiologists, but radiologists, at least for some more years, will keep their jobs.
Maybe, but there is already that risk of some influence from other doctors, patients, nurses and general circumstances.
When an X-Ray is ordered, there is usually a suspected diagnosis in the order like "suspect sprain, pls exclude fracture", "suspect lung cancer". Patients will complain about symptoms or give the impression of a certain illness. Things like that already place a bias on the evaluation a radiologist does, but they are trained to look past that and be objective. No idea how often they succeed.
4 years undergrad - major and minor not important, met the pre-med requirements
2 year grad school (got a master's degree, not required, but I was having fun)
4 years medical school
5 years radiology residency
I am actively researching this friction and others like it. I would love it if you happened to have recommendations for literature that 3rd parties can use to corroborate your experience (I’ve found some, but this is harder to uncover than I expected as I’m not in the field)
Add to that that the demand for imaging is not fixed. Even if somehow imaging became a lot cheaper to do with AI, then likely we would just get more imaging done instead of having fewer radiologists.
When Tesla demoed (via video) self-driving in 2016 with the claim "The person in the driver's seat is only there for legal reasons. He is not doing anything. The car is driving itself", and then unveiled the Semi in 2017, I tweeted out and honestly thought that the trucking industry was changed forever and it didn't make sense to be starting out in trucking. It's almost the end of 2025 and either nothing came of it or just a small part of it panned out.
I think we all have become hyper-optimistic on technology. We want this tech to work and we want it to change the world in some fundamental way, but either things are moving very slowly or not at all.
Look at Waymo, not Robotaxi. Waymo is essentially the self driving vision I had as a kid, and ridership is growing exponentially as they expand. It's also very safe if you believe their statistics[0]. I think there's a saying about overestimating stuff in the short term and underestimating stuff in the long term that seems to apply here, though the radiologist narrative was definitely wrong.
Even though the gulf between Waymo and the next runner up is huge, it too isn't quite ready for primetime IMO. Waymos still suffer from erratic behavior at pickup/dropoff, around pedestrians, badly marked roads and generally jam on the brakes at the first sign of any ambiguity. As much as I appreciate the safety-first approach (table stakes really, they'd get their license pulled if they ever caused a fatality) I am frequently frustrated as both a cyclist and driver whenever I have to share a lane with a Waymo. The equivalent of a Waymo radiologist would be a model that has a high false-positive and infinitesimal false-negative rate which would act as a first line of screening and reduce the burden on humans.
I agree with both comments here. I wonder what the plausibility of fully autonomous trucking is in the next 10-30 years...
Is there any saying that exists about overestimating stuff in the near term and long term but underestimating stuff in the midterm? Ie flying car dreams in the 50s etc.
Waymo is very impressive, but it also demonstrates the limitations of these systems. Waymo vehicles are still getting caught performing unsafe driving maneuvers, they get stuck in alleys in numbers, and responders have trouble getting them to acknowledge restricted areas. I am very supportive of this technology, but also highly skeptical as long as these vehicles are directly causing problems for me personally. Driving is more than a technical challenge; it involves social communication skills that automated vehicles do not yet have.
I've seen a similar quote attributed to Bill Gates:
"We always overestimate the change that will occur in the next two years and underestimate the change that will occur in the next ten."
I think about this quote a lot these days, especially while reading Hacker News. On one hand, AI doesn't seem to be having the productivity and economic impacts that were predicted, but on the other, LLMs are getting gold medals at the Math Olympiads. It's like the ground is shifting beneath our feet, but it's still too slow to be perceptible.
Waymo still has the ability to remotely deal with locations the AI has problems with; I'd love to know what percentage of trips need that now.
Having that escape together with only doing tested areas makes their job a LOT easier.
(Not that it's bad - it's a great thing and I wish for it here!)
It's limited to a few specific markets though. My bet is they aren't going to be able to roll it out widely easily. Probably need to do years of tests in each location to figure out the nuances of the places.
Honestly, once an island city-state (like Singapore) or some other small nation adopts self-driving only within its limits and shows that it is much easier when all cars are self-driving, I think the opposition to the change will slowly reduce.
Rain, snow, etc. are still challenges, but it needs a bold bet in a place that wants to show how futuristic it is. The components are in place (Waymo cars); what is needed is a labor cost high enough to justify the adoption.
For me, I have been riding in Waymos for the last year and have been very pleased with the results. I think we WANT this technology to move faster, but some of the challenges at the edges take a lot of time and resources to solve; they are not fundamentally unsolvable, though.
It is the usual complexity rule of software: solving 80% of the problem is usually pretty easy and only takes about 50% of the estimated effort; it is the remaining 20% that takes up the remaining 90% of estimated effort (thus the usual schedule overruns).
The interesting thing is that there are problems for which this rule applies recursively. Of the remaining 20%, most of it is easier than the remaining 20% of what is left.
Most software ships without dealing with that remaining 20%, and largely that is OK; it is not OK for safety critical systems though.
Most work is actually in oversight and getting the train to run when parts fail. When running millions of machines 24/7 there is always a failing part. Also, understanding gesticulating humans and running wildlife is not yet (fully) automatable.
Where we're too optimistic is with technology that demos impressively, but which has 10,000 potentially-fatal edge cases. Self-driving cars and radiology interpretation are both in this category.
When there are relatively few dangerous edge cases, technology often works better than we expect. TikTok's recommendation algorithm and Shazam are in this category.
This story (the demand for Radiologists) really shows a very important thing about AI: It's great when it has training data, and bad at weird edge cases.
Gee, seems like about the worst fucking thing in the world for diagnostics if you ask me, but what do I know, my degree is in sandwiches and pudding.
A lot of people in the industry really underestimated the difficulty of getting self-driving cars to be effective. It is relatively easy to put a staged demo together, but getting a trustworthy product out there is really hard.
We've seen this with all of the players, with many dropping out due to the challenges.
Having said that, there are several that are fielded right now, with varying degrees of autonomy. Obviously Waymo has been operating in small-ish geofences for a while, but they have readily managed >200% annual growth. Zoox just started offering fully autonomous rides in Vegas.
And even Tesla is offering a service, albeit with safety monitors/drivers. Tesla Semi isn't autonomous at all, but appears ready to go into volume production next year too.
This is such a stereotypical SF / US based perspective.
Easy to forget the rest of the world does not and never has ticked this way.
Don't get me wrong, optimism and thinking of the future are great qualities we direly need in this world on the one hand.
On the other, you can't outsmart physics.
We've conquered the purely digital realm in the past 20 years.
We're already in the early years of the next phase were the digital will become ever more multi-modal and make more inroads into the physical world.
So many people bring an old mindset to a new context, where the margin of error, the cost of mistakes, or optimizing the last 20% of a process is just so vastly different than a bit of HTML, JS, and backend infra.
> It's almost the end of 2025 and either nothing came of it or just a small part of it panned out.
The truck part seems closer than the car part.
There are several driverless semis running between Dallas, Houston, and San Antonio every day. Fully driverless. No human in the cab at all.
Though, trucking is an easier to solve problem since the routes are known, the roads are wide, and in the event of a closure, someone can navigate the detour remotely.
Things happen slowly, then all at once. Many people think ChatGPT appeared out of nowhere a couple of years ago. In reality it was steadily improving for 8 years. Before that came Word2Vec and related embedding work. Before that, Yoshua Bengio and colleagues proposed the first neural probabilistic language model, introducing distributed word representations (precursors to embeddings). Before that, statistical NLP took hold, with n-gram models, hidden Markov models, and later phrase-based machine translation. Before that, work on natural language processing (NLP) began with symbolic AI and rule-based systems (e.g., ELIZA, 1966).
These are all stepping stones, and eventually the technology is mature enough to productise. You would be shocked by how good Tesla FSD is right now. It can easily take you on a cross-country trip with almost zero human intervention.
I realized long ago that full unattended self-driving requires AGI. I think Elon finally figured that out. So now LLMs are going to evolve into AGI any moment. Um, no. Tesla (and others) have effectively been working on AGI for 10 years with no luck.
For trucking I think self driving can be, in the short term, an opportunity for owner-operators. An owner-operator of a conventional truck can only drive one truck at a time, but you could have multiple self driving trucks in a convoy led by a truck manned by the owner-operator. And there might be an even greater opportunity for this in Europe thanks to the low capacity of European freight rail compared to North America.
I used to think this sort of thing too. Then a few years ago I worked with a SWE who had experience in the trucking industry. His take was that most trucking companies are too small scale to benefit from this. The median trucking operation is basically run by the owner's wife in a notebook or spreadsheet- and so their ability to get the benefits of leader/follower mileage like that just doesn't exist. He thought that maybe the very largest operators- Walmart and Amazon- could benefit from this, but he thought that no one else could.
This was why he went into industrial robotics instead, where it was clear that the finances could work out today.
"I think we all have become hyper-optimistic on technology. We want this tech to work and we want it to change the world in some fundamental way, but either things are moving very slowly or not at all."
It's also like nobody learns from previous hype cycles: short-term overly optimistic predictions, followed by disillusionment, and then long-term benefits that deliver on some of the early promises.
For some reason, enthusiasts always think this time is different.
Waymo has worked out. I’ve taken one so many times now I don’t even think about it. If Waymo can pull this off in NYC I believe it will absolutely be capable of long distance trucking not that far in the future.
Trucks are orders of magnitude more dangerous. I wouldn’t be surprised if Waymo is decades away from being able to operate a long haul truck on the open interstate.
Meanwhile, it’s my feeling that technology is moving insanely fast but people are just impatient. You move the bar and the expectations move with it. I think part of the problem is that the market rewards execs who set expectations beyond reality. If the market was better at rewarding outcomes not promises, you’d see more reasonable product pitches.
How have expectations moved on self driving cars? Yes, we're finally getting there, but adoption is still tiny relative to the population and the cars that work best (Waymo) are still humongously expensive + not available for consumer purchase.
The universe has a way with being disappointing. This isn't to say that life is terrible and we should have no optimism. Rather, that things generally work out for the better, but usually not in the way we'd prefer them to.
It's not about optimism. It is well established in the industry that Tesla's hardware-stack gives them 98% accuracy at the very most. But those voices are drowned by the marketing bravado.
In the case of Musk it has worked out. His lies have earned him a fortune and now he asks Tesla to pay him out with a casual 1 trillion paycheck.
The best story I heard about machine learning and radiology was when folks were racing to try to detect COVID in lung X-rays.
As I recall, one group had fairly good success, but eventually someone figured out that their data set had images from a low-COVID hospital and a high-COVID hospital, and the lettering on the images used different fonts. The ML model was detecting the font, not the COVID.
If you're not at a university, try searching for "AI for radiographic COVID-19 detection selects shortcuts over signal" and you'll probably be able to find an open-access copy.
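For intuition, a toy sketch of that shortcut effect on synthetic data: a spurious "font" feature that perfectly tracks the label in training but not at test time (illustrative only; the feature names are made up).

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)
n = 2000
signal = rng.normal(size=(n, 5))                    # weak "pathology" signal
y = (signal[:, 0] + 0.5 * rng.normal(size=n) > 0).astype(int)
font = y.copy()                                     # hospital font == label
X_train = np.column_stack([signal, font])

clf = LogisticRegression(max_iter=1000).fit(X_train, y)

# At test time the font no longer tracks the disease
X_test = np.column_stack([signal, rng.integers(0, 2, size=n)])
print(clf.score(X_train, y))   # near-perfect, via the shortcut
print(clf.score(X_test, y))    # drops sharply once the shortcut breaks
```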
I remember a claim that someone was trying to use an ML model to detect COVID by analyzing the sound of the patient coughing.
I couldn't for the life of me understand how this was supposed to work. If the coughing of COVID patients (as opposed to patients with other respiratory illnesses) actually sounds meaningfully different in a statistically meaningful way (and why did they suppose that it would? Phlegm is phlegm, surely), surely a human listener would have been able to figure it out easily.
That doesn't really follow. NN models have been able to pick up on noisier and more subtle patterns than humans for a long time, so this type of research is definitely worth a shot in my opinion. The pattern might also not be noticeable to a human at all, e.g. "this linear combination of frequency values in the Fourier space exceeds a specific threshold".
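That hypothetical Fourier-space feature is easy to express as code; a sketch where the weights and threshold are placeholders a model would have to learn, not anything validated:

```python
import numpy as np

def spectral_flag(audio: np.ndarray, weights: np.ndarray,
                  threshold: float) -> bool:
    """Flag a recording when a linear combination of its normalized
    Fourier magnitudes exceeds a threshold."""
    spectrum = np.abs(np.fft.rfft(audio))
    spectrum /= spectrum.sum() + 1e-12    # normalize total energy
    return float(weights @ spectrum) > threshold

# Hypothetical usage on one second of 16 kHz audio
fs = 16000
audio = np.random.randn(fs)               # stand-in for a cough recording
weights = np.random.randn(fs // 2 + 1)    # stand-in for learned weights
print(spectral_flag(audio, weights, threshold=0.0))
```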
Anecdotes like this are informative as far as they go, but they don't say anything at all about the technique itself. Like your story about the fonts used for labeling, essentially all of the drawbacks cited by the article come down to inadequate or inappropriate training methods and data. Fix that, which will not be hard from a purely-technical standpoint, and you will indeed be able to replace radiologists.
Sorry, but in the absence of general limiting principles that rule out such a scenario, that's how it's going to shake out. Visual models are too good at exactly this type of work.
The issue is that in medicine, much like automobiles, unexpected failure modes may be catastrophic to individual people. “Fixing” failure modes like the above comment is not difficult from a technical standpoint, that’s true, but you can only fix it once you’ve identified it, and at that point you may have a dead person/people. That’s why AI in medicine and self driving cars are so unlike AI for programming or writing and move comparatively at a snails pace.
Even if it weren't for the font, it might be anomalies in the image acquisition or even in the encoder software. You can never really be sure what exactly the ML is detecting.
> Three things explain this. First,... Second, attempts to give models more tasks have run into legal hurdles: regulators and medical insurers so far are reluctant to approve or cover fully autonomous radiology models. Third, even when they do diagnose accurately, models replace only a small share of a radiologist’s job. Human radiologists spend a minority of their time on diagnostics and the majority on other activities, like talking to patients and fellow clinicians.
Everything else besides the above in TFA is extraneous. Machine learning models could have absolute perfect performance at zero cost, and the above would make it so that radiologists are not going to be "replaced" by ML models anytime soon.
I only came to this thread to say that this is completely untrue:
>Human radiologists spend a minority of their time on diagnostics and the majority on other activities, like talking to patients and fellow clinicians.
The vast majority of radiologists do nothing other than: come in (or increasingly, stay at home), sit down at a computer, consume a series of medical images while dictating their findings, and then go home.
If there existed some oracle AI that can always accurately diagnose findings from medical images, this job literally doesn't need to exist. It's the equivalent of a person staring at CCTV footage to keep count of how many people are in a room.
Agreed, I'm not sure where the OP from TFA is working but around here, radiologists have all been bought out and rolled into Radiology As A Service organizations. They work from home or at an office, never at a clinic, and have zero interactions with the patient. They perform diagnosis on whatever modality is presented and electronically file their work into their EMR. I work with a couple such orgs on remote access and am familiar with others, it might just be a selection bias on my side but TFA does not reflect my first-hand experience in this area.
>consume a series of medical images while dictating their findings, and then go home.
In the same fashion, a construction worker just shows up, "performs a series of construction tasks", then goes home. We just need to make a machine that performs "construction tasks" and we can build cities, railways, and road networks for nothing but the cost of the materials!
Perhaps this minor degree of oversimplification is why the demise of radiologists has been so frequently predicted?
If they had absolute perfect performance at zero cost, you would not need a radiologist.
The current "workflow" is primary care physician (or specialist) -> radiology tech that actually does the measurement thing -> radiologist for interpretation/diagnosis -> primary care physician (or specialist) for treatment.
If you have perfect diagnosis, it could be primary care physician (or specialist) -> radiology tech -> ML model for interpretation -> primary care physician (or specialist).
If we're talking utopian visions, we can do better than dreaming of transforming unstructured data into actionable business insights. Let's talk about what is meaningfully possible: Who assumes legal liability? The ML vendor?
PCPs don't have the training and aren't paid enough for that exposure.
To understand why, you would really need to take a good read of the average PCP's malpractice policy.
The policy for a specialist would be even more strict.
You would need to change insurance policies before your workflow was even possible from a liability perspective.
Basically, the insurer wants, "a throat to choke", so to speak. Handing up a model to them isn't going to cut it anymore than handing up Hitachi's awesome new whiz-bang proton therapy machine would. They want their pound of flesh.
>Human radiologists spend a minority of their time on diagnostics and the majority on other activities, like talking to patients and fellow clinicians.
How often do they talk to patients? Every time I have ever had an x-ray, I have never talked to a radiologist. Fellow clinicians? Train the xray tech up a bit more.
If the moat is 'talking to people', that is a moat that doesn't need an MD, or at least not a fully specialized MD. ML could kill the radiologist MD; 'radiologist' could become the job title of a nurse or X-ray tech specialized in talking to people about the output.
That's fine. But then the xray tech becomes the radiologist, and that becomes the point in the workflow that the insurer digs out the malpractice premiums.
In essence, your xray techs would become remarkably expensive. Someone is talking to the clinicians about the results. That person, whatever you call them, is going to be paying the premiums.
As a patient I don't think I've ever even talked to any radiologist that actually analyzed my imaging. Most of the times my family or I have had imaging done the imaging is handled by a tech who just knows how to operate the machines while the actual diagnostic work gets farmed out to remote radiologists who type up an analysis. I don't even think the other doctors I actually see ever directly talk to those radiologists.
It really depends on the specifics of the clinical situation; for a lot of outpatient radiology scenarios the patient and radiologist don't directly interact, but things can be different in an inpatient setting and then of course there are surgical and interventional radiology scenarios.
People love to bring this up, and it was a silly thing to say -- particularly since he didn't seem to understand that radiologists only spend a small part of their time reading scans.
But he said it in the context of a Q&A session that happened to be recorded. Unless you're a skilled politician who can give answers without actually saying anything, you're going to say silly things once in a while in unscripted settings.
Besides that, I'd hardly call Geoffrey Hinton an AI evangelist. He's more on the AI doomer side of the fence.
No, this was not an off-hand remark. He made a whole story comparing the profession to the coyote from Road Runner: "they've already run off the cliff but don't even realize it". It was callous, and showed a total ignorance of the fact that medicine might be more than pixel classification.
Radiologists, here, mostly sit at home, read scans, and dictate reports. They rarely talk to other doctors, and talking to a patient is beyond them. They are among the specialists with the best salaries.
With interventional radiologists and radio-oncologists it's different, but we're talking about radiologists here...
I would argue an "AI doomer" is a negatively charged type of evangelist. What the doomer and the positive evangelist have in common is a massive overestimation of (current-gen) AI's capabilities.
It's the power of confidence and credentials in action. Which is why you should, when possible, look at the underlying logic and not just the conclusion derived from it. As this catches a lot of fluff that would otherwise be Trojan-Horsed into your worldview.
Let's assume the last person who entered radiologist training started then, and that the training lasts 5 years. At the end of their training the year is 2021 and they are around 31. That means they will practice medicine for circa 30 years, which would put the calendar at around 2051. I'd wager we'd get there within 25 years, so I think his opinion still has a good chance of being correct.
People can't tell what they'll eat next Sunday, but they'll predict AGI and the singularity in 25 years. It's comfy because 25 years seems like a lot of time; it isn't.
Let's say we do manage to develop a model that can replace radiologists in 20 years, but we stop training them today. What happens 15 years from now when we don't have nearly enough radiologists?
Look, if we were okay with tolerating less regulation in medicine and dismantled the AMA, Hinton would have proven to be right by now and everyone would have been happier.
Definitely an aggressive timeline but it seems like the biggest barrier to AI taking over radiology will be legal. Spending years training for a job which only continues to exist because of government fiat, which could change at any time, seems like a risky choice.
I sent a lady today to a radiologist for a core biopsy of a likely malignancy.
And a man to a radiologist for a lumbar perineural injection.
And a person to a radiologist for a subacromial bursa injection.
And a month ago I sent a woman to a radiologist to have adenomyosis embolised.
Also talked to a patient today who I will probably send to a radiologist to have a postnephrectomy urinary leak embolised.
Is an LLM going to do that ?
There is another issue.
If AI commoditises a skill, competent people with options will just shift to another skill while offloading the commoditised skill to someone else.
Due to automated ECG interpretation built into every machine, reimbursement has plummeted. So I have let my ECG interpretation skills rust while focusing on my neurology and movement disorder skills. They are fun... I also did part of a master's in AI decades ago (Prolog, Lisp, good times; machine vision, good times...).
So now if someone needs an ECG, I am probably going to send them to a cardiologist, who will do an ECG, Holter, Echo, Stress Echo, etc. Income for the nice friendly cardiologist; extra cost and time for the patient and the health system.
I can imagine, like food deserts, entire AI deserts in medicine that nobody wants to work in. A bit like geriatrics, rural medicine, and psychiatry these days.
The goal of the healthcare system isn't to make sure that the doctors get paid big bucks. It is, allegedly, to heal people.
Automating as much of that as possible and making healthcare more accessible should be pursued. Just like automated ECG interpretation made basic ECG more accessible.
> Due to automated ECG interpretation built into every machine
Oof - I hope the tools you're using as a physician are better than in the field as a paramedic.
Every Lifepak (or Zoll) I have met interprets anything but the most textbook sinus rhythm in pristine conditions as "ABNORMAL ECG - EVALUATION NEEDED".
As a doctor and full stack engineer, I would never go into radiology or seek further training in it. (obviously)
AI is going to augment radiologists first, and eventually, it will start to replace them. And existing radiologists will transition into stuff like interventional radiology or whatever new areas will come into the picture in the future.
As a radiologist and full stack engineer, I’m not particularly worried about the profession going away. Changing, yes, but not more so than other medical or non-medical careers.
>AI is going to augment radiologists first, and eventually, it will start to replace them.
I am a medical school drop-out — in my limited capacity, I concur, Doctor.
My dentist's AI has already designed a new mouth for me, implants and all ("I'm only doing 1% of the finish work: whatever the patient says doesn't feel quite right yet," says my DMD). He then CNCs in-house on his $xxx,xxx 4-axis.
IMHO: Many classes of physicians are going to be reduced to nothing more than malpractice-insurance-paying business owners, MD/DO. The liability-holders, good doctor.
In alignment with last week's H-1B discussion, it's interesting to note that ~30% of US physician resident "slots" (<$60k USD salary) are filled by these foreign visa holders (so: +$100k cost per applicant, amortized over a few years of training, each).
There are a number of you (engineer + doctor), though you're quite rare. I have a few friends who are engineers as well as doctors. You're like unicorns in your field, the Neo and Morpheus of the medical industry: you can see and understand things that most people can't in your typical field (medicine). Kudos to you!
This was actually my dream career path when I was younger. Unfortunately there's just no way I would have afforded the time and resources to pursue both, and I'd never heard of Biomedical Engineering where I grew up.
As a doctor and full stack engineer you’d have a perfect future ahead of you in radiology - the profession will not go away, but will need doctors who can bridge the full medical-tech range
What’s your take on pharmacists? To my naive eyes, that seems like a certainty for replacement. What extra value does human judgement bring to their work?
My wife is a clinical pharmacist at a hospital. I am a SWE working on AI/ML related stuff. We've talked about this a lot. She thinks that the current generation of software is not a replacement for what she does now, and finds the alerts they provide mostly annoying. The last time this came up, she gave me two examples:
A) The night before, a woman in her 40's came in to the ER suffering a major psychological breakdown of some kind (she was vague to protect patient privacy). The Dr prescribed a major sedative, and the software alerted that they didn't have a negative pregnancy test because this drug is not approved for pregnant women and so should not be given. However, in my wife's clinical judgement- honed by years of training, reading papers, going to conferences, actual work experience and just talking to colleagues- the risk to a (potential) fetus from the drug was less than the risk to a (potential) fetus from mom going through an untreated mental health episode and so she approved the drug and overrode the alert.
B) A prescriber had earlier in that week written a script for Tylenol to be administered "PR" (per rectum) rather than "PRN" (pro re nata, i.e., as needed). PR Tylenol is a perfectly valid thing that is sometimes the correct choice, and was stocked by the hospital for that reason. But my wife recognized that this wasn't one of the cases where that was necessary, and called the nurse to call the prescriber to get it changed so the nurse wouldn't have to give the patient a Tylenol suppository. This time there were no alerts, no flags from the software; it was just her looking at it and saying "in my clinical judgement, this isn't the right administration for this situation, and will make things worse".
So someone- with expensively trained (and probably licensed) judgement- will still need to look over the results of this AI pharmacist and have the power to override its decisions. And that means that they will need to have enough time per case to build a mental model of the situation in their brain, figure out what is happening, and override if necessary. And it needs to be someone different from the person filling out the Rx, for Swiss cheese model of safety reasons.
Congratulations, we've just described a pharmacist.
I am a pharmacist who dabbles in web dev. We could easily be replaced, because all of our work on checking pill images and drug interactions is actually already automated, or the software already tells us everything.
If every doctor agreed to electronically prescribe (instead of calling it in, or writing it down) using one single standard / platform / vendor, and all pharmacy software also used the same platform / standard, then our jobs are definitely redundant.
I worked at a hospital where basically doctors and pharmacists and nurses all use the same software and most of the time we click approve approve approve without data entry.
Of course we also make IVs and compounds by hand, but that's a small part of our job.
I'm not a doc or a pharmacist (though I am in med school), and I'm sure there are areas where AI could do some of a pharmacist's job. But on the outpatient side they do things like answering questions for patients and helping them interpret instructions, which I don't think we want AI to do; or at least I really doubt an AI's ability to gauge how well someone is understanding instructions and adjust its explanation based on that assessment. On the inpatient side, I have seen pharmacists help physicians grapple with the pros and cons of certain treatments and make judgement calls about dosing that I think it would be hard to trust an AI with, because there is no "right" answer really. It's about balancing trade-offs.
IDK, these are just limitations - people that really believe in AI will tell you there is basically nothing it can't do... eventually. I guess it's just a matter of how long you want to wait for eventually to come.
I work on a kiosk (MedifriendRx) which, to some degree "replaces" pharmacists and pharmacy staff.
The kiosk is placed inside of a clinic/hospital setting, and rather than driving to the pharmacy, you pick up your medications at the kiosk.
Pharmacists are currently still very involved in the process, but it's not necessarily for any technical reason. For example, new prescriptions are (by most states' boards of pharmacies) required to have a consultation between a pharmacist and a patient. So the kiosk has to facilitate a video call with a pharmacist using our portal. Mind you, this means the pharmacist could work from home, or could queue up tons of consultations back to back in a way that would allow one pharmacist to do the work of 5-10 working at a pharmacy, but they're still required in the mix.
Another thing we need to do for regulatory purposes: when we're indexing the medication in the kiosk, it has to capture images of the bottles as they're stocked. After the kiosk applies a patient label, we then have to take another round of images. Once this happens, both sets populate in the pharmacist portal, and a pharmacist is required to look at both sets of images and approve or reject the container. Again, they're able to do this all very quickly and remotely, but they're still required by law to do this.
TL;DR I make an automated dispensing kiosk that could "replace" pharmacists, but for the time being, they're legally required to be involved at multiple steps in the process. To what degree this is a transitory period while technology establishes a reputation for itself as reliable, and to what degree this is simply a persistent fixture of "cover your ass" that will continue indefinitely, I cannot say.
I could see that as more radiology AI tools become available to non-radiologist medical providers, they might choose to leverage the quick feedback those provide and not wait for a radiologist to weigh in, even if they could gain something from the radiologist. They could make a decision while the patient is still in the room with them.
Partially true, and the answer to that is runway -- it will be a very long time before all the other specialties are fully augmented. With respect to "non-surgical" you may be underestimating the number and variety of procedures performed by non-surgeons (e.g. Internal Medicine physicians) -- thyroid biopsy, bronchoscopy, endoscopic retrograde cholangiopancreatography, liquid nitrogen ablation of skin lesion, bone marrow aspiration, etc.
The other answer is that AI will not hold your hand in the ICU, or share with you how their mother felt when on the same chemo regimen that you are prescribing.
In May earlier this year, the New York Times had a similar article about AI not replacing radiologists:
https://archive.is/cw1Zt
It has similar insights, and good comments from doctors and from Hinton:
“It can augment, assist and quantify, but I am not in a place where I give up interpretive conclusions to the technology.”
“Five years from now, it will be malpractice not to use A.I.,” he said. “But it will be humans and A.I. working together.”
Dr. Hinton agrees. In retrospect, he believes he spoke too broadly in 2016, he said in an email. He didn’t make clear that he was speaking purely about image analysis, and was wrong on timing but not the direction, he added.
On a basic level, software exists to expedite repetitive human tasks. Diagnostic radiology is an extremely repetitive human task. When I read diagnostics, there’s a voice in the back of my head saying, “I should be writing code to automate this rather than dictating it myself.”
That's it?
I don't know. Doesn't sound like a very big obstacle to me. But I don't think AI will replace radiologists even if there were a law that said, "blah blah blah, automated reports, can't be sued, blah blah." I personally think the consulting work they do is really valuable and very difficult to automate; we would need to be in an AGI world before radiologists get replaced, which seems unlikely.
The bigger picture is that we are pretty much obligated to treat people medically, which is a good thing, so there is a lot more interest in automating healthcare than say, law, where spending isn't really compulsory.
A lot of things are one law amendment away from happening, and they aren't happening. This could well become another mask mandate, which, while being reasonable in itself, rubs people the wrong way just enough to become a sacred issue.
Radiologists will validate the results but either find themselves clicking "approve, approve, approve" all day, or disagree and find they were wrong (since our hypothesis is that the AI is better than a human). Eventually, this will be common knowledge in the field, hospitals will decide to save on costs and just skip the humans altogether, lobby, and get the law changed.
And it’s definitely not a 0.05 percent difference. AI will perform better by a long shot.
Two reasons for this.
1. The AI is trained on better data. If the radiologist makes a mistake, that mistake is identified later, and the training data can be flagged.
2. No human indeterminism. AI doesn't get stressed or tired. This alone, even without 1 above, will make AI beat humans.
Let's say only 1 were applied; that covers the consistent mistakes humans make. Consistent mistakes are eventually flagged and show up as a pattern in the training data, and the AI can learn that pattern even though humans themselves never notice it. Humans just know the radiologist's opinion was wrong because a different outcome happened; we don't even have to know why it was wrong, and many times we can't know. Just flagging the data is enough for the AI to ingest the pattern.
Inconsistent mistakes come under number 2. If humans make mistakes due to stress, the training data reflecting those mistakes will be minuscule in size and random, without pattern. The average majority case of the training data will smooth these issues out and the model will remain consistent. Right? A marker that follows a certain pattern shows up 60 times in the data but is marked incorrectly once because of human error: this gets smoothed out (see the toy sketch below).
Overall it will be a statistical anomaly that defies intuition, similar to how flying in planes is safer than driving. ML models in radiology, as in spam filtering, will beat humans.
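A toy numeric illustration of that smoothing claim (the counts are the hypothetical 60-to-1 split from the comment above):

```python
import numpy as np

# The same "marker pattern" appears 61 times in the training data:
# 60 correctly labeled positive, 1 mislabeled due to random human error.
labels = np.array([1] * 60 + [0] * 1)

# A model minimizing average loss over identical inputs converges toward
# the mean label, so a single random flip barely moves the prediction.
prediction = labels.mean()
print(prediction)         # ~0.984
print(round(prediction))  # 1 -> the stray mislabel is smoothed out
```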
I think we are under this delusion that all humans are better than ML, but this is simply not true. You can thank LLMs for spreading this wrong intuition.
You can take a 4K photo of anything and change one pixel to pure white, and a human wouldn't be able to find this pixel by looking at the picture with their eyes. A machine, on the other hand, would find it immediately and effortlessly.
Machine vision is literally superhuman. For example, military camo can easily fool human eyes, but a machine can see through it clear as day, because it can tell the difference between Black (hex #000000, RGB 0, 0, 0, CMYK 0, 0, 0, 100) and Jet Black (hex #343434, RGB 52, 52, 52, CMYK 0, 0, 0, 80).
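A rough sketch of the one-white-pixel claim (pure numpy; the frame size and pixel location are made up):

```python
import numpy as np

# A stand-in for a 4K photo: 2160 x 3840 RGB with random content.
# rng.integers(0, 255) draws values 0..254, so pure white stays unique.
rng = np.random.default_rng(0)
img = rng.integers(0, 255, size=(2160, 3840, 3), dtype=np.uint8)

# Flip one arbitrary pixel to pure white.
img[1234, 2345] = 255

# The machine finds it instantly: every pixel where all channels are 255.
hits = np.argwhere((img == 255).all(axis=-1))
print(hits)  # [[1234 2345]]
```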
> Can AI read diagnostic images better than a radiologist? Almost certainly the answer is (or will be) yes.
I'm sorry, but I disagree, and I think you are making a wild assumption here. I am up to date on the latest AI products in radiology, use several of them, and none of them are even in the ballpark on this. The vast majority are non-contributory.
It is my strong belief that there is an almost infinite variation in both human anatomy and pathology. Given this variation, I believe that in order for your above assumption to be correct, the development of "AGI" will need to happen.
When I interpret a study I am not just matching patterns of pixels on the screen with my memory. I am thinking, puzzling, gathering and synthesizing new information. Every day I see something I have never seen before, and maybe no one has ever seen before. Things that can't and don't exist in a training data set.
I'm on the back end of my career now and I am financially secure. I mention that because people will assume I'm a greedy and ignorant Luddite doctor trying to protect my way of life. On the contrary, if someone developed a good replacement for what I do, I would gladly lay down my microphone and move on.
But I don't think we are there yet, in fact I don't think we're even close.
I can easily imagine that humans are better at really digging deeply and reasoning carefully about anomalies that they notice.
I doubt they're nearly as good as computers at detecting subtle changes on screens where 99% of images have nothing worrisome and the priors are "nothing is suspicious".
I don't want to equate radiologists with TSA screeners, but the false negative rate for TSA screening of carryon bags is incredibly high. I think there's an analog here about the ability of humans to maintain sustained focus on tedious tasks.
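To put rough numbers on that base-rate intuition (the rates below are illustrative assumptions, not figures from the thread):

```python
# Screening scenario: only 1% of studies contain a finding, and the
# reader (human or model) has 95% sensitivity and 95% specificity.
prevalence = 0.01
sensitivity = 0.95
specificity = 0.95

true_pos = prevalence * sensitivity               # real findings, flagged
false_pos = (1 - prevalence) * (1 - specificity)  # healthy studies, flagged

ppv = true_pos / (true_pos + false_pos)
print(f"P(real finding | flagged) = {ppv:.1%}")  # ~16%: most flags are false alarms
```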
Seems like an oversimplification, but let's say it's just true. Wouldn't you rather spend your time on novel problems you haven't seen before? Some ML system identifies the easy/common ones it has high confidence in, leaving the interesting ones for you?
Think about how you learned anatomy. You probably looked at Netter drawings or Gray's long before you ever saw a CT or MRI. You probably knew the English word "laceration" before you saw a liver lac. You probably knew what a ground-glass bathroom window looked like before the term was used to describe lung findings.
LLMs/LVMs ingest a huge amount of training data, more than humans can appreciate, and learn connections between that data. I can ask these models to render an elephant in outer space with a hematoma on its snout in the style of a CT scan. Surely, there is no such image in the training set, yet the model knows what I want from the enormous number of associations in its network.
Also, the word "finite" has a very specific definition in mathematics. It's a natural human fallacy to equate very large with infinite. And the variation in images is finite. Given a 16-bit, 512 x 512 x 100 slice CT scan, you're looking at 2^16 * 26214400 possible images. Very large, but still finite.
Of course, the reality is way, way smaller. As a human, you can't even look at the entire grayscale spectrum. We just say, < -500 Hounsfield units (HU), that's air, -200 < fat < 0, bone/metal > 100, etc. A gifted radiologist can maybe distinguish 100 different tissue types based on the HU. So, instead of 2^16 pixel values, you have...100. That's 100 * 26214400 = 262,440,000 possible CT scans. That's a realistic upper-limit on how many different CT scans there could possibly be. So, let's pre-draft 260 million reports and just pick the one that fits best at inference time. The amount you'd have to change would be miniscule.
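Written out (the voxel count and the 100-level simplification come from the comment; the exponents are the corrected arithmetic):

```latex
\[
512 \times 512 \times 100 = 26{,}214{,}400 \ \text{voxels}
\]
\[
\#\{\text{raw scans}\} = \left(2^{16}\right)^{26{,}214{,}400},
\qquad
\#\{\text{100-level scans}\} = 100^{26{,}214{,}400}
\]
\[
\log_{10}\!\left(100^{26{,}214{,}400}\right) = 52{,}428{,}800,
\ \text{i.e., a 1 followed by roughly 52 million zeros.}
\]
```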
That being said, there are no radiologists available to hire at any price: https://x.com/ScottTruhlar/status/1951370887577706915
What was left out was that these "cutting edge" AI imaging models were old-school CNNs from the mid-2010s, running on local computers. It seems the idea of using transformers (the architecture behind LLMs) is only now being explored.
In that sense, we still do not know what a purpose-built "ChatGPT of radiology" would be capable of, but if we use the data point of comparing AI from 2015 to AI of 2025, the step up in ability is enormous.
And in AI tech, even "5 years ago" is a different era.
In the year 2025, we have massive multimodal reasoning LLMs that can cross-reference data from different images, text and more. If the kind of effort and expertise that went into the general-purpose GPT-5 went into a more specialized medical AI, where would its capabilities top out?
Do you have any typical examples of this you could try to explain to us laymen, so we get a feel for what this looks like? I feel like it's hard for laymen to imagine how you could be seeing new things outside a pattern every day (or week).
I am nowhere near as good as our worst radiologist (who is, frankly... not great). It's not even close.
AI will probably never take over. What we really need is AI working in tandem with radiologists and complementing their work, to help with their busy schedules (or the limited number of radiologists).
The OP title could also be changed to "Demand for human cardiologists is at an all-time high" and still be true.
For example, in CVD detection the cardiologist needs to diagnose the patient properly, and if the patient is not happy with the diagnosis they can get a second opinion from another cardiologist. But cardiologists are in very short supply, even more so than radiologists.
Most countries in the world have only several hundred to several thousand registered cardiologists, making the cardiologist-to-population ratio about 1:100,000.
People expect cardiologists to go through their ECG readings, but reading an ECG is very cumbersome. Say you have the 5 minutes of ECG signal that the guideline sets as the minimum requirement for AFib detection. The standard ECG is 12-lead, so even that minimum 5-minute duration yields 12 x 5 x 60 = 3,600 beats to review (assuming 1 minute of ECG equals 60 beats). Then of course there is the Holter ECG, with its typical 24-hour recording that increases the duration considerably, which is why almost all Holter reading is now automated. But current automated ECG detection has very low accuracy, because the accuracy of the detection methods (statistics/AI/ML) is bounded by the beat-detection algorithm, for example the venerable Pan-Tompkins for the fiducial time-domain approach [1].
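To make that arithmetic concrete (a quick sketch, assuming the 60 beats-per-minute figure above):

```python
# Lead-beats a human reader must scan, per recording length.
LEADS = 12
BPM = 60

def beats_to_review(minutes: float, leads: int = LEADS, bpm: int = BPM) -> int:
    return int(leads * minutes * bpm)

print(beats_to_review(5))        # 3,600 for the minimum 5-minute AFib read
print(beats_to_review(24 * 60))  # 1,036,800 for a 24-hour Holter
```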
Cardiologists would rather spend their time on more interesting activities, like teaching future cardiologists, performing expensive procedures like ICD or pacemaker implantation, or taking their once-in-a-blue-moon holidays, instead of reading monotonous patient ECGs.
I think this is why automating ECG reading with AI/ML is necessary to complement the cardiologist, but the trick is to push the sensitivity side of the accuracy very high, preferably to 100%, so that missed potential patients are minimized in the expert/cardiologist-in-the-loop exercise.
[1] Pan–Tompkins algorithm:
https://en.wikipedia.org/wiki/Pan%E2%80%93Tompkins_algorithm
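For the curious, a minimal sketch of the Pan-Tompkins pipeline cited in [1]: band-pass filter, differentiate, square, integrate over a moving window, then threshold. The fixed threshold here is a deliberate simplification of the paper's adaptive scheme, so treat this as an outline rather than a faithful implementation:

```python
import numpy as np
from scipy.signal import butter, filtfilt

def pan_tompkins_qrs(ecg: np.ndarray, fs: float) -> np.ndarray:
    """Return rough sample indices of QRS complexes in a single-lead ECG."""
    # 1. Band-pass 5-15 Hz to emphasize QRS energy, suppress drift and noise.
    b, a = butter(2, [5 / (fs / 2), 15 / (fs / 2)], btype="band")
    filtered = filtfilt(b, a, ecg)
    # 2. Differentiate to highlight the steep QRS slopes.
    diff = np.diff(filtered)
    # 3. Square: all values positive, large slopes amplified.
    squared = diff ** 2
    # 4. Moving-window integration (~150 ms) merges each QRS into one lobe.
    win = max(1, int(0.150 * fs))
    integrated = np.convolve(squared, np.ones(win) / win, mode="same")
    # 5. Fixed threshold plus a 200 ms refractory period (the original paper
    #    uses adaptive signal/noise thresholds and a search-back step).
    thresh = 0.5 * integrated.max()
    refractory = int(0.200 * fs)
    peaks, last = [], -refractory
    for i in range(1, len(integrated) - 1):
        if (integrated[i] > thresh
                and integrated[i] >= integrated[i - 1]
                and integrated[i] >= integrated[i + 1]
                and i - last >= refractory):
            peaks.append(i)
            last = i
    return np.asarray(peaks)
```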
Paul Kedrosky had an interesting analogy from when the automobile entered the scene: teamsters (the men who drove teams of horses) benefited from rising salaries, even as new people declined to enter the "dead end" profession. We may well be seeing a similar phenomenon with radiologists.
Finally, I'd like to point out that rising salaries mean there are greater incentives to find alternative solutions to this rising cost. Given the erratic political situation, I will not be surprised to see a relatively sudden transition to AI interpretation for at least a minority of cases.
I feel that human processes have inertia and, for lack of a better word, gatekeepers feel that new, novel approaches should be adopted slowly, which is why we are not seeing the impact yet. Once a country with the right incentive structure (e.g. China) can show that it can outperform and improve the overall experience, I am sure things will change.
While 10 years of progress is a lot in ML/AI, in more traditional fields it is probably a blip against institutional inertia, which changes generation by generation. All that is needed is an external actor to take the risk and show a step-change improvement. Having experienced healthcare in the US, I feel people are simply too scared to take on bold challenges.
From the article
My late wife had to have a stent placed in a vein in her brain to relieve cranial pressure. We had to travel to New York for an interventional radiologist and team to fish a 7-inch stent and balloon from her thigh up.
At the time, the doctor was one of only a half dozen in the US who could do the procedure. Who's going to train future physicians in the skills needed to develop procedures like that?
For stuff like this, I feel like AI is potentially going to erase certain human knowledge.
But if everyone involved has a profit motive, you end up cutting at those cost reductions. "We'll save you 100 bucks, so give us 50", done at the AI model level, the AI model repackager, the software suite that the hospital is using, the system integrators that manage the software suite installation for the hospital, the reseller of the integrator's services through some consultancy firm, etc etc.
There are so many layers involved, and each layer is so used to trying to take a slice, and we're talking about a good level of individualization in places that aren't fully public a la NHS, that the "vultures" (so to speak) are all there ready to take their cut.
Maybe anathema to say on this site, but de-agglomeration really seems to have killed just trying to make things better for the love of the game.
Just as one example, a chest CT would've cost $450 if paid in cash. It cost an insurer over $1,200 when done via insurance, and that was after multiple appeals and reviews involving time from people at the insurance company and the provider's office, including the doctor himself. The low-hanging fruit in American healthcare costs is stuff like that.
> All that is needed is an external actor to take the risk and show a step change improvement
Who's going to benefit? Doctors might prioritize the security of their livelihood over access to care. Capital will certainly prioritize the bottom line over life and death[0].
The cynical take is that for the time being, doctors will hold back progress, until capital finds a way to pay them off. Then capital will control AI and control diagnosis, letting them decide who is sick and what kind of care they need.
The optimistic take is that doctors maintain control but embrace AI and use it to increase the standard of care, but like you point out, the pace of that might be generational instead of keeping pace with technological progress.
[0] https://www.nbcnews.com/news/us-news/death-rates-rose-hospit...
But it doesn't lead to increased throughput because there needs to be human validation when people's lives are on the line.
Planes fly themselves these days; it doesn't increase the "throughput" or eliminate the need for a qualified pilot (and even a copilot!).
That's more than a problem of inertia
That's true for AI slop in the media (most of the internet was already lowest-effort garbage, which just got that tiny bit cheaper) and probably also in medicine (a slight increase in false negatives will be much, much more expensive than speeding up doctors by 50% on image interpretation). Once you get to the point where some other doctor is willing (and able) to take on the responsibility of that radiologist, then you can eliminate that kind of doctor, but still not her work, just the additional human-to-human communication.
If, statistically, their error rate is better than or around what a human achieves, then their insurance cost is a function of how many radiologists they intend to replace.
The system is designed in nanny-state fashion: there's no way to release practitioners from liability in exchange for less expensive treatment. I doubt this will change until healthcare pricing hits an extremely expensive breaking point.
You're probably right.
But we don't ask a second opinion to an "algorithm", we want a person, in front of us, telling us what is going on.
AI is and will be used in the foreseeable future as a tool by radiologists, but radiologists, at least for some more years, will keep their jobs.
When an X-Ray is ordered, there is usually a suspected diagnosis in the order like "suspect sprain, pls exclude fracture", "suspect lung cancer". Patients will complain about symptoms or give the impression of a certain illness. Things like that already place a bias on the evaluation a radiologist does, but they are trained to look past that and be objective. No idea how often they succeed.
[1] I don’t know the US system so it’s just a guess
4 years undergrad
4 years med school
2 years computer science
6 years of residency (intern year, 4 years of DR, 1 year of IR)
16 years...
4 years undergrad - major and minor not important, met the pre-med requirements
2 years grad school (got a master's degree, not required, but I was having fun)
4 years medical school
5 years radiology residency
Famous last words.
I think we all have become hyper-optimistic on technology. We want this tech to work and we want it to change the world in some fundamental way, but either things are moving very slowly or not at all.
[0] https://waymo.com/safety/impact/
I spend a lot of time as a pedestrian in Austin, and they are far safer than your usual Austin driver, and they also follow the law more often.
I always accept them when I call an Uber as well, and it's been a similar experience as a passenger.
I kinda hate what the Tesla stuff has done, because it makes it easier to dismiss those who are moving more slowly and focusing on safety and trust.
Is there a saying about overestimating change in the near term but underestimating it in the long term? I.e., the flying-car dreams of the '50s, etc.
"We always overestimate the change that will occur in the next two years and underestimate the change that will occur in the next ten."
I think about this quote a lot these days, especially while reading Hacker News. On one hand, AI doesn't seem to be having the productivity and economic impacts that were predicted, but on the other, LLMs are getting gold medals at the Math Olympiads. It's like the ground is shifting beneath our feet, but it's still too slow to be perceptible.
Rain, snow, etc. are still challenges, but it needs a bold bet in a place that wants to show how futuristic it is. The components are in place (Waymo cars); what is needed is a labor cost high enough to justify the adoption.
This is exactly what came to my mind also.
https://www.reuters.com/technology/tesla-video-promoting-sel...
For me, I have been riding in Waymos for the last year and have been very pleased with the results. I think we WANT this technology to move faster, but some of the challenges at the edges take a lot of time and resources to solve; they are not fundamentally unsolvable.
The interesting thing is that there are problems for which this rule applies recursively. Of the remaining 20%, most of it is easier than the remaining 20% of what is left.
Most software ships without dealing with that remaining 20%, and largely that is OK; it is not OK for safety critical systems though.
What I find really crazy is that most trains are still driven by humans.
When there are relatively few dangerous edge cases, technology often works better than we expect. TikTok's recommendation algorithm and Shazam are in this category.
Gee, seems like about the worst fucking thing in the world for diagnostics if you ask me, but what do I know, my degree is in sandwiches and pudding.
We've seen this with all of the players, with many dropping out due to the challenges.
Having said that, there are several that are fielded right now, with varying degrees of autonomy. Obviously Waymo has been operating in small-ish geofences for a while, but they have managed >200% annual growth readily. Zoox just started offering fully autonomous rides in Vegas.
And even Tesla is offering a service, albeit with safety monitors/drivers. Tesla Semi isn't autonomous at all, but appears ready to go into volume production next year too.
Your prediction will look a lot better by 2030.
Easy to forget the rest of the world does not and never has ticked this way.
Don't get me wrong, optimism and thinking of the future are great qualities we direly need in this world on the one hand.
On the other, you can't outsmart physics.
We've conquered the purely digital realm in the past 20 years.
We're already in the early years of the next phase, where the digital will become ever more multi-modal and make more inroads into the physical world.
So many people bring an old mindset to a new context, where the margin of error, the cost of mistakes, or optimizing the last 20% of a process is just so vastly different from a bit of HTML, JS and backend infra.
The truck part seems closer than the car part.
There are several driverless semis running between Dallas, Houston, and San Antonio every day. Fully driverless. No human in the cab at all.
Though, trucking is an easier to solve problem since the routes are known, the roads are wide, and in the event of a closure, someone can navigate the detour remotely.
These are all stepping stones, and eventually the technology is mature enough to productise. You would be shocked by how good Tesla FSD is right now. It can easily take you on a cross-country trip with almost zero human intervention.
Not even close.
The vast majority of people have a small number of local routes completely memorized and do station keeping in between on the big freeways.
You can see this when signage changes on some local route and absolute chaos ensues until all the locals re-memorize the route.
Once Waymo has memorized all those local routes (admittedly a big task), it's done.
Yikes.
I recommend you take some introductory courses on AI and theory of computation.
You can do 99% of it without AGI, but you do need it for the last 1%.
Unfortunately, the same is true for AGI.
This was why he went into industrial robotics instead, where it was clear that the finances could work out today.
Who is "we"? The people who hype "AI"?
For some reason, enthusiasts always think this time is different.
Still true: working conditions are harsh, the schedule as well; responsibilities and fines are high, but the pay is not.
In the case of Musk it has worked out. His lies have earned him a fortune, and now he asks Tesla to pay him out with a casual $1 trillion package.
As I recall, one group had fairly good success, but eventually someone figured out that their data set had images from a low-COVID hospital and a high-COVID hospital, and the lettering on the images used different fonts. The ML model was detecting the font, not the COVID.
[a bit of googling later...]
Here's a link to what I think was the debunking study: https://www.nature.com/articles/s42256-021-00338-7
If you're not at a university, try searching for "AI for radiographic COVID-19 detection selects shortcuts over signal" and you'll probably be able to find an open-access copy.
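A toy numeric reproduction of that failure mode, with made-up features standing in for the spurious marker (the font) versus the true radiographic signal:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

def make_data(n: int, shortcut_correlated: bool):
    y = rng.integers(0, 2, n)                # COVID / not COVID
    weak_signal = y + rng.normal(0, 2.0, n)  # weakly predictive pathology
    # The "font": perfectly tracks the label in training, random in the wild.
    marker = y if shortcut_correlated else rng.integers(0, 2, n)
    return np.column_stack([weak_signal, marker]), y

X_train, y_train = make_data(2000, shortcut_correlated=True)
X_test, y_test = make_data(2000, shortcut_correlated=False)

model = LogisticRegression().fit(X_train, y_train)
print("train accuracy:", model.score(X_train, y_train))  # ~1.0, rides the marker
print("test accuracy:", model.score(X_test, y_test))     # ~0.5, near chance
```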
I couldn't for the life of me understand how this was supposed to work. If the coughing of COVID patients (as opposed to patients with other respiratory illnesses) actually sounded different in a statistically meaningful way (and why did they suppose that it would? Phlegm is phlegm, surely), surely a human listener would have been able to figure it out easily.
[1] https://academic.oup.com/pmj/article/98/1157/212/6958858?log...
Sorry, but in the absence of general limiting principles that rule out such a scenario, that's how it's going to shake out. Visual models are too good at exactly this type of work.
Everything else besides the above in TFA is extraneous. Machine learning models could have absolute perfect performance at zero cost, and the above would make it so that radiologists are not going to be "replaced" by ML models anytime soon.
>Human radiologists spend a minority of their time on diagnostics and the majority on other activities, like talking to patients and fellow clinicians.
The vast majority of radiologists do nothing other than: come in (or increasingly, stay at home), sit down at a computer, consume a series of medical images while dictating their findings, and then go home.
If there existed some oracle AI that can always accurately diagnose findings from medical images, this job literally doesn't need to exist. It's the equivalent of a person staring at CCTV footage to keep count of how many people are in a room.
I also recently had surgery and the surgeon talked to the radiologist to discuss my MRI before operating.
In the same fashion, a construction worker just shows up, "performs a series of construction tasks", then goes home. We just need to make a machine that performs "construction tasks" and we can build cities, railways and road networks for nothing but the cost of the materials!
Perhaps this minor degree of oversimplification is why the demise of radiologists has been so frequently predicted?
Do you have some kind of source? This seems unlikely.
The current "workflow" is primary care physician (or specialist) -> radiology tech that actually does the measurement thing -> radiologist for interpretation/diagnosis -> primary care physician (or specialist) for treatment.
If you have perfect diagnosis, it could be primary care physician (or specialist) -> radiology tech -> ML model for interpretation -> primary care physician (or specialist).
PCPs don't have the training and aren't paid enough for that exposure.
To understand why, you would really need to take a good read of the average PCP's malpractice policy.
The policy for a specialist would be even more strict.
You would need to change insurance policies before your workflow was even possible from a liability perspective.
Basically, the insurer wants, "a throat to choke", so to speak. Handing up a model to them isn't going to cut it anymore than handing up Hitachi's awesome new whiz-bang proton therapy machine would. They want their pound of flesh.
How often do they talk to patients? In all the times I have had an x-ray, I have never talked to a radiologist. Fellow clinicians? Train the x-ray tech up a bit more.
If the moat is 'talking to people', that is a moat that doesn't need an MD, or at least not a fully specialized MD. ML could kill the radiologist MD; 'radiologist' could become the job title of a nurse or x-ray tech specialized in talking to people about the output.
That's fine. But then the xray tech becomes the radiologist, and that becomes the point in the workflow that the insurer digs out the malpractice premiums.
In essence, your xray techs would become remarkably expensive. Someone is talking to the clinicians about the results. That person, whatever you call them, is going to be paying the premiums.
Is this uncommon in the rest of the US?
If we had followed every AI evangelist's suggestion, the world would have collapsed.
But he said it in the context of a Q&A session that happened to be recorded. Unless you're a skilled politician who can give answers without actually saying anything, you're going to say silly things once in a while in unscripted settings.
Besides that, I'd hardly call Geoffrey Hinton an AI evangelist. He's more on the AI doomer side of the fence.
With interventional radiologists and radio-oncologists it's different, but we're talking about radiologists here...
At the time? I would say he was an AI evangelist.
People can't tell what they'll eat next Sunday, but they'll predict AGI and the singularity in 25 years. It's comfy because 25 years seems like a lot of time; it isn't.
https://en.wikipedia.org/wiki/List_of_predictions_for_autono...
> I'd wager in 25 years we'd get there so I think his opinion still has a large percentage of being correct.
What percent, and which maths and facts let you calculate it? The only percent you can be sure about is that it's 100% wishful thinking.
Maybe don't?
I mean if you change the data to fit your argument you will always make it look correct.
Let's assume we had stopped in 2016 like he said: where do we get the 1,000 radiologists the US needs each year?
And I sent a man to a radiologist for a lumbar perineural injection.
And I sent a person to a radiologist for a subacromial bursa injection.
And a month ago I sent a woman to a radiologist to have adenomyosis embolised.
Also talked to a patient today who I will probably send to a radiologist to have a postnephrectomy urinary leak embolised.
Is an LLM going to do that ?
There is another issue.
If AI commoditises a skill, competent people with options will just shift to another skill while offloading the commoditised skill to someone else.
Due to automated ECG interpretation built into every machine, reimbursement has plummeted. So I have let my ECG interpretation skills rust while focusing on my neurology and movement-disorder skills. They are fun... I also did part of a master's in AI decades ago (Prolog, Lisp, machine vision; good times...).
So now if someone needs an ECG, I am probably going to send them to a cardiologist, who will do an ECG, Holter, Echo, Stress Echo, etc. Income for the nice friendly cardiologist; extra cost and time for the patient and the health system.
I can imagine, like food deserts, entire AI deserts in medicine that nobody wants to work in. A bit like geriatrics, rural medicine and psychiatry these days.
Automating as much of that as possible and making healthcare more accessible should be pursued. Just like automated ECG interpretation made basic ECG more accessible.
Oof - I hope the tools you're using as a physician are better than what I see in the field as a paramedic.
I have never met a Lifepak (or Zoll) that doesn't interpret anything but the most textbook sinus rhythm in pristine conditions as "ABNORMAL ECG - EVALUATION NEEDED".
AI is going to augment radiologists first, and eventually, it will start to replace them. And existing radiologists will transition into stuff like interventional radiology or whatever new areas will come into the picture in the future.