Readit News
eig · 2 years ago
A few months ago there were articles going around about how Samsung galaxy phones were upscaling images of the Moon using AI [0]. Essentially, the model was artificially adding landmarks and details based on its training set when the real image quality was too poor to make out details.

Needless to say, AI upscaling as described in this article would be a nightmare for radiologists. 90% of radiology is confirming the absence of disease when image quality is high, and asking for complementary studies when image quality is low. With AI enhanced images that look "normal", how can the radiologist ever say "I can confirm there is no brain bleed" when the computer might be incorrectly adding "normal" details when compensating for poor image quality?

[0] - https://news.ycombinator.com/item?id=35136167

BobbyTables2 · 2 years ago
The Samsung phone wasn’t a technological advancement, it was sheer fraud.

A camera is supposed to take pictures of what it sees.

Imagine going to a restaurant, ordering French onion soup, and getting a bowl of brown food coloring in water.

pavlov · 2 years ago
> “A camera is supposed to take pictures of what it sees.”

Feels like that’s just a matter of expectations.

A phone used to be a device for voice communications. It's right there in the Greek etymology, "phonē" for sound. But 95% of what people do today on devices called phones is something other than voice.

Similarly, if people start using cameras more to produce images of things they want rather than what exists in front of the lens, then that’s eventually what a camera will mean. Snapchat thinks of themselves as a camera company, but the images captured within their apps are increasingly synthesized.

(The etymology of “camera” already points to a journey of transformation. A photographic camera isn’t a literal room, as the camera obscura once was.)

sieste · 2 years ago
> Imagine going to a restaurant, ordering French onion soup, and getting a bowl of brown food coloring in water.

Welcome to England!

farseer · 2 years ago
Now that you mention it: I recently picked up a bottle of red vinegar with large pictures of red grapes on it. Naturally I assumed it was grape vinegar. How shocking to discover that this Chinese company was selling acetic acid mixed with food coloring.
vhcr · 2 years ago
Where do you draw the line? RAW, HDR, photo stitching, blur removal?
jajko · 2 years ago
I wouldn't go so far as to call it fraud, unless you call literally every phone-camera manufacturer these days a fraud. Then I'd agree, since my trusty old Nikon full-frame only captures what's there, including the noise and instability that modern phones handle easily.

People were commenting on that thread about how an Apple phone, e.g., mirrored only the bunny within a bigger picture of a bunny in the grass (a rather hilarious 'bug'), and we all know how Apple consistently removes moles and wrinkles and completely changes skin tone and overall tonality, so that every single picture looks like it was taken in the golden sunset hour. I.e., that nasty Samsung is much more truthful when it comes to this, including its latest flagships.

That's outright lying too, IMHO much worse. The Moon is tidally locked, showing exactly the same side with the same features for millions of years, so Samsung was adding details that are actually there, just impossible to see with a non-stabilized tiny plastic lens-and-sensor combo at night.

Making somebody look 20 years younger and much prettier, changing the overall look of the most important feature we humans have, by default and without any real option to turn it off, does a lot of long-term body-perception damage to young folks.

coldtea · 2 years ago
>Imagine going to a restaurant, ordering French onion soup, and getting a bowl of brown food coloring in water

Isn't that like 80% of the mass food industry and 99% of the fast food industry?

zarmin · 2 years ago
It's kinda like the classic Ebay scam where you buy a picture of the item instead of the item.
YetAnotherNick · 2 years ago
> A camera is supposed to take pictures of what it sees.

You wouldn't like a picture of what it actually sees; the lens is just not big enough. Even ProRAW and the other features phones have introduced apply processing.

eig · 2 years ago
An MRI machine is a fancy 3D camera. Is this "3D Deep-DSP Model" so different from the processing Samsung did on their phones?
bamboozled · 2 years ago
Yeah but that's too hard, and you can just use "AI" to make cool photos instead. Who wants an actual camera when you can have something "like" a camera at half the price?
bawolff · 2 years ago
> A camera is supposed to take pictures of what it sees.

If people wanted cameras to actually capture what they see, then we wouldn't have autofocus, Photoshop, or Instagram filters.

The goal of a cell phone camera is to capture what you are experiencing, not to literally record what light strikes the CMOS chip.

atoav · 2 years ago
This is one aspect of machine learning models I keep discussing with non-technical passengers of the AI hype train: they are (in their current form) unsuitable for applications where correctness is absolutely critical.
teaearlgraycold · 2 years ago
I don’t know enough to make absolute statements here, but deep learning models can beat out human experts at discerning between signal and noise. Using that to guess at data and then hand it off to humans gives you the worst of both worlds. Two error probabilities multiplied together. But to simply render a verdict on whether a condition exists I’d trust a proven algorithm.
bufferoverflow · 2 years ago
As long as AI makes things better on average, it's useful. It doesn't have to be 100% correct.
cactusfrog · 2 years ago
This is not true, but it is a major challenge. See https://www.pathai.com/
nullc · 2 years ago
The state of the art MRI stuff uses "compressed sensing" -- essentially image completion in some domain or another. Presumably carefully designed not to hallucinate details, or so one would hope.

There isn't necessarily a particularly neutral choice here: the MRI scan isn't in the pixel domain, artifacts are going to be 'weird' looking-- e.g. edges that move during the scan ringing across the whole image.
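For context, compressed sensing recovers a signal from undersampled measurements by exploiting sparsity. This is a generic toy sketch (iterative soft-thresholding on a random sensing matrix, not any scanner's actual pipeline):

```python
import numpy as np

rng = np.random.default_rng(3)
n, m, k = 200, 80, 8                    # signal length, measurements, sparsity
x_true = np.zeros(n)
x_true[rng.choice(n, k, replace=False)] = rng.standard_normal(k)

A = rng.standard_normal((m, n)) / np.sqrt(m)   # random sensing matrix
y = A @ x_true                                  # undersampled measurements

# ISTA: gradient step + soft threshold for min ||y - Ax||^2 + lam*||x||_1
lam = 0.01
step = 1.0 / np.linalg.norm(A, 2) ** 2
x = np.zeros(n)
for _ in range(3000):
    x = x + step * (A.T @ (y - A @ x))
    x = np.sign(x) * np.maximum(np.abs(x) - step * lam, 0.0)

rel_err = np.linalg.norm(x - x_true) / np.linalg.norm(x_true)
```

The recovery works only because the sparsity prior is true of the signal, which is exactly the kind of assumption that can go wrong on atypical anatomy.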

CooCooCaCha · 2 years ago
Compressed sensing is far more mathematically rigorous.
m463 · 2 years ago
They've already made this mistake. There was one model that detected skin cancer because there was always a ruler in the images.
mbirth · 2 years ago
So we’re not that much further than the neural networks urban legend from the early 90s.

https://gwern.net/tank

andbberger · 2 years ago
The future is already here: GE has put this into production. If you remove the onerous constraint of being correct, you can make some really crispy images! "9/10 radiologists prefer it." That was literally what the FDA approval process was: they surveyed a bunch of radiologists to see which images they preferred. No adults in the room.

heads should roll etc

wholinator2 · 2 years ago
Do you have a link or anything? I'm highly interested but unable to find more on this
wslh · 2 years ago
The interesting, indeed concerning, thing is that this problem applies not only to medical machines and mobile phones but to zillions of everyday wearable devices such as smartwatches, brain EEG headsets (e.g. Muse), and others, without warning users that what they see (e.g. HRV) can't be easily interpreted by a computer program.

I'm not saying that we humans are always better, but we are trusting numbers and conclusions from apps taken as-is.

elektropionir · 2 years ago
It is just weird that papers like this can be published. "Deep learning signal prediction effectively eliminated EMI signals, enabling clear imaging without shielding." This means they have found a way to remove random noise, which, if true, should be the truly revolutionary claim of the paper.

But if the "EMI" is not random, you can just filter it, so you don't need what they are doing. And if whatever they are doing can "predict" the noise (they even use the word in that sentence), then it isn't random.

They are claiming that they can replace physical filtering of noise before it corrupts the signal (shielding) with software "removal" of noise after it has already corrupted the signal. This is simply not possible without loss of information (i.e. resolution). The images they get from standard Fourier-transform reconstruction are still pretty noisy, so on top of that they "enhance" the reconstruction by running it through a neural net. At that point they don't need the signal: just tell the network what you want to see. The fact that there are no validation scans using known phantoms is telling.
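A toy numeric sketch of the distinction (all signals hypothetical): a reference channel, like the paper's EMI sensing coils, lets you subtract the part of the interference that is predictable from it, but the truly random part of the corruption survives untouched.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 4096
t = np.arange(n) / n
signal = np.sin(2 * np.pi * 12 * t)                 # signal of interest
emi = np.sin(2 * np.pi * 50 * t + 0.3)              # deterministic interference
white = 0.5 * rng.standard_normal(n)                # truly random noise

reference = 0.8 * emi + 0.05 * rng.standard_normal(n)  # EMI pickup coil
measured = signal + emi + white                        # main receive coil

# Least-squares fit of the reference onto the measurement, then subtract:
# this removes only the component that is predictable from the reference.
w = (reference @ measured) / (reference @ reference)
cleaned = measured - w * reference
```

After subtraction the deterministic EMI is essentially gone, but the residual error is still about the standard deviation of the white noise: no amount of post-hoc prediction recovers information the random noise destroyed.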
MrLeap · 2 years ago
Remember the early atomic age when people were doing wild shit like adding radium to your toothpaste so you can brush your teeth in the dark?

This is that, but again, with AI.

azalemeth · 2 years ago
I'm a professional MR physicist. I genuinely think the profession is hugely up the hype curve with "AI" and, to a far lesser extent, low field. It's also worth saying that the rigorous, "proper" journal in the field is Magnetic Resonance in Medicine, run by the International Society for Magnetic Resonance in Medicine -- and that papers in Nature or Science nowadays generally tend to be at the extreme gimmicky end of the spectrum.

A) Many MR reconstructions work by having a "physics model", typically in the form of a linear operator, acting upon the acquired data. The "OG" recon, an FT, is literally just a Fourier matrix acting on the data. Then people realised that it's possible to (i) encode lots of artefacts, and (ii) undersample k-space while using the spatial information from different physical RF coils, and shunt both of these into the framework of linear operators. This makes reconstruction possible -- and Tikhonov regularisation became popular -- so you minimise an equation like argmin_y || yhat - X_1 X_2 X_3 ... X_n y ||^2 + lambda || Laplace(y) ||^2, which genuinely does a fantastic job, usually at the expense of non-normal noise in the image. "AI" can outperform these algorithms a little, usually by having a strong prior on what the image is. I think it's helpful to consider this as some sort of upper bound on what there is to find. But as a warning, I've seen images of sneezes turned into knees with torn anterior cruciate ligaments, a matrix of zeros turned into basically the mean heart of a dataset, and a fuck ton of people talking bollocks empowered by AI. This isn't even starting on diagnosis -- just image recon. The major driver is reducing scan time (= cost), required SNR (= sqrt(scan time)), or, rarely, measuring new things that take too long. This almost falls into the second category.

The main conference in the field has just happened and, ironically, the closing plenary was about the risks of AI, as it happens.

B) Low field itself has a few genuinely good advantages. The T2 is longer, the risks to patients with implants are lower, and the machines may be cheaper to make. I'm not sold on that last one at all. I personally think that the bloody cost of the scanner isn't the few km of superconducting wire in it; it's the tens of thousands of PhD-educated hours of labour that went into making the thing and its large infrastructure requirements, to say nothing of the requirements of the people who look at the pictures. There are about 100-250k scanners in the world and they mostly last about a decade in an institution before being recycled, either as niobium-titanium or as a scanner on a different continent (typically). Low field may help with siting and electricity, but comes at the cost of concomitant field gradients, reduced chemical-shift dispersion, a whole set of different (complicated) artefacts, and the same load of companies profiteering from them.

fnordpiglet · 2 years ago
Would it be easier to deploy devices like this to developing countries without the infrastructure to support liquid helium distribution? I imagine a much simpler device WRT exotic cooling and distribution of material requirements is a plus. Couple that with the scarcity and non-renewable nature of helium, and maybe using devices like this at scale for gross MRI imagery makes sense?

The AI used here as I read it is a generative approach trying to specifically compensate for EMI artifacts rather than a physics model and it likely wouldn’t be doing macro changes like sneezes to knees, no?

bone_slide · 2 years ago
As one of the people that look at the images, this is the best comment in the thread.

Lots of AI nonsense permeating radiology right now, which seems to be fairly effective click bait and an easy way to generate hype and headlines.

op00to · 2 years ago
It would suck if lesions or tumors look like noise.
fnordpiglet · 2 years ago
Except there are other uses for an MRI and something that doesn’t require super conductors would be pretty awesome and deployable to places that lack the infra to support a complex machine depending on near absolute zero temperatures and the associated complexities.
cornholio · 2 years ago
> We conducted imaging on healthy volunteers, capturing brain, spine, abdomen, lung, musculoskeletal, and cardiac images. Deep learning signal prediction effectively eliminated EMI signals, enabling clear imaging without shielding.

So essentially, the neural net was trained to what a healthy MRI looks like and would, when exposed to abnormal structures, correct them away as EMI noise leading to wrong diagnostics?

I don't want to be too dismissive of this approach; deep learning probably has a strong role to play in improving medical imaging. But this paper is far, far from sufficient to prove it. At a minimum, it would require mixed healthy / abnormal patients with particularities that don't exist in the training set, and each diagnosis reconfirmed later on a high-resolution machine. You need to actually prove the algorithm does not distort the data, because an MRI that hallucinates a healthy patient is much more dangerous than no MRI at all.

rossant · 2 years ago
Seems like a huge and obvious red flag to me indeed. I can't imagine how the authors managed to not even mention the issue in the abstract. If the model is trained on healthy scans, well, yes, it will spit out healthy scans. The whole point of clinical radiology is to get enough precision to detect (potentially subtle) anomalies.
nullc · 2 years ago
I don't think that (necessarily) says what you think it says.

You can read that as saying the DL eliminated the background noise, rather than that the system was conditioned on images of healthy people. For all we know, it may well have been conditioned on just an empty machine or neutral test samples.

If so, there may be a good reason to suspect that it isn't likely to create artifacts that look like or mask anatomical structures.

cornholio · 2 years ago
You can read it like that, but they surely didn't prove it works like that and the burden of proof is squarely on them.

Realistically, the training set is most likely MRIs of similar tissues and would be naturally biased towards healthy structures. Even the remotest possibility of a hallucination should be addressed and disproved for such an application but they make no mention of it, just "OMG magic ENHANCE button!".

hackerlight · 2 years ago
If the noise exists only on a certain frequency then the model would learn a passband filter of sorts and won't necessarily filter out abnormal structures. But they'd need to verify that.
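The passband idea really does work for narrowband interference, e.g. this toy FFT notch (assuming mains pickup at a known 50 Hz):

```python
import numpy as np

fs, n = 1000.0, 2000                  # 1 kHz sampling, 2 s of data
t = np.arange(n) / fs
clean = np.sin(2 * np.pi * 7 * t)     # stand-in for the signal of interest
noisy = clean + 2.0 * np.sin(2 * np.pi * 50 * t)   # strong 50 Hz mains pickup

# Zero out a 4 Hz-wide band around 50 Hz in the spectrum
spec = np.fft.rfft(noisy)
freqs = np.fft.rfftfreq(n, 1 / fs)
spec[np.abs(freqs - 50.0) < 2.0] = 0.0
filtered = np.fft.irfft(spec, n=n)
```

The caveat is exactly the one above: anything anatomical whose energy shares the notched band is removed along with the interference, so the frequency separation would need to be verified.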
RivieraKid · 2 years ago
Well, in theory you can use a neural net to generate realistic MRI images with 0 tesla.
Toutouxc · 2 years ago
I love how succinct this argument is, and yet it contains everything.


alwa · 2 years ago
I can’t access the full paper, but from the abstract, is it accurate that they’re using ML techniques to synthesize higher-quality and higher-resolution imagery, and that’s the basis for their claim that it’s comparable to the output of a conventional MRI scan?

Do clinicians really prefer that the computer make normative guesses to “clean up” the scan, versus working with the imagery reflecting the actual measurements and applying their own clinical judgment?

eig · 2 years ago
I can say that most radiologists would not want a computer trying to fix poor scan data. If the underlying data is bad, they would recommend an orthogonal imaging modality. "I don't know" is a possible response radiologists can give. Training a model to "clean up" an image would bias the read towards "normal".
bone_slide · 2 years ago
Spot on. When I can't interpret a study due to artifact, I say that in my report.

Let's say there's a CTA chest that is limited because the patient breathed while the scan was being acquired. I need to let the ordering clinician know that the study is not diagnostic and recommend an alternative.

If AI eliminates the artifact by filling in expected but not actually acquired data, I am screwed and the patient is screwed.

falcor84 · 2 years ago
To nitpick, wouldn't it by definition bias the read toward normal? I suppose the problem is more that you don't want to bias it toward normal when it isn't.
deepsun · 2 years ago
My understanding as well. That will bias results towards the training data and miss more anomalies. And anomalies are the point of scanning.
j7ake · 2 years ago
They already use computers to make guesses to clean up the scan.

A core part of MRI processing is the compressed-sensing algorithm.

bone_slide · 2 years ago
As a practicing radiologist, I think this is great. We can have AI enabled MRI scanners hallucinating images, read by AI interpreting systems hallucinating reports!
rossant · 2 years ago
And then we have systems hallucinating patients, data, and entire medical publications! The future is here.
rasmus1610 · 2 years ago
I'm a radiologist and very sceptical about low-field MRI + ML actually replacing normal high-field MRI for standard diagnostic purposes.

But in an emergency setting, or especially for MRI-guided interventions, these low-field MRIs can really play a significant role. Combining them with rapid imaging techniques makes me really excited about what interventional techniques might become possible.

sitkack · 2 years ago
There is an opinion piece in the same issue that agrees with you.

https://www.science.org/doi/10.1126/science.adp0670

> This machine costs a fraction of current clinical scanners, is safer, and needs no costly infrastructure to run (2). Although low-field machines are not capable of yielding images that are as detailed as those from high-field clinical machines, the relatively low manufacturing and operational costs offer a potential revolution in MRI technology as a point-of-care screening tool.

I don't think this machine is being billed as replacement to high-field machines.

xattt · 2 years ago
> I don't think this machine is being billed as replacement to high-field machines.

Countries where health regulation is less developed are likely to see misrepresentation where this form of MRI will be equated to full-field MRI by snake oil salesmen.

bagels · 2 years ago
What is it about lower fields that means you cannot get a good image? Interference? Tissue movement in longer exposures? Why can't the device just integrate over a longer period of time?
pbmonster · 2 years ago
It's just the physical reality of nuclear magnetic resonance. SNR scales with B^(3/2), since the signal scales with B^2 and the noise scales with root B.

This means going from 0.05T to 1.5T boosts your sensitivity ~150x. Measurement time scales with sensitivity^2, so you'd have to measure 20k x longer.
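Plugging the numbers in (the ~150x and ~20k figures above are rounded):

```python
B_low, B_high = 0.05, 1.5
snr_gain = (B_high / B_low) ** 1.5   # SNR ∝ B^(3/2): 30**1.5 ≈ 164
time_factor = snr_gain ** 2          # averaging time ∝ 1/SNR^2: 30**3 = 27000
```

So matching high-field SNR purely by averaging would take on the order of 27,000x longer at 0.05 T, which is why these scanners lean on priors instead.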

sitkack · 2 years ago
A system like this could be applied as an augmentation to imagers like CT and ultrasound. Because of its super-resolution upscaling and low raw resolution (2×2×8 mm), it might not be usable for early cancer detection. But it looks really useful in a trauma center or for guiding surgery, etc. These same techniques could also be applied to CT scans; I could see a multi-sensor scanner that does both CT and NMRI using very low power, potentially even battery powered.

Regardless, this is super neat.

> We developed a highly simplified whole-body ultra-low-field (ULF) MRI scanner that operates on a standard wall power outlet without RF or magnetic shielding cages. This scanner uses a compact 0.05 Tesla permanent magnet and incorporates active sensing and deep learning to address electromagnetic interference (EMI) signals. We deployed EMI sensing coils positioned around the scanner and implemented a deep learning method to directly predict EMI-free nuclear magnetic resonance signals from acquired data. To enhance image quality and reduce scan time, we also developed a data-driven deep learning image formation method, which integrates image reconstruction and three-dimensional (3D) multiscale super-resolution and leverages the homogeneous human anatomy and image contrasts available in large-scale, high-field, high-resolution MRI data.
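The EMI-sensing idea in the abstract can be sketched with a linear stand-in for the paper's deep model (all couplings and signals below are hypothetical, and for simplicity the same EMI realisation is reused for calibration and scan): learn a map from the sensing coils to the EMI seen by the receive coil during a calibration with no sample present, then subtract the prediction during scans.

```python
import numpy as np

rng = np.random.default_rng(2)
n, n_ref = 5000, 3                      # time samples, EMI sensing coils

# Hypothetical mixing: two EMI sources couple into main and reference coils
sources = rng.standard_normal((n, 2))
mix_main = np.array([0.7, -0.4])                 # coupling into receive coil
mix_refs = rng.standard_normal((2, n_ref))       # coupling into sensing coils

fid = np.sin(np.linspace(0, 40 * np.pi, n)) * np.exp(-np.linspace(0, 5, n))
refs = sources @ mix_refs + 0.01 * rng.standard_normal((n, n_ref))

# Calibration: no sample in the bore, so the main coil sees EMI only
calib_main = sources @ mix_main
W, *_ = np.linalg.lstsq(refs, calib_main, rcond=None)

# Scan: predict the EMI from the sensing coils and subtract it
main = fid + sources @ mix_main
cleaned = main - refs @ W
```

The open question the thread raises still applies: this only removes what the sensing coils can predict, and the separate super-resolution stage is where anatomy-shaped priors enter.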