Why would I ever ask to vote in a remote place where I don't live permanently, and where I'm not even a citizen?
On the medical side, we need statistically sound tests that physicians can learn and rely on - this paper was likely obsolete by the time it was published, depending on what "AI CAD" means in practice.
I think this impedance mismatch between disciplines is pretty interesting; any thoughts from someone who understands the med side better?
"The images were analyzed using a commercially available AI-CAD system (Lunit INSIGHT MMG, version 1.1.7.0; Lunit Inc.), developed with deep convolutional neural networks and validated in multinational studies [1, 4]."
It's presumably a proprietary model, so you're not going to get a lot more information about it, but it's also one that's currently deployed in clinics, so...it's arguably a better comparison than a SOTA model some lab dumped on GitHub. I'd add that the post headline is also missing the point of the article: many of the missed cases can be detected with a different form of imaging. It's not really meant to be a model shoot-out style paper.
* Kim, J. Y., Kim, J. J., Lee, H. J., Hwangbo, L., Song, Y. S., Lee, J. W., Lee, N. K., Hong, S. B., & Kim, S. (2025). Added value of diffusion-weighted imaging in detecting breast cancer missed by artificial intelligence-based mammography. La Radiologia Medica. Advance online publication. https://doi.org/10.1007/s11547-025-02161-1
Am I running if I say "I'm running now" while I sit in my chair?
"I'm running now" doesn't make you jog if you're sitting down, but it certainly kicks off a campaign if you were considering elected office.
J. L. Austin called these sorts of statements "performative utterances", and there's a lot of linguistic debate about them. Nevertheless, "I declare war", uttered by someone with the power to do so, is a pretty unambiguous example of one.
https://news.ycombinator.com/item?id=46289133
EDIT: The reason being, with reliabilities as bad as these, it is obvious almost all fMRI studies are massively underpowered, and you really need to have hundreds or even up to a thousand participants to detect effects with any statistical reliability. Very few fMRI studies ever have even close to these numbers (https://www.nature.com/articles/s42003-018-0073-z).
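To make the underpowering concrete, here's a back-of-the-envelope sketch. The effect size and reliability numbers are hypothetical, chosen just for illustration; it combines Spearman's attenuation formula (low test-retest reliability shrinks the observable correlation) with the standard Fisher-z approximation for the sample size needed to detect a correlation:

```python
# Hypothetical illustration: how low test-retest reliability inflates
# the N needed to detect a between-subject brain-behaviour correlation.
from math import atanh, ceil, sqrt
from statistics import NormalDist

def n_needed(r, alpha=0.05, power=0.80):
    """Approximate N to detect correlation r (two-sided Fisher-z test)."""
    z_a = NormalDist().inv_cdf(1 - alpha / 2)  # critical value, two-sided
    z_b = NormalDist().inv_cdf(power)          # value for desired power
    return ceil(((z_a + z_b) / atanh(r)) ** 2 + 3)

true_r = 0.3   # assumed "true" effect size
rel = 0.4      # assumed test-retest reliability (ICC) of each measure

# Spearman's attenuation: observed r = true r * sqrt(rel_x * rel_y)
observed_r = true_r * sqrt(rel * rel)  # = 0.12

print(n_needed(true_r))      # ~85 participants if measures were perfect
print(n_needed(observed_r))  # ~540+ participants at reliability 0.4
```

So even a moderate true effect, measured with the reliabilities reported in studies like the one linked, pushes required samples into the hundreds - which very few fMRI studies reach.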
Within-subject effects (this happens when one does A, but not when doing B) can be fine with small sample sizes, especially if you can repeat variations on A and B many times. This is pretty common in task-based fMRI. Indeed, I'm not sure why you need >2 participants except to show that the principle is relatively generalizable.
Between-subject comparisons (type A people have this feature, type B people don't) are the problem because people differ in lots of ways and each contributes one measurement, so you have no real way to control for all that extra variation.
They are indeed coupled, but the coupling is complicated and may be situationally dependent.
Honestly, it's hard to imagine many aggregate measurements that aren't. For example, suppose you learn that the average worker's pay increased. Is it because a) the economy is booming, or b) the economy crashed and the lowest-paid workers have all been laid off (and are no longer counted)?
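A toy version of that pay example, with made-up numbers, shows the composition effect: the average rises even though no individual's pay changed.

```python
# Hypothetical salaries: nobody gets a raise, yet "average pay" goes up
# once the lowest-paid workers drop out of the sample.
low_paid = [20_000, 25_000]
high_paid = [60_000, 80_000]

def avg(xs):
    return sum(xs) / len(xs)

before = low_paid + high_paid   # everyone employed and counted
after = high_paid               # low-paid workers laid off, not counted

print(avg(before))  # 46250.0
print(avg(after))   # 70000.0 -- "the average worker's pay increased"
```

Same underlying individuals, opposite-sounding headline, purely because the set being averaged changed.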
Herting, M. M., Gautam, P., Chen, Z., Mezher, A., & Vetter, N. C. (2018). Test-retest reliability of longitudinal task-based fMRI: Implications for developmental studies. Developmental Cognitive Neuroscience, 33, 17–26. https://doi.org/10.1016/j.dcn.2017.07.001
It's not at all clear to me that teenagers' brains OR behaviours should be stable across years, especially when it involves decision-making or emotions. Their Figure 3 shows that sensory experiments are a lot more consistent, which seems reasonable.
The technical challenges (registration, motion, etc.) seem like things that will improve, and there are some practical suggestions as well (counterbalancing items, etc.).
These can be measured themselves separately (that's exactly what they did here!) and if there's a spatial component, which the figures sort of suggest, you can also look at what a particular spot tends to do. It may also be interesting/important to understand why different parts of the brain seem to use different strategies to meet that demand.
I think your expertise would be very welcome, but this comment is entirely unhelpful as-is. Saying there are bad comments in this thread and also that there is good literature out there without providing any specifics at all is just noise.
You don't have to respond to every comment you see to contribute to the discussion. At minimum, could you provide a hint for some literature you suggest reading?
The BOLD signal, the thing measured by fMRI, is a proxy for actual brain activity. The logic is that neural firing requires a lot of energy, so active neurons will be using more oxygen for their metabolism, and this oxygen comes from the blood. Thus, if you measure local changes in the oxygenation of blood, you'll know something about how active nearby neurons are. However, it's an indirect and complicated relationship. The blood flow to an area can itself change, or cells could extract more or less oxygen from the blood--the system itself is usually not running at its limits.
Direct measurements from animals, where you can measure (and manipulate) brain activity while measuring BOLD, have shown how complicated this is. Nikos Logothetis's and Ralph Freeman's groups, among many others, did a lot of work on this, especially c. 2000-2010. If you're interested, you could check out this News and Views on Logothetis's group's 2001 Nature paper [1]. One of the conclusions of their work is that BOLD is influenced by a lot of things but largely measures the inputs to an area and the synchrony within it, rather than just the average firing rate.
In this paper, the researchers adjust the MRI sequences to compare blood oxygenation, oxygen usage, and blood flow and find that these are not perfectly related. This is a nice demonstration, but not a totally unexpected finding either. The argument in the paper is also not "abandon fMRI" but rather that you need to measure and interpret these things carefully.
In short, the whole area of neurovascular coupling is hard--it includes complicated physics (to make the measurements), tricky chemistry, and messy biology, all in a system full of dynamics and feedback.
Maybe things have really changed a lot since I was in school, but those were certainly not the kinds of questions asked about set works.
The questions were set up so that the more the student got into the book, the higher the mark they could get.
Easy questions (everyone gets this correct if they read the book): Did his friends and family consider $protagonist to be miserly or generous?
Hard questions (only those slightly interested got these correct): Examine the tone of the conversation between $A and $B in $chapter, first from the PoV of $A and then from the PoV of $B. List the differences, if any, between the tone in which $A intended his instructions to be received and the tone in which $B actually understood them.
Very hard questions (for those who scored 90%+ in English): In the story arc for $A, it can be claimed that the author intended to mirror the arc of Cordelia from King Lear. Make an argument for or against this claim.
That last one is the real deal; answerable only by students who like to read and have read a lot - it involves having read similar characters from similar stories, then knowing about the role of Cordelia, and at least a basic analysis of her character/integrity, maybe having read more works by this same author (they'll know if the mirroring is accidental or intentional), etc.
We were never asked "what color shirt did $A wear to the outing" types of questions (unless, of course, that was integral to the plot - $A was a double-agent, and a red shirt meant one thing to his handler while a blue shirt meant something else).
Did I like the set works? Mostly not, but I had enough fiction under my belt in my final two years of high school that I could sail through the very difficult questions, pulling in analogies, character arcs, tone, etc. from a multitude of Shakespeare plays, social-issue fiction ("Cry, The Beloved Country", "To Kill a Man's Pride", "To Kill a Mockingbird", etc.), thrillers (Frederick Forsyth, et al.), SciFi (Frederik Pohl, Isaac Asimov, Philip K. Dick), Horror-ish (Stephen King, Dean R. Koontz) and more.
With my teenager, now in his second-to-last year of high school, I keep repeating the mantra "To get high English marks, you need to demonstrate critical thinking, not usage of fancy words", but alas, he never reads anything that could be considered a book, so his marks never get anywhere near the 90% that I regularly averaged :-(
The only books he's ever read are those he's been forced to read in school.
However, some of the teachers at my school also had short pop quizzes meant to ensure that everyone kept up with the reading. These were usually just some details from the assigned chapters and, IMO, often veered into minutiae. One really was about the color of something, and I don’t remember it being particularly plot-relevant or symbolic, even if it was mentioned a few times.
It wasn’t a huge part of one’s grade, but I distinctly remember being frustrated that these quizzes effectively penalized me for “getting into” the book and reading ahead.
The idea behind the recent boom in low-field stuff is that you'd like to have small/cheap machines that can be everywhere and produce good-enough images through smarts (algorithms, design) rather than brute force.
The attitude on the research side is essentially "why not both?" Crank up the field strength AND use better algorithms, in the hopes of expanding what you can study.