Readit News
remmargorp64 · 2 years ago
Basically, if someone is obese, they probably have some amount of insulin resistance (leading to type 2 diabetes).

Fat affects the vocal cords, and this can be detected in vocal patterns.

This whole "AI Tool" could be replaced with a simple question:

"Are you fat?"

But people often aren't honest with themselves about this topic.

test6554 · 2 years ago
Sorry, best I can do is an app that tells you if the person you are talking with on the phone is fat. Great for 900 numbers and such. It can even chime in and say "Fat person detected" on the call in a robot voice.
bjornlouser · 2 years ago
"Scientists at Klick Applied Sciences in Canada, working with faculty at Ontario Tech University in Canada, trained the AI using 267 voice recordings from people living in India.

Roughly 72 per cent of the participants (79 women and 113 men) had already been diagnosed as nondiabetic. The other participants (18 women and 57 men) had been diagnosed with type 2 diabetes.

All participants recorded a phrase six times per day for two weeks, resulting in a total of 18,000 recordings. The scientists then pinpointed 14 acoustic differences between those with and without type 2 diabetes."

kelseyfrog · 2 years ago
> optimal prediction models achieved accuracies of 0.75±0.22 for women and 0.70±0.10 for men

For a clinician, sensitivity and specificity are much more useful. It's too bad they didn't publish these.
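To illustrate why (with made-up counts, not numbers from the paper): two classifiers can have identical accuracy on the study's class split while being clinically very different.

```python
# Two hypothetical classifiers on 267 people (75 diabetic, 192 nondiabetic).
# Counts are illustrative only -- the paper does not report per-model confusion matrices.

def metrics(tp, fn, tn, fp):
    accuracy = (tp + tn) / (tp + fn + tn + fp)
    sensitivity = tp / (tp + fn)  # true positive rate: diabetics correctly flagged
    specificity = tn / (tn + fp)  # true negative rate: nondiabetics correctly cleared
    return accuracy, sensitivity, specificity

# Classifier A: catches most diabetics but raises many false alarms.
a = metrics(tp=60, fn=15, tn=140, fp=52)
# Classifier B: predicts "nondiabetic" for almost everyone.
b = metrics(tp=8, fn=67, tn=192, fp=0)

print("A: acc=%.2f sens=%.2f spec=%.2f" % a)  # acc=0.75 sens=0.80 spec=0.73
print("B: acc=%.2f sens=%.2f spec=%.2f" % b)  # acc=0.75 sens=0.11 spec=1.00
```

Same ~0.75 accuracy as the paper reports, but B misses 9 out of 10 diabetics. That's the information accuracy alone hides.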

dragonwriter · 2 years ago
True, but accuracy tells us something, and an accuracy of 0.75±0.22 tells us that what we know is very close to “we have reasonable confidence that it is at least very slightly better than a coin flip”.
jncfhnb · 2 years ago
“You don’t have diabetes” is better than a coin flip
cauliflower2718 · 2 years ago
If 75% of the sample doesn't have diabetes (or has it -- either way), then a model that always guesses the majority class also scores 75% accuracy.

That is to say, don't compare to an even coin flip; compare to the baseline set by the class proportions of the attribute you care about.
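A quick sketch of those baselines, using the study's ~72% nondiabetic class split:

```python
# Baseline accuracies for a class split like the study's (~72% nondiabetic).
p = 0.72  # proportion of the majority class (nondiabetic)

# Always guessing the majority class is right on every majority-class sample.
majority_baseline = p

# A coin biased to match the class proportions is right with probability
# p*p (guess majority, truly majority) + (1-p)*(1-p) (guess minority, truly minority).
biased_coin_baseline = p**2 + (1 - p)**2

print(f"majority-class baseline: {majority_baseline:.3f}")    # 0.720
print(f"proportion-matched coin: {biased_coin_baseline:.3f}")  # 0.597
```

So the reported 0.75±0.22 has to beat 0.72, not 0.50, to mean anything.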

julian_sark · 2 years ago
Woman with Fake Voice to do Hundreds of Tests on Single Drop of Blood:

"Arrest the witch!!"

AI to Diagnose Diabetes Based on Seconds-long Snippet of Voice:

"Uh. Interesting ...!"

MacsHeadroom · 2 years ago
It's literally just identifying overweight people and then leaning into the correlation between obesity and diabetes. If you're obese, chances are you either have diabetes or will soon.
nextworddev · 2 years ago
Off topic, but I'm not sure whether this level of audio classification task can ever be done by few-shot prompting a multi-modal model. I don't think GPT4V will be there yet, but food for thought.
jncfhnb · 2 years ago
Sounds like bullshit to me.

Note that the “optimal model” includes age and BMI.

The predictive models seem to be logistic regression and naive Bayes. There’s no fancy AI here, just some basic feature summaries.

I don’t have time to go into it in much detail, but I notice they report p < 0.001 for some vocal features in the matched datasets even though the values across the two classes look awfully similar. I’m not sure what’s going on there.

zer8k · 2 years ago
Does the person just say "I have type 2 diabetes"?
silveraxe93 · 2 years ago
I _highly_ doubt this would replicate. I bet they just leaked test data somehow and it won't generalise.
mo_42 · 2 years ago
What evidence makes you believe this?

I _quickly_ checked the study [1] and didn't see anything obvious.

Given that our voices reveal so much about a person's state (think of mood and emotions), I think it's at least possible that voices could reveal something about metabolism. We just can't hear it ourselves, because hearing diabetes was never as useful to us as hearing another person's mood.

[1] https://www.mcpdigitalhealth.org/article/S2949-7612(23)00073...

silveraxe93 · 2 years ago
I didn't see anything in the study (which I also only skimmed, tbh) that is clearly wrong, yet I still dismiss the results.

It's just that I don't think this is enough evidence to change my priors, i.e. my gut tells me this is wrong and I don't buy it.

The sample is 267 people, tiny enough that I'd expect an analysis to be done with a linear model with few features. They used 14 features, but had a pipeline to select "model (out of 3), feature set, and threshold for prediction".

There are _so_ many degrees of freedom there that _by default_ I assume there's leakage.

I'd love to see this paper replicated! It would be amazing if it were true. But if I had to bet, I wouldn't give it more than a 20% chance of being true.
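The leakage worry can be illustrated with a small simulation (illustrative only, not the paper's actual pipeline): with pure-noise features and a search over feature choice and decision threshold evaluated on the same data used for selection, the best configuration still beats the majority-class baseline even though there is no signal at all.

```python
import random

random.seed(0)
n = 267  # same sample size as the study
# ~28% "diabetic" labels, mirroring the study's class split (illustrative only).
labels = [1 if random.random() < 0.28 else 0 for _ in range(n)]
# 14 features of pure noise: there is no real signal to find.
features = [[random.gauss(0, 1) for _ in range(14)] for _ in range(n)]

def accuracy(j, t):
    # Trivial "model": predict diabetic when feature j exceeds threshold t.
    preds = [1 if row[j] > t else 0 for row in features]
    return sum(p == y for p, y in zip(preds, labels)) / n

# Selecting feature and threshold on the SAME data we report accuracy on.
thresholds = [x / 10 for x in range(-20, 21)] + [10.0]  # 10.0 = "always negative"
best = max(accuracy(j, t) for j in range(14) for t in thresholds)
majority = max(sum(labels), n - sum(labels)) / n
print(f"best selected 'accuracy': {best:.3f}")
print(f"majority-class baseline: {majority:.3f}")
```

The fix is to do the selection inside cross-validation folds (nested CV) and report performance only on data the selection never touched; the paper's description doesn't make clear that it did.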

adr1an · 2 years ago
Also, the numbers are ridiculously small. They won't find any of those 14 acoustic differences again in another sample of people.
kelseyfrog · 2 years ago
Can you share the results of your power analysis?