Moral of the story kids: don't post on HN
Should they not have done so?
Take this guy, for example: was he being stupid? https://www.thesun.co.uk/health/37561550/teen-saves-life-cha...
Or this guy? https://www.reddit.com/r/ChatGPT/comments/1krzu6t/chatgpt_an...
Or this woman? https://news.ycombinator.com/item?id=43171639
This is a real thing that's happening every day. Doctors are not very good at recognizing rare conditions.
They got lucky.
This is why I wrote this blog post. I'm sure some people got lucky and an LLM gave them the right answer, and we hear about it because they go and brag about it. How many people got the wrong answer? How many of them bragged about their bad decision? This is _selection bias_. I'm writing about my embarrassing lapse of judgment because I doubt anyone else will.
"Turns out it was Lyme disease (yes, the real one, not the fake one) and it (nearly) progressed to meningitis"
What does "not the fake one" mean, I must be missing something?Lyme is a bacterial infection, and can be cured with antibiotics. Once the bacteria is gone, you no longer have Lyme disease.
However, there is a lot of misinformation about Lyme online. Some people think Lyme is a chronic, incurable disease, which they call "chronic Lyme". Often, when a celebrity tells people they have Lyme disease, this is what they mean. Chronic Lyme is not a real thing; it is a diagnosis given to wealthy people by unqualified conmen or unscrupulous doctors in response to vague, hard-to-pin-down symptoms.
I'm certainly not suggesting that you should ask an LLM for medical diagnoses, but still, someone who actually understands the tool they're using would likely not have ended up in your situation.
The real lesson here is "learn to use an LLM without asking leading questions". The author is correct, they're very good at picking up the subtext of what you are actually asking about and shaping their responses to match. That is, after all, the entire purpose of an LLM. If you can learn to query in such a way that you avoid introducing unintended bias, and you learn to recognize when you've "tainted" a conversation and start a new one, they're marvelous exploratory (and even diagnostic) tools. But you absolutely cannot stop with their outputs - primary sources and expert input remain supreme. This should be particularly obvious to any actual experts who do use these tools on a regular basis - such as developers.
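To make the "neutral framing, fresh conversation" idea concrete, here's a minimal sketch (assuming the OpenAI Python SDK; the model name and the example prompts are illustrative, not from any of the stories above):

```python
# Minimal sketch of "neutral framing, fresh conversation" prompting.
# Assumes the OpenAI Python SDK; model name and prompts are illustrative.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

def ask_fresh(question: str) -> str:
    # Each call starts a brand-new conversation: no prior messages,
    # so earlier framing can't "taint" the answer.
    response = client.chat.completions.create(
        model="gpt-4o",  # illustrative model name
        messages=[{"role": "user", "content": question}],
    )
    return response.choices[0].message.content

# Leading: presupposes the diagnosis and invites the model to agree.
print(ask_fresh("This is just a tension headache, right?"))

# Neutral: states observations and asks for a differential, not confirmation.
print(ask_fresh(
    "List possible causes of a persistent headache with neck stiffness and "
    "fever, ordered by urgency, and note which ones warrant seeing a doctor."
))
```

The point isn't the API call; it's that the neutral version asks for possibilities and escalation criteria instead of confirmation, and each question gets an untainted context.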
Yeah, no shit, Sherlock. I'd be absolutely embarrassed to even admit to something like this, let alone share "pearls of wisdom" like "don't use a machine that guesses its outputs based on whatever text it has been fed to freaking diagnose yourself". Who would have thought that an individual professional with decades of theoretical and practical training, AND actual human intelligence (or do we need to call it HGI now?), plus tons of experience, is more trustworthy, reliable, and qualified to deal with something as serious as the human body. Plus, there are hundreds of thousands of such individuals, and they don't need to boil an ocean every time they solve a problem in their domain of expertise. Compare that to a product of the enshittified tech industry, which in recent years has only given us irrelevant "apps" to live in, without addressing the really important issues of our time. Heck, even Peter Thiel agrees with this, at least he did in "Zero to One".
Using ChatGPT for medical issues is the single dumbest thing you can do with ChatGPT
Both the ChatGPT o3 and 5.1 Pro models have helped me a lot in diagnosing illnesses, given the right queries. I run many queries with different contexts / context lengths for medical questions, since they are very serious.
They also give better answers if I use medical language, because they then retrieve from higher-quality articles.
I still went to doctors and got more information from them.
I also get blood tests and an MRI before going to doctors, and the great doctors actually like that I show up prepared but still open to their diagnosis.
Even if you absolutely despise LLMs, this is just silly. The problem here isn't "AI enthusiasts"; you're getting called out for the absolute lack of nuance in your article.
Yes, people shouldn't do what you did. Yes, people will unfortunately continue doing it until they get better advice. But the correct, nuanced advice in an HN context is not "never ask LLMs for medical advice"; you will rightfully get flamed for that. The correct advice is "never trust medical advice from LLMs; it could be helpful or it could kill you".