I misused LLMs to diagnose myself and ended up bedridden for a week
Posted by shortrounddev2 16 hours ago
Comments
Comment by labrador 15 hours ago
LLM sees:
my rash is not painful
i don't think it's an emergency
it might be leftover from the flu
my wife had something similar
doctors said it would go away on it's own
i want to avoid paying a doctor
LLM: Honestly? It sounds like it's not serious and you should save your moneyComment by cogman10 15 hours ago
But I have to say that prompt is crazy bad. AI is VERY good at using your prompt as the basis for the response, if you say "I don't think it's an emergency" AI will write a response that is "it's not an emergency"
I did a test with the first prompt and the immediate answer I got was "this looks like lyme disease".
Comment by morshu9001 15 hours ago
Comment by unyttigfjelltol 15 hours ago
Comment by morshu9001 15 hours ago
At no point was I just going to commit to some irreversable decision it suggested without confirming it myself or elsewhere, like blindly replacing a part. At the same time, it really helped me because I'm too noob to even know what to Google (every term above was new to me).
Comment by observationist 15 hours ago
Comment by andrepd 15 hours ago
Llama said "syphilis" with 100% confidence, ChatGPT suggested several different random diseases, and Claude at least had the decency to respond "go to a fucking doctor, what are you stupid?", thereby proving to have more sense than many humans in this thread.
It's not a matter of bad prompting, it's a matter of this being an autocomplete with no notion of ground truth and RLHF'd to be a sycophant!
Just 100B more parameters bro, I swear, and we will replace doctors.
Comment by cogman10 15 hours ago
Comment by SoftTalker 16 hours ago
If you want to be a doctor, go to medical school. Otherwise talk to someone who did.
Comment by hn_throw2025 3 hours ago
My flareups and their accompanying setbacks have been greatly reduced because I keep a megathread chat going with Gemini. I have pasted in a symptom diary, all my medications, and I check any alterations to my food or drink with it before they go anywhere near my mouth. I have thus avoided foods that are high FODMAP, slow digesting, or surprisingly high in fat or acidity.
This has really helped. I am trying to maintain my calories, so advice like “don’t risk X, increase Y instead” is immediate and actionable.
The presumption that asking a LLM is never a good choice assumes a health service where you can always get a doctor or dietician on the other end of the phone. In the UK, consultations with either for something non-urgent can take weeks, which is why people are usually pushed towards either asking a Pharmacist or going to the local Emergency department (which is often not so local these days).
So the _real_ choice is between the LLM and my best guess. And I haven’t ingested the open web, plus countless medical studies and journals.
Comment by FloorEgg 15 hours ago
If you've seen multiple doctors, specialists, etc over the span of years and they're all stumped or being dismissive of your symptoms, then the only way to get to the bottom of it may be to take matters into your own hands. Specifically this would look like:
- carefully experimenting with your living systems, lifestyle, habits, etc. best if there are at least occasional check-ins with a professional. This requires discipline and can be hard to do well, but also sometimes discovers the best solutions. (Lifestyle change solves problem instead of a lifetime of suffering or dependency on speculative pharmaceuticals)
- doing thoughtful, emotionally detached research (reading published papers slowly over a long time, e.g. weeks, months) also very hard, but sometimes you can discover things doctors didn't consider. The key is to be patient and stay curious to avoid an emotional rollercoaster and wasting doctor time. Not everyone is capable of this.
- going out of your way to gather data about your health (logging what you eat, what you do, stress levels, etc. test home for mold, check vitals, heart rate variability, etc.)
- presenting any data you gathered and research you discovered that you think may be relevant to a doctor for interpretation
Again, I want to emphasize that taking your health matters into your own hands like this only makes sense to do after multiple professionals were unhelpful AND if you're capable of doing so responsibly.
Comment by burningChrome 15 hours ago
He basically said, "I'm not worried yet. But I would never recommend someone do that. If you have health insurance, that's what you pay for, not for Google to tell you you're just fine, you really don't have cancer."
Thinking about a search engine telling me I don't have cancer was enough to scare the bejesus out me that I swung in the completely opposite direction and for several years became a hypochondriac.
This was also fodder for a lot of stand up comedians. "Google told me I either have the flu, or Ebola, it could go either way, I don't know."
Comment by herpdyderp 15 hours ago
Comment by morshu9001 15 hours ago
Except the author did it wrong. You don't just ignore a huge rash that every online resource will say is lyme disease. If you really want to trust an LLM, at least prompt it a few different ways.
Comment by cogman10 15 hours ago
It's anything beyond that which I think needs medical attention.
Comment by SoftTalker 8 hours ago
If I had some weird symptoms that I didn't understand, or even well known warning signs for something, I'd go to a doctor. What is Google going to tell me that I can trust or even evaluate? I don't know anything about internal medicine, I'll ask someone who studied it for 8 years and works in the field professionally.
Comment by bdangubic 8 hours ago
if you can afford that, many can’t
Comment by waweic 15 hours ago
Also: Amoxicillin is better than its reputation. Three doctors might literally recommend four different antibiotic dosages and schedules. Double-check everything; your doctor might be at the end of a 12-hour shift and is just as human as you. Lyme is very common and best treated early.
Edit: Fixed formating
Comment by mttch 15 hours ago
Comment by avra 15 hours ago
Comment by cowlby 15 hours ago
Comment by daveguy 15 hours ago
Comment by olsondv 15 hours ago
Comment by malfist 13 hours ago
Last time I was in for getting hundreds of tick bites in one hike (that was fun), I was also told to avoid eating red meat until labs came back. That Alpha-gal is getting more common in my area, and the first immune response is anaphylactic in 40% of the cases, best not to risk it.
If you wonder what one side of one leg looked like during the "hundreds of tick bites on a single hike" take a gander: https://www.dropbox.com/scl/fi/jekrgxa9fv14j28qga7xc/2025-08...
That was on both legs, both sides all the way up to my knees
Comment by monerozcash 1 hour ago
Yeah, if you develop a rash from a tick bite that even remotely looks like it could be lyme, just go to a pet store to buy amoxicillin (you can get exactly the same stuff they give to humans) if you can't quickly find a doctor who'll take it seriously enough to immediately write you a prescription (unless, of course, they have a very well reasoned explanation for not doing so).
The potential consequences of not getting fast treatment are indeed so so much worse than the practically non-existent consequences of taking amoxicillin when you don't need it, unless you're a crazy hypochondriac who constantly thinks they might have lyme.
But hey, also don't blindly trust medical advice from HN commenters telling you to go buy pet store antibiotics :)
Comment by jtsiskin 15 hours ago
Comment by orwin 12 hours ago
Comment by only-one1701 16 hours ago
Comment by arjie 15 hours ago
> If you read nothing else, read this: do not ever use an AI or the internet for medical advice.
Your comment seems out of place unless the article was edited in the 10 minutes since the comment was written.
Comment by only-one1701 15 hours ago
Comment by tapete2 6 hours ago
> I have this rash on my body, but it's not itchy or painful, so I don't think it's an emergency?
If you cannot use punctuation correctly, of course you cannot diagnose yourself.
Comment by blakesterz 15 hours ago
"Turns out it was Lyme disease (yes, the real one, not the fake one) and it (nearly) progressed to meningitis"
What does "not the fake one" mean, I must be missing something?Comment by shortrounddev2 15 hours ago
Lyme is a bacterial infection, and can be cured with antibiotics. Once the bacteria is gone, you no longer have Lyme disease.
However, there is a lot of misinformation about Lyme online. Some people think Lyme is a chronic, incurable disease, which they call "chronic lyme". Often, when a celebrity tells people they have lyme disease, this is what they mean. Chronic lyme is not a real thing - it is a diagnosis given to wealthy people by unqualified conmen or unscrupulous doctors in response to vague, hard to pin symptoms
Comment by cogman10 11 hours ago
The late stage of lyme disease is painful. Like "I think I'm dying" painful. It does have a range of symptoms, but those show up like 3 to 6 weeks after the initial infection.
A lot of people claiming chronic lyme disease don't remember this stage.
Lyme disease does cause a range of problems if left untreated. But not before the "I think I'm dying" stage. It's basically impossible for someone, especially with a lot of wealth, to get lyme disease and not have it caught early on.
Consider the OP's story. They tried to not treat it but ended up thinking "OMG, I think I have meningitis and I'm going to die!".
Lyme can kill, but it rarely does. Partially because before it gets to that point it drives people to seek medical attention.
Comment by pogue 12 hours ago
Comment by cheald 15 hours ago
The real lesson here is "learn to use an LLM without asking leading questions". The author is correct, they're very good at picking up the subtext of what you are actually asking about and shaping their responses to match. That is, after all, the entire purpose of an LLM. If you can learn to query in such a way that you avoid introducing unintended bias, and you learn to recognize when you've "tainted" a conversation and start a new one, they're marvelous exploratory (and even diagnostic) tools. But you absolutely cannot stop with their outputs - primary sources and expert input remain supreme. This should be particularly obvious to any actual experts who do use these tools on a regular basis - such as developers.
Comment by shortrounddev2 15 hours ago
Comment by monerozcash 15 hours ago
I'm certainly not suggesting that you should ask LLM for medical diagnoses, but still, someone who actually understands the tool they're using would likely not have ended up in your situation.
Comment by shortrounddev2 15 hours ago
Comment by monerozcash 15 hours ago
Should they not have done so?
Like this guy for example, was he being stupid? https://www.thesun.co.uk/health/37561550/teen-saves-life-cha...
Or this guy? https://www.reddit.com/r/ChatGPT/comments/1krzu6t/chatgpt_an...
Or this woman? https://news.ycombinator.com/item?id=43171639
This is a real thing that's happening every day. Doctors are not very good at recognizing rare conditions.
Comment by shortrounddev2 15 hours ago
They got lucky.
This is why I wrote this blog post. I'm sure some people got lucky when an LLM managed to give them the right answer, because they go and brag about it. How many people got the wrong answer? How many of them bragged about their bad decision? This is _selection bias_. I'm writing about my embarrassing lapse of judgment because I doubt anyone else will
Comment by monerozcash 15 hours ago
AI saves lives, it's selection bias.
AI gives bad advice after being asked leading questions by a user who clearly doesn't know how to use AI correctly, AI is terrible and nobody should ask it about medical stuff.
Or perhaps there's a more reasonable middle ground? "It can be very useful to ask AI medical questions, but you should not rely on it exclusively."
I'm certainly not suggesting that your story isn't a useful example of what can go wrong, but I insist that the conclusions you've reached are in fact mistaken.
The difference between your story and the stories of the people whose lives were saved by AI is that they did generally not blindly trust what the AI told them. It's not necessary to trust AI to receive helpful information, it is basically necessary to trust AI in order to hurt yourself using it.
Comment by looknee 15 hours ago
Comment by monerozcash 15 hours ago
Comment by xiphias2 15 hours ago
Both ChatGPT o3 and 5.1 Pro models helped me a lot diagnosing illnesses with the right queries. I am using lots of queries with different context / context length for medical queries as they are very serious.
Also they have better answer if I am using medical language as they retrieve answers from higher quality articles.
I still went to doctors and got more information from them.
Also I do blood tests and MRI before going to doctors and the great doctors actually like that I go there prepared but still open to their diagnosis.
Comment by jfindper 15 hours ago
Comment by monerozcash 1 hour ago
The problem isn't getting medical advice from LLMs, it's blindly trusting the medical advice a LLM gives you.
You do not need to trust the LLM for it to be able to save your life, you do need to trust the LLM for it to be able to harm you.
Comment by shortrounddev2 15 hours ago
Comment by only-one1701 15 hours ago
Comment by maplethorpe 15 hours ago
Note: I haven't updated this comment template recently, so the versions may be a bit outdated.
Comment by arjie 15 hours ago
But it was just a search tool. It could only tell you if someone else was thinking about it. Chatbots as they are presented are a pretty sophisticated generation tool. If you ground them, they function fantastically to produce tools. If you allow them to search, they function well at finding and summarizing what people have said.
But Earth is not a 4-corner 4-day simultaneous time cube. That's on you to figure out. Everyone I know these days has a story of a doctor searching for their symptoms on Gemini or whatever in front of them. But it reminds me of a famous old hacker koan:
> A newbie was trying to fix a broken Lisp machine by turning it off and on.
> Thomas Knight, seeing what the student was doing, reprimanded him: "You cannot fix a machine by just power-cycling it without understanding of what is wrong."
> Knight then power-cycled the machine.
> The machine worked.
You cannot ask an LLM without understanding the answer and expect it to be right. The doctor understands the answer. They ask the LLM. It is right.
Comment by hansmayer 15 hours ago
Yeah, no shit Sherlock? I´d be absolutely embarrassed to even admit to something like this, let alone share the "wisdom perls" like "dont use a machine which guesses its outputs based on whatever text it has been fed" to freaking diagnose yourself? Who would have thought, an individual professional with decades in theoretical and practical training, AND actual human intelligence (Or do we need to call it HGI now), plus tons of experience is more trustworthy, reliable and qualified to deal with something as serious as human body. Plus there are hundreds of thousands of such individuals and they dont need to boil an ocean every time they are solving a problem in their domain of expertise. Compared to a product of entshittified tech industry which in the recent years has only ever given us irrelevant "apps" to live in, without addressing really important issues of our time. Heck, even Peter Thiel agrees with this, at least in his "Zero to one" he did.
Comment by foobarbecue 15 hours ago
Comment by monerozcash 1 hour ago
Blindly trusting medical info from LLMs is idiotic and can kill you.
Pretty much any tool will be dangerous if misused.
Comment by shortrounddev2 15 hours ago
Comment by monerozcash 15 hours ago
Comment by sofixa 15 hours ago
> You need to go to the emergency room right now".
> So, I drive myself to the emergency room
It is absolutely wild that a doctor can tell you "you need to go to the emergency right now", and that is an act left to someone who is obviously so unwell they need to go to the ER right now. With a neck so stiff, was OP even able to look around properly while driving?
Comment by morshu9001 15 hours ago
Comment by jeffbee 15 hours ago
Comment by ikrenji 15 hours ago
Comment by OutOfHere 10 hours ago
It is up to you to query them for the best output, and put the pieces together. If you bias them wrongly, it's your own fault.
For every example where an LLM misdiagnosed, a PCP could do much worse. People should think of them as idea generators, subjecting the generated ideas to diagnostic validation tests. If an idea doesn't pan out, keep querying until you hit upon the right idea.
Comment by Escapado 15 hours ago
Now, I live in Germany where in the last 20 years our healthcare system has fallen victim to neoliberal capitalism and since I am publicly insured by choice I often have to wait for weeks to see a specialist so more often than not LLMs have helped me stay calm and help myself as best as I can. However I still view the output less as a the output or a medical professional and try to stay skeptic along the way. I feel like the augment my guesswork and judgement, but not replace it.
Comment by lenerdenator 15 hours ago
YouTuber ChubbyEmu (who makes medical case reviews in a somewhat entertaining and accessible format) recently released a video about a man who suffered a case of brominism (which almost never happens anymore) after consulting an LLM. [0]
Comment by abstractspoon 6 hours ago
Comment by shortrounddev2 14 hours ago
Moral of the story kids: don't post on HN
Comment by monerozcash 1 hour ago
Even if you absolutely despise LLMs, this is just silly. The problem here isn't "AI enthusiasts", you're getting called out for the absolute lack of nuance in your article.
Yes, people shouldn't do what you did. Yes, people will unfortunately continue doing what you did until they get better advice. But the correct nuanced advice in a HN context is not "never ask LLMs for medical advice", you will rightfully get flamed for that. The correct advice is "never trust medical advice from LLMs, it could be helpful or it could kill you".
Comment by Gibbon1 15 hours ago
Fuck man if this is you go to the ER.
Comment by reenorap 15 hours ago
Comment by WhyOhWhyQ 15 hours ago
Comment by jfindper 15 hours ago
I cannot believe that the top-voted comment right now is saying to not trust doctors, use an LLM to diagnose yourself and others.
Comment by vorpalhex 15 hours ago
I'll share mine.
Unusual severe one sided eye pain. Go to regular doctor's, explain, get told it's a "stye" and do hot compresses.
Problem gets worse, I go to urgent care. Urgent care doc takes one look at me and immediately sends me to the ER saying it's severe and she can't diagnose it because she is unqualified.
Go to ER, get seen by two specialists, a general practicioner and a gaggle of nurses. Get told it's a bad eye infection, put on strong steroids.
Problem gets worse (more slowly at least).
Schedule an urgent appoint with an opthalmologist. For some reason the scheduling lady just like, comprehends my urgency and gets me a same day appointment.
Opthalmologist does 5 minutes of exam, puts in some eye drops and pain is immediately gone. She puts me on a very serious steroid with instructions to dose hourly and visit her daily. Only reason I am seeing out of both eyes today.
As the top comment says, do not just "trust" Doctors. About 70% of hospital deaths are due to preventable mistakes in the hospital. People who are invested in their own care, who seek second opinions, who argue (productively) with their doctor have the best outcomes by far.
Nobody said not to work with doctors, but blindly trusting a single doctor will seriously harm your outcomes.
Comment by jfindper 14 hours ago
It's awful that you had a bad experience, but no. Nowhere near 70% of hospital deaths are from preventable mistakes.
I would also note that in your experience, you ended up trusting a different doctor (ophthalmologist), not ChatGPT. Second opinions from other qualified professionals is a thumbs up from me.
Comment by malfist 13 hours ago
Just like I wouldn't go to my podiatrist to treat a complex case of rosecea, urgent care and GCP aren't for specialized, complex and rare cases.
Comment by ponector 12 hours ago
Comment by hiyer 12 hours ago
Comment by daveguy 15 hours ago
Sure sounds like an asspull stat. Extraordinary claims, extraordinary evidence. Do you have a reference for that? Care to share?
Comment by dingnuts 14 hours ago
Comment by dingnuts 14 hours ago
Comment by nataliste 13 hours ago
Comment by tomhow 10 hours ago
Be kind. Don't be snarky. Converse curiously; don't cross-examine. Edit out swipes.
Comments should get more thoughtful and substantive, not less, as a topic gets more divisive.
When disagreeing, please reply to the argument instead of calling names. "That is idiotic; 1 + 1 is 2, not 3" can be shortened to "1 + 1 is 2, not 3."
Please don't fulminate. Please don't sneer, including at the rest of the community.
Please respond to the strongest plausible interpretation of what someone says, not a weaker one that's easier to criticize. Assume good faith.
Eschew flamebait. Avoid generic tangents. Omit internet tropes.
We have to ban accounts that continue to comment like this, so please take a moment to remind yourself of the guidelines and make an effort to observe them if you want to keep participating here.
Comment by WhyOhWhyQ 12 hours ago
LLM's are heavily biased by leading questions, which is not a good property in a doctor. They also tend to respond in a way to please the prompter, which might be a frustrated parent, not in a way which is impartial and based on good medical practices. They frequently speak with high confidence and low accuracy.
Comment by jfindper 13 hours ago
>The problem is you're all dumb and getting dumber.
But yeah, you're the only smart person here. Clap clap.
Comment by nataliste 13 hours ago
Like clockwork.
Comment by jfindper 13 hours ago
Genius call.
Comment by nutjob2 4 hours ago
Usually you've got less than a 50-50 chance of getting useful help from a doctor on any one visit.
In my case multiple doctor's treatment of my hypertension was largely useless because it followed the "typical" treatment. I only found what worked for me by accident.
In other cases my own research was far superior to a regular doctor's visit.
I can also quote the experience of a family member whose research into a major surgery paid huge dividends and if they had followed the advice of the first 2 doctors the surgery would very likely have failed.
The chances of getting correctly diagnosed and treated for a problem that is not obvious is surprisingly close to zero unless you have a very good doctor who has plenty of time for you.
Comment by reenorap 15 hours ago
Comment by jfindper 15 hours ago
They did not say that in their comment.
Replying in obvious bad faith makes your original comment even less credible than it already is.
Comment by nataliste 15 hours ago
Comment by SketchySeaBeast 15 hours ago
They make an argument to always "verify", but if your first source is ChatGPT, where are you verifying next? And why not go their first?
Comment by nataliste 14 hours ago
And gee, what are people supposed to do when they encounter potentially unreliable information from a generic non-vetted source? The same thing you do with literally any other one:
https://usingsources.fas.harvard.edu/what%E2%80%99s-wrong-wi...
> If you do start with Wikipedia, you should make sure articles you read contain citations–and then go read the cited articles to check the accuracy of what you read on Wikipedia. For research papers, you should rely on the sources cited by Wikipedia authors rather than on Wikipedia itself.
> There are other sites besides Wikipedia that feature user-generated content, including Quora and Reddit. These sites may show up in your search results, especially when you type a question into Google. Keep in mind that because these sites are user-authored, they are not reliable sources of fact-checked information. If you find something you think might be useful to you on one of those sites, you should look for another source for this information.
> The fact that Wikipedia is not a reliable source for academic research doesn't mean that it's wrong to use basic reference materials when you're trying to familiarize yourself with a topic. In fact, the Harvard librarians can point you to specialized encyclopedias in different fields that offer introductory information. These sources can be particularly useful when you need background information or context for a topic you're writing about.
This isn't rocket science.
Comment by avra 15 hours ago
The author of the blog post also mentioned they tried to avoid paying for an unnecessary visit to the doctor. I think the issue is somewhere else. As a European, personally I would go to the doctor and while sitting in the waiting room I would ask an LLM out of curiosity.
Comment by beAbU 6 hours ago
But what do I know.
Comment by andrepd 15 hours ago
Comment by ludicrousdispla 1 hour ago
Comment by sofixa 15 hours ago
But it is a sycophant and will confirm your suspicions, whatever they are and regardless if they're true.
Comment by hn_throw2025 3 hours ago
In my experience to date, ChatGPT really is a sycophant. Claude can be moderately stubborn. Gemini is usually very stubborn, practically unmovable, unless you present new facts or make a rock solid counter-argument.
Comment by marcellus23 15 hours ago
Comment by nataliste 15 hours ago
Comment by cogman10 15 hours ago
... Um what?
The only way to diagnose a fractured arm is an xray. You can suspect the arm is fractured (rotating it a few directions) but ultimately a muscle injury will feel identical to a fracture especially for a kid.
Please, if you suspect a fracture just take your kid to the doctor. Don't waste your time asking ChatGPT if this might be a fracture.
This just feels beyond silly to me imagining the scenario this would arise in. You have a kid crying because their arm hurts. They are probably protectively holding it and won't let you touch it. And your first instinct is "Hold on, let me ask chatgpt what it thinks. 'Hey chat GPT, my kid is her crying really loud and holding onto their arm. What could this mean?'"
What possessed you to waste time like that?
Comment by reenorap 15 hours ago
Comment by cogman10 15 hours ago
Because the way you phrased it with the article in question made it sound like you hadn't first gone to the doctor. This isn't a question about doctors being fallible or not but rather what first instincts are when medical issues arise.
> uneducated comment filled with weird assumptions.
No, not uneducated nor were these assumptions weird as other commenters obviously made the same ones I did.
I'll not delete my comment, why should I? The advice is still completely valid. Go to the doctor first, not GPT.
Comment by jfindper 15 hours ago
Hilariously, this is the second time you posted this exact line, to yet another person who _didn't say this_!
Comment by shortrounddev2 15 hours ago
Using ChatGPT for medical issues is the single dumbest thing you can do with ChatGPT
Comment by hn_throw2025 4 hours ago
Guess what happened.
Comment by ndsipa_pomu 4 hours ago
Comment by hn_throw2025 3 hours ago
And they mostly just come to a conclusion, rather than actionable advice I was able to pass on, like take a bag and avoid eating because they might want to operate soon.
Comment by andrepd 4 minutes ago
But it's 2025, so old fashioned.
Comment by ndsipa_pomu 32 minutes ago
Comment by buellerbueller 16 hours ago
(Also, it is the fault of the LLM vendor too, for allowing medical questions to be answered.)
Comment by morshu9001 15 hours ago
Comment by robrain 15 hours ago
Comment by monerozcash 15 hours ago
This should be a configurable option.
Comment by morshu9001 15 hours ago
Comment by measurablefunc 15 hours ago
Comment by buellerbueller 15 hours ago
Comment by Starlevel004 15 hours ago
I completely disagree. I think we should let this act as a form of natural selection, and once every pro-AI person is dead we can get back to doing normal things again.
Comment by tsoukase 14 hours ago