Medicine Lives in Uncertainty & AI Is Making That Impossible to Ignore
When I was running telemedicine during the COVID surge in Philadelphia, our patient volume jumped 650% in a single week. Practically overnight, we went from a small program to a primary way of seeing patients, screening them, and offering testing. I didn’t expect a pandemic, of course, but I had always believed in telehealth. What I didn’t expect was how this shift would fundamentally change the doctor-patient relationship in ways that had nothing to do with the technology itself.
Traditionally, the messy process of medical decision-making stayed mostly hidden from patients. A doctor would examine you, think through possibilities, maybe consult a colleague, and then present a clear recommendation. That thinking happened back at our desks or when we stepped out of the room. On a video call, much of it has to happen during the visit itself; there is no ‘stepping out and back in’ in telemedicine. Granted, a patient still cannot see what we might be searching off-screen, over our shoulder.
For years, doctors pulled up clinical guidelines ‘back at their desk’. Even if patients heard us say “let me look that up” or “I want to double-check something”, they would not necessarily see us do it. That changed once we were working in our EMRs while sitting in the room with patients, and again in telehealth visits, where, in the spirit of ‘Telehealth Best Practices’, we narrate to the patient what we are doing as we do it.
Absolute Knowledge Doesn’t Exist
Medicine has always lived in uncertainty. Every diagnosis is fundamentally a probability assessment. Every treatment plan is an educated bet based on available evidence, clinical experience, and pattern recognition. We use phrases like “most likely” and “in my clinical judgment” because that’s what medicine actually is—educated guessing guided by data and experience, not absolute knowledge.
Absolute knowledge is an ideal we aim toward; humans will never actually reach it. Medicine is no exception.
A patient comes in with chest pain. Is it a heart attack, anxiety, reflux, or muscle strain? We run tests to rule things out, weigh the likelihoods based on what we find, and make our best judgment call. Sometimes we’re right immediately. Sometimes we need to adjust our thinking as we get more information. Sometimes we miss things until it’s too late, and we have to reckon with that failure—we train to make this as rare as possible. Emergency Medicine does all of this quickly, and doing it well under time pressure is what separates the experienced from those still learning. This is the reality of clinical practice, even though we rarely talk about it so directly.
Every medical student learns ‘differential diagnosis’: ranking the possible causes of a presentation by likelihood, then revising that ranking as new information arrives. An LLM reasons with probabilities much the way your MD does when weighing history, imaging, and lab work.
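To make that concrete, here is a minimal sketch of the Bayesian updating that sits underneath a differential. Everything in it is illustrative: the 10% pretest probability and the test’s 90% sensitivity and 80% specificity are made-up teaching numbers, not clinical figures.

```python
def posttest_probability(pretest: float, sensitivity: float, specificity: float,
                         positive_result: bool) -> float:
    """Update a disease probability with one test result via Bayes' theorem."""
    if positive_result:
        likelihood_ratio = sensitivity / (1 - specificity)
    else:
        likelihood_ratio = (1 - sensitivity) / specificity
    pretest_odds = pretest / (1 - pretest)
    posttest_odds = pretest_odds * likelihood_ratio
    return posttest_odds / (1 + posttest_odds)

# Illustrative numbers only: 10% pretest suspicion, a hypothetical test
# with 90% sensitivity and 80% specificity.
print(posttest_probability(0.10, 0.90, 0.80, positive_result=True))   # ~0.33
print(posttest_probability(0.10, 0.90, 0.80, positive_result=False))  # ~0.01
```

Notice that even a positive result leaves the probability around one in three. That is why we say “most likely” instead of “definitely”: each test shifts the odds, and almost none of them settle the question.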
Medicine has spent centuries building rituals and structures to manage this uncertainty: peer-reviewed research, specialized language, structured risk-benefit analysis, and so on. These aren’t just arbitrary traditions; they are part of how we’ve managed the psychological burden of making high-stakes decisions with incomplete information, and how we’ve maintained patient trust despite that inherent uncertainty.
How AI Changes the Picture
AI was supposed to make medicine more certain. Or that’s what I keep hearing from all the tech companies and enthusiasts out there. The promise was algorithms that could analyze millions of data points, spot patterns that humans miss, and deliver definitive diagnoses based on comprehensive analysis. Instead, what’s actually happening is that AI is exposing just how uncertain medicine has always been, but in ways we weren’t prepared for.
Consider what happens when an AI reads a mammogram and flags an area as having an 87% probability of malignancy. The radiologist reviews the same image and thinks it’s closer to 60% likely to be cancer. Now we have two probability estimates for the same finding. Which one should guide the clinical decision? The algorithm that’s been trained on 100,000 mammograms, or the human radiologist who’s been reading films for 20 years and has developed clinical intuition that’s hard to quantify? There’s no obvious answer, and suddenly the uncertainty that was always there becomes much more visible and harder to resolve.
Or think about what happens when we run the same medical imaging study through three different AI systems and get three different probability scores. The variation between these systems makes it clear that we’re not dealing with objective truth but with different interpretations of the same data. This has always been true in medicine—ask three doctors the same question and you might get three slightly different answers—but AI makes it quantifiable and impossible to ignore.
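One way to see how much these systems actually disagree is to put their scores side by side and pool them in log-odds space, where a jump from 90% to 99% counts for more than one from 50% to 59%. This is a toy sketch with made-up scores and a deliberately simple pooling rule; it is not how any real vendor or PACS integration works.

```python
import math

def to_log_odds(p: float) -> float:
    return math.log(p / (1 - p))

def from_log_odds(z: float) -> float:
    return 1 / (1 + math.exp(-z))

# Hypothetical malignancy scores from three AI systems reading the same study.
scores = {"system_a": 0.87, "system_b": 0.60, "system_c": 0.72}

# One simple (and contestable) pooling rule: average in log-odds space.
pooled = from_log_odds(sum(to_log_odds(p) for p in scores.values()) / len(scores))
print(f"pooled estimate: {pooled:.2f}")                                  # ~0.75
print(f"spread: {min(scores.values()):.2f} to {max(scores.values()):.2f}")
```

The pooled number looks tidy, but it manufactures agreement that isn’t there. The spread between the systems is the honest signal, and it is exactly the kind of disagreement doctors have always had with one another.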
The symptom checker problem illustrates this in an even more troubling way. A patient enters their chest pain symptoms into an app and gets told it’s “likely anxiety.” Six hours later, they’re in the emergency room with a heart attack. The app wasn’t necessarily wrong in some absolute sense—chest pain can be anxiety and the algorithm was working with the limited information the patient provided. But the patient trusted the certainty of that algorithmic assessment more than their own instinct that something was seriously wrong, and that trust nearly cost them their life.
This is where AI becomes genuinely dangerous. When a doctor tells you “I think this is probably anxiety, but if your symptoms get worse or you’re worried, don’t hesitate to go to the ER,” you hear the hedging and uncertainty in that statement. You understand that there’s clinical judgment involved, that the doctor might be wrong, that you should stay alert to how you’re feeling.
But when an app gives you “Anxiety - 85% confidence,” it feels like the question has been definitively settled. You think the machine knows something certain, but it doesn’t. It’s making the same kind of probabilistic guess that the human doctor was making, just expressing it with mathematical precision that creates an illusion of certainty.
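There is a technical name for whether that 85% means anything: calibration. A calibrated system is right about 85% of the time on the cases it labels 85%. Here is a minimal sketch of how one might check that, using entirely hypothetical predictions and outcomes:

```python
from collections import defaultdict

def calibration_table(predictions, outcomes, bins=5):
    """Bin predictions by confidence and compare stated vs. observed rates."""
    buckets = defaultdict(list)
    for p, y in zip(predictions, outcomes):
        buckets[min(int(p * bins), bins - 1)].append((p, y))
    for b in sorted(buckets):
        pairs = buckets[b]
        stated = sum(p for p, _ in pairs) / len(pairs)
        observed = sum(y for _, y in pairs) / len(pairs)
        print(f"stated ~{stated:.0%}, observed {observed:.0%} (n={len(pairs)})")

# Hypothetical app output: "85%" calls that turn out right only half the time.
preds = [0.85, 0.85, 0.85, 0.85, 0.30, 0.30, 0.30, 0.30]
truth = [1, 0, 1, 0, 0, 0, 1, 0]
calibration_table(preds, truth)
```

An app whose 85% calls are right only half the time isn’t offering knowledge; it’s offering miscalibrated confidence dressed up in mathematical precision, and nothing about the number on the screen tells the patient which kind they’re getting.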
This entire scenario, patients having MIs while a symptom checker reassures them it’s anxiety, keeps me up at night.
https://www.linkedin.com/pulse/medicine-lives-uncertainty-ai-making-impossible-joshi-md-msc-facep-lblfc/

