AI and the Disconnected Doctor
By Anna King, DO, MPH, Physical Medicine & Rehabilitation, Pediatric Orthopedic Medicine
Let’s say that even before your doctor enters the room, an AI-enhanced system has already reviewed your intake form, compiled your history and scanned for any relevant outside consults, phone calls and tests. It combines all of this with your recent diagnostic imaging to construct an assessment of your current problem, suggesting differential diagnoses, followed by a treatment plan. Your doctor walks in, still glancing at the screen, smiles, greets you, and clarifies one small point – “Where did you do PT the last time this occurred?” She proceeds to explain the computer-generated plan and directs you to check-out.
On your drive home, you’re relieved to have answers and a treatment strategy. Still, you feel unsettled. Something about today’s visit felt different, maybe even off, though not quite to the point of dissatisfaction. Maybe “disconnected” would be the word? Oh! No exam. She didn’t do a physical exam. You feel a bit cheated, confused and slightly foolish for not having advocated for one.

But there’s something else. As you pull into your driveway, your mind rests for a moment. Could it be an emptiness that you’re feeling? Possibly. It’s as if she weren’t even there. The whole encounter felt devoid of connection. The doctor did her job, but you don’t feel cared for. Your mind wanders as you begin to question whether this perceived difference has somehow affected your actual care. You wonder if her diagnosis is accurate. Recalling a past misdiagnosis by a different doctor who had also bypassed the physical exam, you find yourself second-guessing what just occurred. Did she even order the most appropriate X-ray views? You didn’t see her personally interpret them to confirm the diagnosis, nor did she review them with you. Wait a minute – is it possible that the computer did all the work, all the thinking?

You resolve to follow up with a phone call later today, hoping for reassurance. As you step out of the car with renewed purpose, you can’t shake the subtle uneasiness. How might this type of care affect you when you’re really sick? After all, this current issue is pretty minor. With every passing hour, you can’t help but wonder whether your doctor felt just as alienated from you as you did from her.
There is nothing personal about a patient encounter when your doctor is not personally part of your patient encounter.
As a patient, I’ve felt this. As a doctor, I’ve seen it. And while I admire the efficiency that augmented intelligence platforms can offer, I also feel that something essential is being lost in the process: human connection, presence, and the opportunity to heal through more than just data. One of the most gratifying aspects of patient care is being not just a caretaker but a healer, and you can only be a healer through personal connection and human interaction. The truth is, many physicians don’t want to be reduced to technicians who simply deliver and explain machine-generated decisions. We chose this vocation to care for people by thinking critically about them while connecting with our own compassion. When done right, a patient will leave the encounter feeling better in ways no automaton can accomplish. It is in this space that physicians’ reservations about AI take root. We remain open-minded and hopeful, but also somewhat conflicted, concerned about how far the pendulum may swing.
The demands of EHR documentation, historically high patient volumes, and reimbursement challenges have compelled practices to put throughput ahead of thoughtfulness (quantity over quality), and they likely contribute to the uncertainty surrounding the clinical application of AI. Consequently, physicians rush through complex visits or work late into the night to meet performance metrics, at the cost of autonomy and the pursuit of high-quality, personalized care. This unhealthy and exhausting practice pattern has been normalized, even defended as progress. But it is devoid of meaning and drives many from the profession. To be clear, the problem is the loss of control and the perceived disregard for our values and professional input, not so much the workload or pay, as is often assumed.
This is where emerging technology and AI can be a solution rather than an enigma. It can be our partner rather than a threat, an enhancement rather than a replacement. But let’s approach it thoughtfully and with cautious optimism. Even if AI eases some of the existing burdens, we don’t want it to inadvertently foster new assumptions about what doctors are now capable of accomplishing in the same amount of time with more “help.” If unchecked, the heightened demands and expectations will further distance us from our patients and exacerbate the existing problems.
We need change, not just in technology itself, but in how it’s applied and in who is designing it. Doctors are not opposed to innovation, nor completely reluctant to incorporate AI. We welcome tools that genuinely help us do our jobs better and support our scrupulous nature. But we must have a voice in shaping these tools (all of us, not just those with a keen interest or background in medical technology). Clinical physicians want to be involved and invited to share practical ideas and valuable insight for successful EHRs and care delivery models. We want to collaborate and learn from you, the experts in technological research and development.
We envision platforms that allow flexibility – tools that adapt to different practice styles, clinical settings and patient needs. Let physicians choose how much decision support they want. Let them preserve the space to examine, to listen, to think critically and independently. Don’t assume that what helps a patient also helps the physician. In the setting of rising patient expectations, let’s vow to honor our doctors’ time and energy by designing platforms that reasonably meet patients’ needs, but that also set healthy professional expectations and appropriate boundaries for interactions. Finally, let’s have honest discussions about which existing technological “advances” tend to be more burdensome than helpful in a clinical environment.
We need to invest in our doctors. This means collaborating to design augmented intelligence platforms and clinical decision support tools specifically for physicians, not beefed-up versions meant to support other allied health professionals. If we care about excellent patient outcomes, we must also care about the well-being of the doctors working hard to deliver them. Our collective goal should be to value physicians’ input, honor their time, protect their autonomy and preserve their connection to the work that gives them purpose.
It is impossible to automate the sacred bond between a doctor and patient. And when that bond is gone, patients feel it – even if they can’t identify it. If we’re truly committed to improving healthcare, we must build systems that support both sides of the exam table. AI and physicians need to be two sides of the same coin.