Using AI to Improve Doctor-Patient Communication: Insights from a New Study

There's a peculiar form of modern purgatory between seeing your lab results online and hearing your doctor's explanation. It's a common story I've heard from family and friends: staring at an array of numbers in the patient portal, some ominously flagged in red, while the hours tick by before the doctor calls, as if this anxiety-inducing wait were just business as usual.

This isn't merely about impatience. Having worked for a pharmaceutical company specializing in rare diseases, I've seen patients grapple with life-altering diagnoses and transform overnight into amateur medical researchers. They scour the internet and message boards for complex terminology, treatment options, and others' experiences, like students cramming for an all-too-real, life-or-death exam.

It's in these moments – waiting for test results or overwhelmed by new diagnoses – that our healthcare system shows its seams. We have cutting-edge treatments and brilliant doctors, but our communication methods seem stuck in the past.

This, some suggest, is an opportunity for artificial intelligence.

AI promises to be healthcare's ultimate multitasker: a tireless medical librarian, a lightning-fast analyst, and a 24/7 communication channel. Imagine immediate, comprehensible explanations of your lab results, or an AI assistant curating the latest research on your condition.
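To make that first promise concrete, here is a minimal sketch of what automated lab-result explanation could look like. The reference ranges and phrasing below are hypothetical placeholders; a real system would draw on validated clinical sources and would never substitute for a clinician's interpretation.

```python
# A minimal sketch of automated lab-result explanation.
# The reference ranges below are illustrative placeholders only.

REFERENCE_RANGES = {            # (low, high, unit) -- hypothetical values
    "hemoglobin": (12.0, 17.5, "g/dL"),
    "wbc":        (4.5, 11.0, "10^3/uL"),
    "glucose":    (70.0, 99.0, "mg/dL"),
}

def explain_result(test: str, value: float) -> str:
    """Return a plain-language, non-diagnostic note for one lab value."""
    low, high, unit = REFERENCE_RANGES[test]
    if value < low:
        status = f"below the typical range ({low}-{high} {unit})"
    elif value > high:
        status = f"above the typical range ({low}-{high} {unit})"
    else:
        status = f"within the typical range ({low}-{high} {unit})"
    return (f"Your {test} is {value} {unit}, which is {status}. "
            "Your care team will discuss what this means for you.")

print(explain_result("glucose", 112.0))
```

Even something this simple could turn a red flag in a portal into a sentence a patient can actually read while waiting for the phone to ring.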

But as anyone who's tried having a long conversation with a computer knows, AI isn't the best or most trustworthy communication partner. I'm not sure we can teach a machine to care or to feel empathy. But we might at least use it to help our doctors communicate better with us.

The AI Doctor Will See You Now (Sort Of)

A team at the University of Utah recently tackled this challenge, focusing on using AI to communicate prognoses for patients with advanced tumors. Their goal wasn't to replace doctors with AI but to give doctors better tools for difficult conversations.

Taking a "user-centered" approach, they asked doctors what they needed. The researchers note, "Our process resulted in an interface that supports clinical use of AI by leveraging 2 recommended strategies to bridge clinician needs and developer goals: contextualize model output, and enable holistic patient assessment by providing context and cohort-level information."

[Figure: the user-centered design model used by the researchers.]

Surprisingly, doctors weren't interested in long explanations of AI decision-making. They wanted help interpreting and explaining results to patients – less "show your work" and more "help me help my patient understand." The study reports:

Unlike the focus on explainability reported by others in settings where [clinical decision support systems] supports urgent decision-making by clinicians, our process resulted in a design that prioritized interpretability over explainability.

The result? An interface resembling a well-designed infographic rather than a statistical readout. It provides a visual snapshot of a patient's prognosis with recommended next steps – a smart, somber version of a "You Are Here" map.
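The study's interface isn't public code, but the idea of "cohort-level context" is easy to illustrate. Here is a minimal sketch, with invented numbers, of how a model's output for one patient might be situated within a cohort of similar patients rather than presented as a bare probability:

```python
# A minimal sketch of "cohort-level context," using a hypothetical
# model score and an invented cohort; not the study's interface logic.
from bisect import bisect_left

def cohort_percentile(patient_score: float, cohort_scores: list[float]) -> float:
    """Where does this patient's predicted score fall among similar patients?"""
    ranked = sorted(cohort_scores)
    return 100.0 * bisect_left(ranked, patient_score) / len(ranked)

# Illustrative numbers: model-estimated 6-month survival probabilities.
cohort = [0.35, 0.48, 0.52, 0.61, 0.67, 0.72, 0.78, 0.83, 0.88, 0.91]
patient = 0.72

pct = cohort_percentile(patient, cohort)
print(f"Estimated 6-month survival: {patient:.0%}, "
      f"higher than about {pct:.0f}% of similar patients.")
```

Framing a score as a position within a comparable group, rather than as a naked number, is one small way to support the "holistic patient assessment" the researchers describe.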

Crucially, doctors wanted a tool to help them "tell a story" to patients. As one oncologist in the study noted, "What I really like about graphs like in the bottom right, is it completely removes any external contextualization and it just allows the patient to see the information, to absorb it and to decide for themselves what the qualitative term is going to be...does this look...favorable for me or not?"

This underscores a vital point: AI in healthcare isn't about replacing the human touch but amplifying it.

AI Where It Makes Sense

Not every doctor's visit should become a three-way chat with HAL 9000. We should consider a tiered approach to AI in healthcare.

For routine tasks – appointment scheduling, reminders, and basic health education – AI can take the lead. It's like having a tireless nurse handling day-to-day tasks.

But for high-stakes situations – discussing treatment options for serious diagnoses – AI takes a backseat. Here, it's a behind-the-scenes supporter, providing doctors with up-to-date information and ensuring no important points are missed.
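As a thought experiment, the tiering itself can be expressed in a few lines. This sketch uses hypothetical task labels; in practice, the triage criteria would be set by clinical governance, not a lookup table:

```python
# A minimal sketch of tiered routing, assuming hypothetical task labels.
from enum import Enum

class Tier(Enum):
    AI_LED = "AI handles directly, clearly labeled as automated"
    CLINICIAN_LED = "Clinician leads; AI assists behind the scenes"

# Illustrative whitelist of low-stakes tasks the AI tier may own outright.
ROUTINE_TASKS = {"appointment_scheduling", "visit_reminder", "basic_health_education"}

def route(task: str) -> Tier:
    """Route routine tasks to the AI tier; default everything else to a clinician."""
    return Tier.AI_LED if task in ROUTINE_TASKS else Tier.CLINICIAN_LED

print(route("visit_reminder").value)                # AI handles directly...
print(route("treatment_options_discussion").value)  # Clinician leads...
```

Note the design choice: anything not explicitly classified as routine defaults to the clinician-led tier, so the system fails toward human judgment rather than away from it.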

The key is transparency. Patients want AI assistance that is upfront about what it is. No virtual avatars masquerading as humans: just clear, helpful information that respects the patient's intelligence and the gravity of the situation. As the researchers emphasize, "Policies and technical solutions should enable AI-based CDS systems to respectfully and transparently support users (including clinicians, as well as patients and caregivers) to appropriately interpret outputs and recommendations."

It's a delicate balance, using technology to enhance human connection rather than replace it. Done right, we might create a healthcare system that's not only more efficient but also more humane.

Ethical Considerations

The potential risks and ethical implications of AI in healthcare are as numerous as the side effects rattled off at the end of a drug commercial. Let's focus on a few key ones:

  1. Privacy: AI systems need vast patient data to work effectively. This data is usually dispersed across institutions and systems. Balancing data aggregation and synthesis with patient privacy can feel like trying to gossip about everyone in town without anyone finding out.
  2. Bias: AI systems reflect their training data. If that data is biased, so are the results – like learning about world cuisine by eating at only one restaurant (a minimal audit sketch follows this list).
  3. Accountability: If AI-generated content or recommendations lead to a poor outcome, who's responsible? The doctor? The developers? The hospital administrator? It can become a high-stakes game of hot potato.
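On the bias point, even a crude audit beats none. Here is a minimal sketch, using fabricated toy data, of checking whether a model's accuracy differs across demographic subgroups:

```python
# A minimal sketch of a subgroup bias audit on fabricated toy data;
# illustrative only, not a validated fairness methodology.
from collections import defaultdict

def accuracy_by_group(records):
    """records: (group, predicted, actual) triples -> accuracy per group."""
    hits, totals = defaultdict(int), defaultdict(int)
    for group, predicted, actual in records:
        totals[group] += 1
        hits[group] += int(predicted == actual)
    return {g: hits[g] / totals[g] for g in totals}

toy = [("A", 1, 1), ("A", 0, 0), ("A", 1, 1), ("B", 1, 0), ("B", 0, 0)]
print(accuracy_by_group(toy))  # {'A': 1.0, 'B': 0.5} -- a gap worth investigating
```

Real fairness auditing involves far more than accuracy gaps, but a per-group breakdown is a reasonable first look at whether a system learned its lessons from only one restaurant.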

The Utah researchers addressed these concerns with transparency. Their system is a tool for doctors, not a replacement, meant to augment human decision-making, not automate it. They emphasized clear communication about when and how AI is used – no virtual or avatar doctors here.

The Future is Here (Sort Of)

The integration of AI in healthcare is not a distant possibility; it's an ongoing reality, advancing at a rapid pace. As these innovations continue to reshape the medical landscape, we must remain committed to human-centered design. The University of Utah study exemplifies this ethos, demonstrating how AI tools can be developed with the needs of both healthcare providers and patients at the forefront.

The potential benefits are profound. AI has the power to significantly improve patient outcomes by providing faster, more accurate diagnoses and personalized treatment plans. For healthcare providers, AI-assisted tools can alleviate the administrative burden that contributes to burnout, allowing doctors to focus more on patient care. As one clinician in the study noted, AI tools could be "useful anytime [I'm]... having a prognostic discussion with the patient, particularly when the prognosis is poor or changing."

Perhaps most importantly, AI has the potential to transform healthcare communication. By providing clear, accessible information and freeing up healthcare providers' time, AI can facilitate more meaningful interactions between doctors, patients, and caregivers. 

As we move forward, our focus should remain on developing AI systems that enhance, rather than replace, the human elements of healthcare. The goal is not to create a healthcare system run by algorithms, but one where technology amplifies the best aspects of human care—empathy, intuition, and personal connection.

The future of healthcare is neither purely digital nor solely human—it's a thoughtful synthesis of both. By prioritizing human-centered design, focusing on improved outcomes, and enhancing communication, we can use AI to create a healthcare system that is not just more efficient, but more effective for all.