Artificial intelligence, big data, and natural-language processing hold great promise, including the promise of transforming clinicians from data-entry clerks back into compassionate listeners who assess complex interactions with patients, aided by algorithms drawing on electronic health record data. The potential for AI to augment and distill the information clinicians have available for a diagnosis, and the knowledge we can acquire about a patient's health before they ever enter the exam room, is reason for optimism.
But we doctors can't forget that these are individuals in front of us. And often, the key to a diagnosis or treatment lies in the story a patient hasn't told yet, or in something a machine can't understand.
Even more important: What if the patient you’re treating isn’t in the middle of the clinical trial bell curve?
These technological advances are too often built on demographic samples that don't match an individual patient's profile. Precision medicine is limited by findings from trials that often leave out nonwhite patients with multiple health issues. If I'm looking at an AI-generated diagnostic recommendation or course of treatment, I have to wonder about the data that fueled the finding. If it was built on research studies that did not reflect a diverse patient population, what should I make of the information being returned? And how am I, as a doctor, to know what went into the machine?
Although we are making progress, the truth is that after all these years, clinical trials still don't include diverse populations, and those trials inform much of the AI available today. Until we reach the elusive goal of including "all of us" in clinical trials, I will look for ways to bridge the gaps in our AI-enhanced understanding of health: incorporating a sense of uncertainty into the checks and balances of my practice, and leveraging my knowledge of my patients' community to account for the influence of factors outside the clinic or hospital.
While machine learning may be able to spot patterns in x-rays or mammograms and aid in a diagnosis, only a physician can work with a patient to determine the best course of treatment for that individual. Wearable technology might one day sense physiological changes and alert health-care professionals that a patient is suffering from an overdose, but only a trained psychiatrist can help that patient work through the social, economic, and mental-health issues that compound a substance use disorder.
For now, there is no substitute for being face to face with and focused on my patients. I know that I will need my human sense of smell, the visual cue from an averted glance that makes me ask more questions, and the self-doubt ingrained in me over the long course of my career to care effectively for my patients. I use it all to counterbalance and complement findings from AI, in the hope of finding a wise path forward.