When you’re afraid, where do you turn? 

Probably to someone, or something, you trust. 

In health care, trust is paramount. That’s because we seek care in some of our most vulnerable moments—for ourselves, or for someone under our care. 

So, as we see more and more use cases for generative AI (GenAI) in health care, I have a nagging question: Is it trustworthy? Similarly, I wonder: Can it ever be as trustworthy as a provider with whom I have a relationship? 

Of course, there are no one-size-fits-all answers to these questions; it depends on the use case. In some situations, GenAI has earned trust. Google’s AI dermatology tool, for example, has shown it can be trustworthy (or at least accurate) by reliably identifying skin conditions. 

But in others, it’s not. Research published in JAMA Pediatrics in January 2024 found that ChatGPT (version 3.5) incorrectly diagnosed 83% of pediatric cases: of the 100 cases studied, 72 diagnoses were outright incorrect, and another 11 were too broad to be considered accurate. 

The value of applying a Jobs to Be Done lens

As I contemplate these questions around trustworthiness, the Theory of Jobs to Be Done has a lot to offer. 

Jobs Theory is a useful framework that helps us understand customer behavior. It explains that people don’t simply buy products or services; they “hire” them to make progress in specific circumstances (what we call their Job to Be Done, or “job”). Understanding the job for which customers hire a product or service helps innovators more accurately develop products that align with what customers are already trying to accomplish.

Applied to this situation, understanding the progress someone desires, and perhaps more importantly the context of their current life circumstances, is critical to knowing whether GenAI should be hired in any specific situation. And for innovators, this understanding can help them design products or services more likely to match the progress people seek.

For example, a significant struggle for health care providers is the amount of time they spend charting. Much of this occurs after patients are gone for the day, and it’s a big source of provider burnout. Ask any provider you know, and I guarantee they would like to spend less time entering data and notes into the EHR. Multiple GenAI innovations have come to market to address this job, and a related job held by health systems where providers practice. These innovations, such as Augmedix, employ a combination of speech recognition and natural language processing to listen to the doctor/patient interaction and create notes based on what they hear. In this situation, GenAI is well-matched to help providers achieve their job of spending less time charting after hours and reducing their burnout.

While saving physicians time and reducing their burnout seems like a net-positive way to leverage GenAI in health care, not all examples are so clear. Senior loneliness is another job for which people might hire GenAI solutions. We have a loneliness epidemic in the US, and many companies have launched AI companions in response; ElliQ is one example targeting seniors. Do solutions like these effectively address the job of “help me to have companionship so I can improve my mental and physical health”? Perhaps. Time will tell, and there are likely unknown consequences of outsourcing something as critical as human connection to AI. 

Where to go from here: Opening space in our minds for answers to fit 

Understanding Jobs Theory helps us identify whether GenAI tools or offerings might be a good fit to help people achieve their desired progress. But a Jobs lens alone won’t address all the open issues with GenAI in health care today. 

For that, we need to ask a series of pointed questions. Clayton Christensen once said, “Questions are places in your mind where answers fit. If you haven’t asked the question, the answer has nowhere to go.” There are many questions to ask as we move forward as a health care field and grapple with the role of GenAI in our future. Some questions I’m asking include: 

  1. How can we leverage GenAI to tackle administrative tasks, empowering practitioners to build more trust with consumers by spending more face-to-face time with them? (AI scribes help with this now, but what other options exist?) 
  2. How can GenAI serve low-acuity needs when fear or vulnerability aren’t at play, allowing more individuals with higher-acuity needs more direct access to a trusted, human provider? 
  3. What industry guardrails do we need in place to help ensure GenAI enhances instead of hinders human-centered and primarily human-delivered health care?  

What would the future be like if we were forced to “hire” a solution we didn’t trust? Last month, I spoke about this topic with Todd Dunn, a wise health care innovator, leader, entrepreneur, and author. I hope you’ll tune in to hear us grapple with some of these questions. And if you have others to add to the list, please reach out to me via email or LinkedIn. I’d love to hear them! 

Author

  • Ann Somers Hogg

    Ann Somers Hogg is the director of health care at the Christensen Institute. She focuses on business model innovation and disruption in health care, including how to transform a sick care system into one that values and incentivizes total health.