Many have predicted that as AI improves, it will commoditize technical skills and knowledge but accentuate the things that make us human—things like our empathy and connection with other human beings. Or our ability to communicate.

But there’s something too blunt, too lazy, and too generalized about those observations in the face of mounting evidence that large language models (LLMs) often outperform humans at “performing” empathy and the other things we’ve thought of as “human skills.”

Take a Harvard study showing that “AI Companions Reduce Loneliness.” Or one from researchers at the University of California San Diego who found that healthcare professionals rated responses from chatbots as “significantly higher for both quality and empathy.” Or my Christensen Institute colleague Julia Freeland Fisher’s work chronicling 30 edtech companies and advising organizations, which found that “imbuing bots with warmth is key to driving engagement.”

To think about what we can do that perhaps LLMs can’t, we need to get more precise. Ben Riley’s work pondering the nature of intelligence, for example, has moved us toward that goal. But at the level of tasks or skills, we need to get clearer both about what a given skill actually is and about the level at which someone is able to perform it.

Here’s one way I’ve thought about it.

At a surface level, AI chatbots are good “listeners.” In an age when far too many of us don’t take the time to listen to others—especially those with different viewpoints—you can argue that they listen far better than most of us.

They take what you ask them, “listen” fully, and, for the most part, they don’t judge. Then, they engage with you. They offer a response—or, if trained like Khanmigo, for example, perhaps they ask questions back.

So, at one set of levels, they perhaps far outperform us.

But are AI chatbots as good at listening as the master listeners among us—people like my colleague, Bob Moesta, of Jobs to Be Done fame, or Chris Voss of negotiation fame, or the deep listeners that journalist Amanda Ripley profiles in her book “High Conflict”? In other words, are they as good as what we might call “Level 10 Listeners”?

I suspect not.

Perhaps we need to get more precise about the skills these master listeners deploy. They’re not just listening. They’re good at actually understanding, and at making sure the individual with whom they’re speaking feels seen and understood, too. They are masters of empathy, understood as “the ability to understand and share the feelings of another.”

They do it through a variety of techniques. They don’t just listen passively to a person’s question or ramblings—and then respond.

Instead, they listen for a person’s affect and follow their social and emotional energy. They seek to understand more deeply by asking probing questions. They don’t settle for surface-level readings of common or vague words that a predictive AI engine might simply swap for any number of “synonyms.” They use contrast and other techniques to get at the true meaning. When someone says an experience was “OK” or over “quickly,” they dig into what “quickly” really means in that context. Less than a minute? Less than 10 minutes? Less than an hour? They loop and mirror. They make mistakes on purpose to see what others correct, which reveals what matters to them. They don’t assume.

At this level of listening, the language that large language models are built upon is far too imprecise. Just as many different diseases in the human body express themselves through the same limited, shared “vocabulary” of symptoms, many different meanings get expressed through the same words.

Can we build agents that mimic that level of deep listening by copying their techniques? Perhaps to a degree, but it’s an open question for engines built on prediction and averages.
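Purely as an illustration, here is a minimal Python sketch, not drawn from any real product, of what copying those techniques at the most mechanical level might look like: flag a vague word, mirror it back, and offer contrast-style probes. The word list, follow-up questions, and the probe function are all hypothetical, and real deep listening obviously involves far more than string matching.

```python
# Hypothetical sketch (not from the article): a tiny "probing listener" that
# flags vague words and asks contrast-style follow-up questions, loosely
# mimicking the techniques described above. Naive substring matching only.

VAGUE_TERMS = {
    "quickly": ["Less than a minute?", "Less than 10 minutes?", "Less than an hour?"],
    "ok": ["Better than you hoped, or just tolerable?"],
    "soon": ["Today?", "This week?", "This quarter?"],
}

def probe(statement: str) -> list[str]:
    """Return mirror-and-probe follow-ups for vague words found in a statement."""
    text = statement.lower()
    questions = []
    for word, follow_ups in VAGUE_TERMS.items():
        if word in text:
            # Loop/mirror: restate the speaker's own word before probing it.
            questions.append(f'You said "{word}" -- what does that mean here?')
            questions.extend(follow_ups)
    return questions

if __name__ == "__main__":
    for q in probe("The rollout went OK, and it was over quickly."):
        print(q)
```

Even this toy version makes the gap plain: it can ask how quickly “quickly” really was, but it cannot follow a speaker’s affect or notice what they chose to correct.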

Yet most of us humans are not the divergent kinds of listeners that Bob Moesta or Chris Voss are.

Understanding and investing in our assets

To become so, and perhaps to “AI-proof” ourselves, we would need to more fully develop that capability, or what in our book “Job Moves” we think of as an asset on one’s personal balance sheet. An asset is a resource acquired at a cost that creates economic value in the future. And assets depreciate over time, whether because they degrade or because the world around us changes.

To develop those assets and keep them fresh and useful at a level beyond what AI could do, we need to incur liabilities—obligations in time and resources to “pay for” our assets.

A key question for a lot of us at the moment is whether and where we as individuals are willing to make those investments of time and money, through practice, to develop “human skills” beyond what LLMs may be able to do. What would the useful life of those assets be? I suspect the useful life of Moesta’s and Voss’s skillsets, for example, is much longer than that of your “Level 1” listener, given the advance of AI.

That raises another consideration. Perhaps the better question is how we can pair with AI so that the assets we develop in ourselves become the best of machine plus human, and how we can treat AI as part of the assets that accentuate our human qualities at a level that is special and more enduring. My sense is that those skills (empathy, communication, and more) are being put into practice at levels far too low right now.

In other words, how do we incorporate AI as an enabler to become the best version of ourselves by offloading things so we can do more meaningful work? So that we can be more productive.

And yes, so that we as individuals can resist our total package of skills becoming commoditized anytime soon.

Author

Michael B. Horn is Co-Founder, Distinguished Fellow, and Chairman at the Christensen Institute.