Earlier this year, Princeton Computer Science Professor Arvind Narayanan set up a voice interface to ChatGPT for his nearly four-year-old daughter. He did it partly as an experiment and partly because he believed AI agents would one day be a big part of her life.
“What happens when the lights turn out?” his daughter asked.
ChatGPT then gave some advice on using nightlights, closing with a reminder that “it’s normal to feel a bit scared in the dark.” Narayanan’s daughter was visibly reassured by the explanation, he wrote in a Substack post.
That might sound weird, but what’s weirder is that Google’s Bard and Microsoft’s Bing, which is based on ChatGPT’s underlying technology, are being positioned as search tools even though they have an embarrassing history of factual errors: Bard gave incorrect information about the James Webb Space Telescope in its very first demo, while Bing goofed on a series of financial figures in its own.
Margaret Mitchell, a former Google AI researcher who co-wrote a paper on the risks of large language models, has said such systems are simply “not fit for purpose” as search engines.
Yet the same training that makes these tools unreliable on facts also makes them exceptionally good at mimicking empathy. After all, they’re learning from text scraped from the web, including the emotive reactions posted on social media platforms like Twitter and Facebook, and the personal support shown to users of forums like Reddit and Quora. Conversations from movie and TV show scripts, dialogue from novels, and research papers on emotional intelligence all go into the training pot to make these tools appear empathetic.
To see if I could measure ChatGPT’s empathic abilities, I put it through an online emotional intelligence test, giving it 40 multiple-choice questions and telling it to answer each one with a corresponding letter. The result: it aced the quiz, getting perfect scores in the categories of social awareness, relationship management and self-management, and stumbling only slightly in self-awareness.
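For readers curious how a test like this can be run programmatically rather than pasted into a chat window, here is a minimal sketch in Python, assuming the official openai package (v1.x) and an API key in the environment. The quiz item shown is an invented placeholder, and the gpt-3.5-turbo model choice is an assumption; neither reflects the actual test described above.

```python
# Hypothetical sketch: administering a multiple-choice quiz to ChatGPT
# via the OpenAI API. Quiz items below are invented placeholders.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

QUESTIONS = [
    # (question text, answer options) -- placeholder item for illustration
    ("A colleague snaps at you in a meeting. What do you do first?",
     ["A) Snap back",
      "B) Ask privately later if they're okay",
      "C) Report them to a manager",
      "D) Pretend it didn't happen"]),
]

def ask(question: str, options: list[str]) -> str:
    """Send one quiz item and return the model's single-letter answer."""
    prompt = (
        "Answer the following multiple-choice question with only the "
        "letter of your chosen option.\n\n"
        f"{question}\n" + "\n".join(options)
    )
    response = client.chat.completions.create(
        model="gpt-3.5-turbo",
        messages=[{"role": "user", "content": prompt}],
    )
    return response.choices[0].message.content.strip()

for question, options in QUESTIONS:
    print(ask(question, options))
```

Constraining the reply to a single letter, as the prompt does here, makes the answers easy to score automatically against the test’s answer key.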
There’s something unreal about a machine providing us comfort with synthetic empathy, but it does make sense. Our innate need for social connection and our brain’s ability to mirror others’ feelings mean we can get a sense of understanding even if the other party doesn’t genuinely “feel” what we feel. Inside our brains, so-called mirror neurons activate when we perceive empathy from others — including chatbots — helping us feel a sense of connection.
Thomas Ward, a clinical psychologist at King’s College London who has researched software’s role in therapy, cautions against assuming that AI can adequately fill the void for people who need mental health support, particularly if their issues are serious. A chatbot, for instance, probably won’t acknowledge that a person’s feelings are too complex to understand. ChatGPT, in other words, rarely says “I don’t know,” because it was designed to err on the side of confidence rather than caution in its answers.
That might end up creating more problems than it solves. But for the time being, these chatbots are at least more reliable for their emotional skills than for their grasp of facts.