
AI and the Machine Age of Marketing

When Humanizing Customer Service Chatbots Might Backfire

Rhonda Hadi

Keywords

Chatbots, Customer Service, Angry Customers, Avatars, AI


The rise of the customer service chatbot
Technological advancements in AI are continuously transforming the way companies operate, including how they interact with their customers. One clear illustration of this is the proliferation of customer service chatbots in the marketplace. In typical applications, these automated conversational agents exist on a company’s website or social media page, and either provide customers with information or help handle customer complaints. Remarkably, chatbots are expected to power 85% of all customer service interactions by 2020, with some analysts predicting the global chatbot market will exceed $1.34 billion by 2024. The cost-saving benefits are intuitive, but do chatbots improve customer service outcomes? While some industry voices believe that chatbots will improve customer service due to their speed and data-synthesizing abilities, other experts caution that chatbots will worsen customer service and lead customers to revolt. Through a series of studies, we shed light on how customers react to chatbots. Our research finds support for both optimists and skeptics, contingent on the situation and the specific characteristics of the chatbot.

From humanized brands to humanized bots
Technology designers often make deliberate attempts to humanize AI, for example, by imbuing voice-activated devices with conversational human voices. Industry practice shows these efforts also apply to customer service chatbots, many of which are given human-like avatars and names (see Box 1). Humanization, of course, is not a new marketing strategy. Product designers and brand managers have long encouraged consumers to view their products and brands as human-like, either through a product’s visual features or through brand mascots, like Mr. Clean or the Michelin Man. This strategy has generally been linked to improved commercial success: brands with human characteristics support more personal consumer-brand relationships and have been shown to boost overall product evaluations in several categories, including automobiles, cell phones, and beverages. Further, in the realm of technology, human-like interfaces have been shown to increase consumer trust. However, there is also evidence that in particular settings, these humanization attempts can elicit negative emotional reactions from customers, especially if the product does not deliver as expected.

Box 1: A variety of chatbot personalities can be found across the global marketplace
Lufthansa used a customer service bot named “Mildred” to provide flight information, while “Julie” helps Amtrak customers book their trains or get timetables. The financial services provider ING’s bot is named “Inga,” and the Australian government currently uses five separate bots – “Sandi,” “Mandi,” “Sam,” “Oliver” and “Alex” – to handle millions of citizen inquiries. In the Alibaba universe in China, customers get assistance from a cute little mascot called “AliMe,” who can also provide humorous banter. Finally, “Tinka” is a service bot used by the telecommunications provider T-Mobile in Germany and Austria (see Figure 1). Tinka’s profile describes her as an alien with an amazingly high IQ score, and she can ask riddles if prompted by customers.

A brain behind the bot?
When a chatbot is presented in a human-like way, customers tend to assume it possesses a certain level of agency. That is, they expect the chatbot to be capable of planning, acting, and reacting in a manner similar to a human being. These heightened expectations not only increase customers’ hopes that the chatbot can do something for them, but also strengthen customers’ beliefs that the chatbot should be held accountable for its actions and deserves punishment in case of wrongdoing. Of course, chatbots – no matter how human they may seem – do not always meet the high levels of performance a customer might expect. While expectancy violations are never a good thing, they are particularly harmful for angry customers, who tend to respond punitively to obstacles hindering their ability to achieve a desired outcome. To explore how angry customers react to chatbots, we conducted a series of studies testing whether, and in which situations, humanizing chatbots is a good strategy.

Blaming the bot: When humanized chatbots exacerbate anger
My collaborators in this research were three colleagues from the University of Oxford: Felipe Thomaz, Cammy Crolic, and Andrew Stephen. In our first study, we analyzed over 1.5 million text entries from customers interacting with a customer service chatbot of a global telecom company. Using natural language processing, we found that humanization of the chatbot improved customer satisfaction – except when customers were angry. For customers who entered the chat in an angry emotional state, humanization had a drastic negative effect on ultimate satisfaction. In a series of follow-up experiments, we used simulated chatbot interactions and manipulated both chatbot characteristics and customer anger. These experiments confirmed our initial analysis: angry customers reported lower overall satisfaction when the chatbot was humanized than when it was not. Furthermore, the negative influence of humanized chatbots under angry conditions extends to customers’ repurchase intentions and evaluations of the company itself (see Figure 2).

Optimal chatbot characteristics depend on the situation
At first glance, it may seem like it is always a good idea to humanize customer service chatbots. However, our research suggests that the consequences of humanizing chatbots are more nuanced: the outcomes depend on both customer characteristics and the specific service context. We believe chatbot humanization may act as a double-edged sword, enhancing satisfaction for non-angry customers but exacerbating the negative responses of angry customers. Companies should therefore carefully consider whether and when to humanize their customer service chatbots. Based on our findings, we offer the following guidance for designing a company’s automated customer service in a way that is efficient yet customer-friendly:

  • Gauge whether a consumer is angry before they enter the chat
    In our studies, anger played a pivotal role in determining customer reactions to humanized chatbots. It is therefore advisable to determine whether a customer is angry as a first step, for example via keyword matching or real-time natural language processing. Angry customers could then be routed to a non-humanized chatbot, while others are introduced to a humanized version (see the sketch after this list). Another option is to promptly divert angry customers to a human agent, who can be more empathetic and has more agency and flexibility to actually solve the problem to the customer’s satisfaction.
     
  • Stick to non-humanized bots for customer complaint centers
    When a customer contacts a company specifically to complain, one can assume at least a moderate level of anger. Therefore, a non-humanized chatbot should be implemented in these settings to avoid potential negative effects on company reputation or purchase intentions when the bot is unable to offer adequate solutions. The use of humanized bots could be restricted to more neutral or promotion-oriented services like searches for product information or other assistance.
     
  • Manage expectations
    Finally, companies can try to limit excessively high expectations of chatbot performance. This can be done by explicitly disclosing that the customer is interacting with a bot and not a human being, as Slack does by labeling its bot accounts.
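
To make the first and third recommendations concrete, below is a minimal Python sketch of anger-gated routing combined with explicit bot disclosure. Everything in it is illustrative: the keyword list, the scoring threshold, the BotProfile structure, and the bot names are assumptions made for this example, not part of our studies or any vendor’s API. A production system would replace the keyword scorer with a trained emotion classifier.

```python
# Illustrative sketch: route incoming customers based on detected anger.
# The anger heuristic, threshold, and bot profiles are hypothetical
# assumptions, not part of the published research.

from dataclasses import dataclass
from typing import Optional

ANGER_KEYWORDS = {"furious", "unacceptable", "terrible", "worst", "refund now"}
ANGER_THRESHOLD = 0.5  # assumed cutoff for treating an opening message as angry


@dataclass
class BotProfile:
    name: Optional[str]    # humanized bots get a name and an avatar
    avatar: Optional[str]
    disclosure: str        # expectation-managing opening message


HUMANIZED_BOT = BotProfile(
    name="Julie",
    avatar="julie.png",
    disclosure="Hi, I'm Julie, a virtual assistant. How can I help you today?",
)

NEUTRAL_BOT = BotProfile(
    name=None,
    avatar=None,
    disclosure="You are chatting with an automated assistant, not a person.",
)


def estimate_anger(message: str) -> float:
    """Crude keyword-based anger score in [0, 1]. A real deployment would
    use a trained sentiment or emotion classifier instead."""
    text = message.lower()
    hits = sum(keyword in text for keyword in ANGER_KEYWORDS)
    return min(1.0, hits / 2)


def route_customer(opening_message: str) -> BotProfile:
    """Send angry customers to the non-humanized bot (or, alternatively,
    straight to a human agent queue); everyone else meets the humanized bot."""
    if estimate_anger(opening_message) >= ANGER_THRESHOLD:
        return NEUTRAL_BOT
    return HUMANIZED_BOT


if __name__ == "__main__":
    print(route_customer("This is unacceptable, I want a refund now!").disclosure)
    print(route_customer("Hi, what are your opening hours?").disclosure)
```

The key design choice is that the anger check happens before the first bot response, so an angry customer never meets the humanized persona in the first place.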

     

Get your bot right in a growing bot-scape
More and more brands will rely on chatbots to extend their customer service offerings, and these chatbots will grow increasingly sophisticated over time. Accordingly, customers will become more accustomed to this type of service encounter, but also more demanding. Given the vast industry investment in AI and machine learning, there may come a point where chatbots are so functionally advanced that expectancy violations are no longer a realistic concern. In this future, chatbots will likely have greater freedom of action and be able to perform even intuitive and empathetic tasks better than humans. When this becomes a reality, the difference between the reactions of angry and non-angry customers may diminish, and chatbot humanization may always be a good thing. Companies are therefore well-advised to stay at the forefront of technical progress, to learn quickly, and to integrate the most advanced AI into their chatbots. In the near future, however, it remains important to consider the variety of customer contexts and conditions in which the interaction is likely to occur. Customer service chatbots represent yet another contact point demanding thoughtful consideration in customers’ increasingly automated lives.

Authors

Rhonda Hadi
Associate Professor of Marketing, Saïd Business School, University of Oxford, United Kingdom, Rhonda.Hadi@sbs.ox.ac.uk

Further Reading

Hadi, R.; Thomaz, F.; Crolic, C. & Stephen, A. (2019): “Blame the Bot: Anthropomorphism and Anger in Customer-Chatbot Interactions”, working paper.
Valenzuela, A. & Hadi, R. (2017): “Implications of Product Anthropomorphism Through Design”, in Michael R. Solomon & Tina M. Lowrey (Eds.), The Routledge Companion to Consumer Behavior, Routledge, London.