Do Anthropomorphic Chatbots Influence our Financial Decision-Making?

The way humans interact with computer systems, and how these systems converse with humans, has changed dramatically in the last couple of years. So-called digital Conversational Assistants (CAs) are increasingly present in daily life; sales of smart speakers in 2020, for example, were five times as high as in 2018. CAs are systems that use text (e.g., chatbots), speech (voice assistants), or other modalities to communicate with humans.

We use them quite naturally for online search, navigation, or entertainment. But we also interact with artificial CAs in areas we might not suspect at first glance. In call centers, for example, we interact with voice-based CAs using natural language. Similarly, on web pages, chatbots welcome us, provide first-level support, or guide us through the site.

The literature shows that human decision-makers may attribute human qualities (e.g., consciousness or emotions) to objects when these objects have human-like characteristics such as a voice or a name. This phenomenon is called anthropomorphism. Past research has shown that anthropomorphic design of CAs affects human perception and behavior in human-computer interaction, e.g., by increasing trust and emotional connectedness and by influencing the willingness to pay.

This project aims to answer the question: To what extent are CAs perceived as anthropomorphic, depending on their human-like characteristics, especially when the CA uses speech? Furthermore, we analyze the role of anthropomorphism in the acceptance of CAs in different decision contexts, especially in intrinsically motivated, emotion-based decision situations versus rational decision situations.

In a series of experimental studies, we analyze the effect of anthropomorphic CAs on financial decision-making in different contexts.

Respondents make investment decisions either on a standard micro-credit platform (rational decision) or on a platform for pro-social micro-credits (intrinsically motivated, emotional decision).

In both cases, a conversational agent with varying levels of anthropomorphic characteristics serves as a filter and decision support system that guides respondents through the preference selection process.

With this project, we aim to understand how humans make financial decisions when supported by chatbots. We provide insights into how consumers change their behavior when the chatbot displays human characteristics, and we may derive guidelines on how to design conversational agents for different types of decision tasks.

Key Results

  • Compared to conventional websites, so-called conversational agents lead to higher satisfaction with the service.
  • Conversational agents convey a certain degree of social presence, which can have both a positive and a negative effect.
  • The social presence of conversational agents has a negative effect if users feel observed by the conversational agent.
  • It has a positive effect because it increases trust in the conversational agent.
  • A voice, as an additional human-like feature, has no effect.

Cooperation partner

  • Prof. Dr. Jella Pfeiffer, Professor of Business Administration and Information Systems, University of Stuttgart
