
The way humans interact with computer systems, and the way these systems converse with humans, has changed dramatically over the last couple of years. So-called digital Conversational Assistants (CAs) are increasingly present in daily life; sales of smart speakers, for example, were five times as high in 2020 as in 2018. CAs are systems that communicate with humans via text (e.g., chatbots), speech (voice assistants), or other modalities.
We use them quite naturally for online search, navigation, or entertainment, but we also interact with artificial CAs in areas we might not suspect at first glance. In call centers, for example, we interact with voice-based CAs using natural language. Similarly, on web pages, chatbots welcome us, provide first-level support, or guide us through the site.
The literature shows that human decision-makers may attribute human qualities (e.g., consciousness or emotions) to objects when these objects have human-like characteristics such as a voice or a name. This phenomenon is called anthropomorphism. Past research has shown that anthropomorphic design of CAs affects human perception and behavior in human-computer interaction, e.g., by increasing trust and emotional connectedness and by influencing willingness to pay.
This project aims to answer the question of to what extent CAs are perceived as anthropomorphic depending on their human-like characteristics, especially when the CA uses speech. Furthermore, we analyze the role of anthropomorphism in the acceptance of CAs in different decision contexts, particularly in intrinsically motivated, emotion-based decision situations versus rational decision situations.
In a series of experimental studies, we analyze the effect of anthropomorphic CAs on financial decision making in different contexts.
Respondents make investment decisions either on a standard micro-credit platform (rational decision) or on a platform for pro-social micro-credits (intrinsically motivated, emotional decision).
In both cases, a conversational agent with varying levels of anthropomorphic characteristics serves as a filter and decision support system that guides users through the preference selection process.
With this project, we aim to provide an understanding of how humans make financial decisions when supported by chatbots. We offer insights into how consumers change their behavior when the chatbot exhibits human characteristics, and we derive guidelines on how to design conversational agents for different types of decision tasks.
Prof. Dr. Jella Pfeiffer, Gießen Universität