Artificial voices are becoming part of everyday life: when driving a car with navigation, when searching the Internet by voice, or when talking to Alexa, to name just a few examples. Thanks to the latest developments in the field of artificial intelligence, artificial voices sound more and more natural. The voice is a powerful information carrier: beyond the content of what is said, its sound reveals a great deal about personal and social characteristics such as gender, personality, identity, and emotion. Interpreting these characteristics is essential in interpersonal interaction and strongly influences behavior and decisions. But what about linguistic interaction between humans and machines? The aim of our research project is to analyze the influence of the tone of artificial voices on consumer decisions. Using the latest deep learning methods from the field of artificial intelligence, we first generate artificial voices with a specific tone of voice and then test their influence on consumer decisions in a series of experiments.
Project step 1: Emotions in Human-Machine Communication
An important social characteristic of the human voice is emotion. Emotions are often contagious in human-human interaction: by imitating the facial expressions, gestures, and voice of their dialog partner, people empathize with that partner's mood. But does this phenomenon of emotional contagion also occur in human-machine communication? And what effect, if any, does it have on human behavior? From a marketing point of view, the question is, for example: can an enthusiastic-sounding voice assistant make a consumer excited about a product and encourage an impulse purchase? To investigate this, we have developed a deep learning model that can generate artificial speech with different emotions. This enables us to test the influence of synthetic voices on buying behavior in laboratory experiments.
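Emotional speech synthesis of the kind described above is typically built by conditioning a neural text-to-speech model on a learned emotion embedding. The toy sketch below illustrates only that conditioning step; all names, dimensions, and the random "text encoder" output are illustrative assumptions, not the project's actual model.

```python
import numpy as np

# Emotion-conditioned synthesis, conceptually: a learned embedding vector
# for each emotion is concatenated onto every frame of the text encoding
# before the acoustic decoder runs. Dimensions here are illustrative.
EMOTIONS = ["neutral", "happy", "sad", "angry"]
EMB_DIM = 8    # emotion embedding size (assumed)
TXT_DIM = 16   # text-encoder output size (assumed)

rng = np.random.default_rng(0)
# Stand-in for a trained embedding table.
emotion_table = {e: rng.normal(size=EMB_DIM) for e in EMOTIONS}

def condition(text_encoding: np.ndarray, emotion: str) -> np.ndarray:
    """Tile the emotion embedding and attach it to each encoder frame."""
    emb = emotion_table[emotion]
    tiled = np.tile(emb, (text_encoding.shape[0], 1))
    return np.concatenate([text_encoding, tiled], axis=1)

frames = rng.normal(size=(5, TXT_DIM))   # 5 encoded "text" frames (random stand-in)
conditioned = condition(frames, "happy")
print(conditioned.shape)                 # (5, 24): 16 text dims + 8 emotion dims
```

In a trained system the decoder learns to map the extra embedding dimensions onto prosodic cues such as pitch range and speaking rate, which is what makes the same text sound happy or sad.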
Project step 2: Known Voices in Human-Machine Communication
New deep learning methods make it possible to generate artificial speech that sounds like the voice of a specific person. Amazon, for example, offers its Alexa customers in the USA Samuel L. Jackson's voice as an output voice, and more celebrity voices are planned. For marketing, this technology opens up both new opportunities (e.g., presenting personalized content in the voice of a favorite star) and new risks (e.g., publishing fake news in the voice of a key decision-maker).
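Voice cloning of this kind usually works by first mapping a short reference recording to a fixed-size speaker embedding, then conditioning the synthesizer on that vector. The sketch below uses a deliberately trivial stand-in encoder (frame averaging) and random data to show only the embedding-and-compare idea; real systems use a trained speaker encoder (e.g., a d-vector or x-vector network).

```python
import numpy as np

# Toy speaker-embedding sketch: the "encoder" averages audio feature frames
# and L2-normalizes the result. All data here is random; real systems feed
# spectrogram frames through a trained neural network instead.
rng = np.random.default_rng(1)

def speaker_embedding(audio_frames: np.ndarray) -> np.ndarray:
    """Stand-in encoder: mean over frames, normalized to unit length."""
    v = audio_frames.mean(axis=0)
    return v / np.linalg.norm(v)

def cosine(a: np.ndarray, b: np.ndarray) -> float:
    """Cosine similarity of two unit vectors."""
    return float(a @ b)

ref = rng.normal(size=(100, 32))                    # reference clip of the target voice
same = ref + rng.normal(scale=0.1, size=ref.shape)  # second clip, same voice (small perturbation)
other = rng.normal(size=(100, 32))                  # a different speaker

e_ref, e_same, e_other = map(speaker_embedding, (ref, same, other))
# Clips of the same voice should land close together in embedding space.
print(cosine(e_ref, e_same), cosine(e_ref, e_other))
```

The same embedding-distance idea underlies detection: a synthetic clip that clones a speaker will sit close to that speaker's genuine embeddings, which is one reason disclosure and listener education matter.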
In our research, we therefore analyze how artificial voices that mimic well-known people affect consumers. Moreover, we examine to what extent educating people about the capabilities of modern speech synthesis can influence this effect and perhaps counteract fake voice news. Existing research in human-computer interaction has shown that people mentally assign a personality to artificial voices and find several artificial voices more credible than a single one, even when they are told that the voices are artificially generated. Our research question is therefore whether the same holds for artificially generated voices of well-known people.