Digital (voice) assistants like Cortana, Alexa, or Siri are becoming an increasingly integral part of our daily lives. Even in decision situations, e.g., consumer purchases or investment decisions, artificial agents make suggestions and thus try to support humans in making better decisions. Additionally, developers are working to make these assistants increasingly human-like by giving them names and voices, or by letting them mimic human behavior such as the use of natural language or empathy.
In interactions with other humans, e.g., negotiations or economic transactions, social norms play an important role. But is this still the case when interacting with machines? In lab experiments, we investigate how social norms change when people interact with a computer agent instead of other humans, and what impact human-like characteristics may have.
In the so-called Ultimatum Game, one individual proposes how to split an amount of money between themselves and another person. If the other person accepts this allocation, both are rewarded according to the offer; if the offer is rejected, both receive nothing. In a series of experiments, we investigate whether acceptance behavior with respect to unfair offers changes if the offer is made by a computer agent instead of another human.
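The payoff rule of the Ultimatum Game can be summarized in a minimal sketch. The function name, the total amount of 10, and the example offer below are purely illustrative assumptions, not parameters from the experiments described here.

```python
# Illustrative sketch of the Ultimatum Game payoff rule.
# PIE = 10 is an assumed total; the study's actual stakes are not specified here.

PIE = 10  # total amount the proposer may split

def payoffs(offer_to_responder: int, accepted: bool) -> tuple[int, int]:
    """Return (proposer_payoff, responder_payoff) for one round."""
    if not 0 <= offer_to_responder <= PIE:
        raise ValueError("offer must lie between 0 and the total amount")
    if accepted:
        # Both players are rewarded according to the proposed split.
        return PIE - offer_to_responder, offer_to_responder
    # A rejection leaves both players with nothing.
    return 0, 0

# A (stereotypically unfair) offer of 2 out of 10:
print(payoffs(2, True))   # accepted: proposer keeps 8, responder gets 2
print(payoffs(2, False))  # rejected: both receive nothing
```

The rejection branch is what makes the game interesting behaviorally: rejecting an unfair offer is costly for the responder, so rejection rates are commonly read as a measure of norm enforcement.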
In a second step, we analyze acceptance behavior when the offer, whether from a human or a computer agent, is delivered via voice.
It is well known that cultures differ with respect to social norms (e.g., individualism vs. collectivism) and motivational focus (i.e., prevention vs. promotion focus). In this sub-project, we investigate whether the perception of AI-based high-tech products differs when they represent different social norms and motivational foci.