AI and the Machine Age of Marketing

Let the Machine Decide: When Consumers Trust or Distrust Algorithms

Noah Castelo, Maarten W. Bos and Donald Lehmann

Keywords

Algorithms, Algorithm Aversion, Algorithm Adoption, Task Objectiveness, Human-likeness, Trust

The rise of algorithms
Algorithms – sets of steps that a computer follows to perform certain tasks – are increasingly entering consumers’ everyday lives. Thanks to the rapid progress in the field of artificial intelligence, algorithms are able to understand and produce natural language and learn quickly from experience. They can accomplish an increasingly comprehensive list of tasks, from diagnosing some complex diseases to driving cars and providing legal advice. Algorithms can even perform seemingly subjective tasks such as detecting emotions in facial expressions and tones of voice. While many algorithms can outperform even expert humans, many consumers remain skeptical: Should they rely more on humans or on algorithms? According to previous findings, the default option is to rely on humans, even when doing so results in objectively worse outcomes. However, our research provides insight into when and why consumers are likely to use algorithms, and how marketers can increase their use.

Consumers’ algorithm skepticism
One reason why consumers feel ambivalent toward algorithms relates to the kinds of abilities they typically associate with machines. Consumers tend to believe that machines lack fundamentally human capabilities that are emotional or intuitive in nature. While capabilities such as logic and rationality are seen as something humans and machines have in common, machines are not perceived to be human-like when it comes to affective or emotional aspects. Consumers therefore often assume that algorithms will be less effective at tasks which humans approach with intuition or emotion. Because beliefs about a technology’s effectiveness are fundamental determinants of its adoption, consumers tend to prefer humans in such cases. Whether or not consumers trust algorithms depends on the nature of the task to be performed, and also on how the algorithm itself is presented. Framing both task and algorithm in appropriate ways can foster adoption of and trust in algorithms, according to our research.

While many algorithms can outperform even expert humans, many consumers remain skeptical.

Trust in algorithms depends on the characteristics of the task
Familiarity, the scope of consequences, and the perceived objectiveness of a task are important determinants of algorithm adoption by consumers. In general, consumers tend to rely more on algorithms they are already familiar with: they routinely accept algorithm-based movie recommendations on Netflix, for instance, and rely on algorithms for directions on their smartphones. Past experience with algorithms increases trust and use.
Some tasks, like diagnosing or treating a disease, are far more consequential than others. Because performing such tasks poorly can have serious, far-reaching outcomes, consumers seem less willing to trust and rely on algorithms when the stakes are high.
The main focus of this research was a third characteristic: the perceived objectiveness of a task, a quality that can be actively managed. Our series of studies shows that consumers trust algorithms more for objective tasks, which involve quantifiable and measurable facts, than for subjective tasks, which are open to interpretation and based more on personal opinion or intuition. Objective tasks, typically associated with more “cognitive” abilities, are thus entrusted significantly more to algorithms than tasks perceived as subjective and typically associated with more “emotional” abilities. For instance, consumers perceive data analysis or giving directions as very objective – and consider algorithms superior to expert humans at such tasks – while the opposite is true for tasks like hiring employees or recommending romantic partners. Importantly, this research also shows that perceived task objectiveness is malleable. Re-framing a task like recommending romantic partners as one that actually benefits from quantification makes the task seem more objective, which in turn increases consumers’ willingness to use algorithms for it.

Trust also depends on how the algorithm is perceived
As mentioned earlier, consumers believe in the cognitive capabilities of algorithms, though not in the “soft skills” that humans possess, even if this belief is becoming increasingly inaccurate. With the progress in AI, algorithms are increasingly capable of performing tasks typically associated with subjectivity and emotion. Machines can, for instance, already create highly valued paintings, write compelling poetry and music, predict which songs will be hits, and even accurately identify human emotion from facial expressions and tone of voice. Even though algorithms may accomplish these tasks using very different means than humans do, the fact that they have such capabilities makes them seem less distinct from humans. Making algorithms seem more human-like when it comes to these soft skills could therefore be a means to reduce algorithm aversion and encourage use, especially for tasks that are perceived as less objective.

Box 1: An investigation of trust in algorithms
In a series of six experiments with over 56,000 participants, we investigated what makes consumers rely on algorithms. We found that consumers tended to rely on algorithms for objective, less consequential tasks, and for tasks they already had experience with. Further, we found ways to encourage reliance on algorithms.

Subjective tasks are entrusted to humans more than to machines
In one experiment, we found that consumers were equally likely to click on ads for algorithm-based and human-based financial advice. For dating advice, which is perceived as more subjective, click rates for the algorithm-based option were significantly lower than for the human-based option (see Figure 1).
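For readers who want to check whether such a gap in click rates is statistically meaningful, the sketch below runs a standard two-proportion z-test in Python. The click counts are invented placeholders, not the figures from our study.

```python
# Hypothetical illustration: comparing click-through rates for ads offering
# algorithm-based vs. human-based dating advice. The counts are made up
# for demonstration and are NOT the data reported in the original study.
from statsmodels.stats.proportion import proportions_ztest

clicks = [130, 190]            # clicks on the algorithm ad vs. the human ad
impressions = [10000, 10000]   # times each ad was shown

z_stat, p_value = proportions_ztest(clicks, impressions)
print(f"CTR (algorithm): {clicks[0] / impressions[0]:.2%}")
print(f"CTR (human):     {clicks[1] / impressions[1]:.2%}")
print(f"z = {z_stat:.2f}, p = {p_value:.4f}")  # a small p suggests a real gap
```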

Human-likeness of algorithms can reduce skepticism
In another experiment, we tested the extent to which participants trusted algorithms to predict a stock index’s future value, manipulating both perceived task objectiveness and the human-likeness of the algorithm. The task was framed either as objective (stock prices depend on objective numerical indicators) or as subjective (stock prices are driven by feelings and intuition). In the human-like condition, participants read about the ability of algorithms to perform fundamentally human, “intuitive” tasks like creating art and music and understanding emotions. Task objectiveness affected reliance on algorithms when human-likeness was low, but this effect was eliminated when human-likeness was high (see Figure 2).
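As a rough illustration of how such a two-by-two interaction can be tested, the sketch below fits a logistic regression with an objectiveness × human-likeness interaction term on simulated data. The effect sizes and variable names are invented; this is not the original analysis.

```python
# Hypothetical sketch: testing a task-objectiveness x human-likeness
# interaction on whether participants rely on the algorithm.
# Data are simulated; effect sizes are invented for illustration only.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 2000
objective = rng.integers(0, 2, n)   # 1 = task framed as objective
humanlike = rng.integers(0, 2, n)   # 1 = algorithm framed as human-like

# Assumed pattern: objectiveness boosts reliance only when human-likeness is low.
logit_p = -0.5 + 1.0 * objective * (1 - humanlike) + 0.8 * humanlike
rely = rng.binomial(1, 1 / (1 + np.exp(-logit_p)))

df = pd.DataFrame({"rely": rely, "objective": objective, "humanlike": humanlike})
model = smf.logit("rely ~ objective * humanlike", data=df).fit(disp=False)
print(model.summary())  # a negative interaction term mirrors the Figure 2 pattern
```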

How to encourage trust in and use of algorithms
Given that algorithms offer enormous potential for improving outcomes for both consumers and companies, encouraging their adoption can be in both parties’ best interest. Our results demonstrate that the following interventions can nudge consumers and managers toward greater reliance on algorithms and, ultimately, better decisions.

  • Provide evidence that algorithms work
    One of the most intuitive approaches to increasing consumers’ willingness to use algorithms is to provide empirical evidence of the algorithm’s superior performance relative to humans. However, when the task is perceived as subjective, this may not be convincing enough: our experiments indicate that consumers are reluctant to believe that algorithms can outperform human judgment at such tasks, even when presented with evidence to that effect. In these cases, additional interventions are necessary.
  • Make the task seem more objective
    Given that consumers trust the cognitive capabilities of algorithms, another way to increase trust is to demonstrate that these capabilities are relevant to the task in question. This is particularly useful for subjective tasks. In our studies, algorithmic recommendations for movies and for romantic partners were perceived as much more reliable when the task framing emphasized how helpful quantitative analysis can be, relative to intuition, for those tasks. The results demonstrate that the perceived objectiveness of a given task is indeed malleable: increasing it raises both the perceived effectiveness of algorithms and trust in them for that task. This simple framing intervention can therefore increase trust in and use of algorithms for tasks typically seen as subjective.
  • Present algorithms as more human-like
    The third intervention we found to be useful was making algorithms seem more human-like, specifically along the affective or emotional dimensions of humanness. Figure 2 shows that increasing awareness of algorithms’ affective human-likeness by explaining that algorithms can detect and understand human emotions encourages adoption of and trust in algorithms for subjective tasks. Although actual reliance on algorithms is typically lower when the task is seen as subjective, this effect can be eliminated by providing real examples of algorithms with human-like abilities.

While the general trend is clearly toward an increased use of algorithms in many domains of our private and corporate lives, the pace at which they are adopted – as well as the areas where they will be adopted first – depends on several factors. Managers face a balancing challenge: while increasing the capabilities of algorithmic products and services in subjective domains, they must simultaneously address consumers’ and decision-makers’ beliefs that algorithms might be less effective than humans at those tasks. Our results suggest several ways to reduce skepticism, increase trust, and smooth the transition of algorithms into our future lives.

Authors

Noah Castelo
Assistant Professor of Marketing, University of Alberta, Edmonton, AB, Canada, ncastelo@ualberta.ca

Maarten W. Bos
Senior Research Scientist, Snap Inc., Santa Monica, CA, USA, maarten.w.bos@gmail.com

Donald Lehmann
George E. Warren Professor of Business, Columbia University, New York, NY, USA, drl2@columbia.edu

Further Reading

Castelo, Noah; Bos, Maarten W. & Lehmann, Donald (2019): “Task-Dependent Algorithm Aversion”, Journal of Marketing Research, Vol. 56 (5), 809-825.

Logg, Jennifer; Minson, Julia & Moore, Don (2019): “Algorithm Appreciation: People Prefer Algorithmic to Human Judgment”, Organizational Behavior and Human Decision Processes, Vol. 151, 90-103.

Longoni, Chiara; Bonezzi, Andrea & Morewedge, Carey (2019): “Resistance to Medical Artificial Intelligence”, Journal of Consumer Research, forthcoming.