Human Decision-Making in an Augmented World
In contrast to the sometimes negatively perceived notion of autonomous artificial intelligence (AI), the concept of augmented or enhanced intelligence emphasizes cooperation between humans and machines. While smart algorithms filter data, identify patterns, and make recommendations, humans plan, think, and make the final decisions. Augmented intelligence is often considered the future of decision-making for knowledge workers such as doctors, managers, or pilots. In our everyday lives, however, augmented decisions are already omnipresent. Who determines what you see in your social media newsfeed, which movies and series you watch, or which products you buy? And think about the first thing you do when you plan to travel to a new destination: most likely, you use the map app on your smartphone rather than a classic road map or other directions. Following the route the app suggests is usually the most convenient option.
Augmentation provides clear benefits in decision-making processes: AI helps reduce information overload, filter relevant information, and limit an otherwise overwhelming abundance of choices. The algorithms behind these services create a convenient world, freeing humans for more enjoyable tasks than gathering information, framing options, and weighing alternatives. The recommendations and nudges of smart algorithms help humans save time and still make choices that match their preferences. But this is only one side of the coin.
The dark side of digital convenience
There is a darker and often invisible side of the coin as well.
- Loss of freedom of choice
Augmented intelligence frees us from many chores, but it also limits free choice. We rely on our technologies, often unaware that we no longer get the full picture but a reality that may be curated for a specific purpose. In such cases, freedom of choice becomes an illusion. Humans have become accustomed to “doing everything” on their smartphones, and this tendency is reinforced by the apps and services of organizations such as Facebook, Google, and Netflix. Tech companies use technology as a vehicle to construct individual subjective reality, the internal space that frames our decision-making. Most of the information that humans base their decisions on is filtered and pre-sorted by algorithms, which use huge amounts of user data to produce highly individualized recommendations that nudge us towards certain options (see Box 1).
While such algorithms make our lives more convenient, they also fulfill various organizational objectives that users may not be aware of and that may not be in their best interest. We do not know whether algorithms augmenting human decisions truly optimize the benefit of their users or rather the return on investment of a company. In other words, producing a positive user experience is often a means to an end, not an end in itself.
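The tension described above can be made concrete with a deliberately simplified sketch: a ranking function that blends predicted user interest with a separate platform objective. All names, scores, and weights here are invented for illustration; no real platform's code or weighting is implied.

```python
# Hypothetical sketch of a blended-objective recommender. The scores and
# the weighting scheme are illustrative assumptions, not any company's
# actual system.

def rank_items(items, user_score, platform_score, platform_weight=0.4):
    """Return items sorted by a blend of user and platform objectives.

    user_score(item)     -> predicted relevance to the user (0..1)
    platform_score(item) -> value of the item to the platform (0..1)
    platform_weight      -> how strongly platform goals tilt the ranking
    """
    def blended(item):
        return ((1 - platform_weight) * user_score(item)
                + platform_weight * platform_score(item))
    return sorted(items, key=blended, reverse=True)

# Toy usage: two items the user finds equally relevant,
# one of which monetizes better for the platform.
items = ["documentary", "sponsored_clip"]
user = {"documentary": 0.8, "sponsored_clip": 0.8}
platform = {"documentary": 0.1, "sponsored_clip": 0.9}

ranking = rank_items(items, user.get, platform.get)
# With any platform_weight > 0, "sponsored_clip" outranks the equally
# relevant "documentary".
```

The point of the sketch is that the user sees only the final ranking, not the objective that produced it: a positive user experience and a platform-serving ranking can coexist, and the user cannot tell the two apart.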
- Polarization of beliefs
A potential harm to societies and democracies is the emergence of information bubbles, which enable and strengthen a polarization of beliefs. Biased outcomes shape our identities, our view of the world, our social relationships, and, most importantly, the decisions we make. For instance, YouTube alone accumulates more than one billion hours of watch time per day, and 70% of this time comes from watching recommended videos. Smart algorithms instantaneously and simultaneously recommend millions of videos to users while testing how best to retain their attention. Once a user continues to view another video, the recommendation was successful, and the algorithm has controlled the user’s decision-making process. Under these carefully designed circumstances, humans may lose the ability to consciously choose between freely exploring the content on the platform or stopping. Free choice competes against smart algorithms that track and exploit individual preferences, while users cannot control, and do not fully understand, the purpose and functionality of these algorithms. If such an algorithm learns that conspiracy videos maximize user attention, it may continue to recommend such videos until even radical conspiracy theories become a kind of shared reality for users. What users consume affects how they think and behave. Even though users decide what they watch, the algorithms of YouTube, as well as those of Facebook and Twitter, have a large influence on which content, and which ideas and opinions, gets amplified or silenced.
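The learning loop described above, recommend, observe attention, adjust, can be illustrated with a minimal explore/exploit ("bandit") simulation. This is a pedagogical sketch under invented assumptions: the content categories, the simulated watch times, and the epsilon-greedy strategy are all illustrative, and none of this represents any platform's actual algorithm.

```python
import random

# Illustrative sketch (not any platform's real code): an epsilon-greedy
# bandit that learns which content category keeps users watching longest.
# Categories and watch-time numbers below are invented for the example.

random.seed(0)

categories = ["news", "music", "conspiracy"]
# Hypothetical average minutes watched when a category is recommended.
true_watch_time = {"news": 3.0, "music": 5.0, "conspiracy": 9.0}

estimates = {c: 0.0 for c in categories}  # learned watch-time estimates
counts = {c: 0 for c in categories}       # how often each was shown

def recommend(epsilon=0.1):
    """Mostly exploit the best-looking category, occasionally explore."""
    if random.random() < epsilon:
        return random.choice(categories)           # explore
    return max(categories, key=estimates.get)      # exploit

for _ in range(2000):
    c = recommend()
    reward = random.gauss(true_watch_time[c], 1.0)  # simulated session
    counts[c] += 1
    estimates[c] += (reward - estimates[c]) / counts[c]  # running mean

# The loop converges on whatever maximizes attention in this toy world,
# here the "conspiracy" category, regardless of its effect on the user.
best = max(categories, key=estimates.get)
```

The design point is that the objective is attention, not user welfare: nothing in the loop distinguishes a category that radicalizes from one that informs, as long as it keeps the session going.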