Algorithms are everywhere
In the digital age, algorithms are often praised as powerful tools that help people and organisations make better decisions and accomplish their objectives more effectively. It is typically assumed that they operate purely on facts and therefore produce unbiased, objective outcomes. However, there is mounting evidence that algorithms can produce outcomes that resemble the discriminatory tendencies of humans. For example, Amazon had to scrap an AI-driven automated recruiting tool after the system turned out to favour male over female applicants. And the algorithm behind Apple's credit card, launched in 2019, sparked a regulatory inquiry after it offered men much higher credit limits than women, even when the applicants were married and shared all their bank accounts.
Biases in automated advertising
Biased algorithms can also be observed in advertising. In an eye-opening study, computer science professor Latanya Sweeney investigated the role of race in Google ads. She searched for common African-American names and recorded the ads that appeared alongside the results, then did the same for names more common among white people. The searches for black-sounding names were more likely to generate ads offering to investigate possible arrest records. Beyond racial discrimination, other findings document gender biases. In our own study of online advertising, we investigated such effects in the context of STEM (science, technology, engineering and mathematics) careers. We sought to understand how internet and social media algorithms determine whether advertising content gets seen more by men or women, and why. Our results suggest that advertising algorithms are not gender-biased as such, but that the economic forces governing them can lead to unintentionally uneven outcomes.
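To see how price differences alone can skew delivery, consider a minimal sketch of the mechanism. The numbers, audience cap and function below are hypothetical illustrations, not the study's data or the platforms' actual delivery systems: they only show that an algorithm maximising reach per dollar will show a gender-neutral ad more to whichever audience is cheaper to reach.

```python
# Hypothetical illustration of the economic mechanism: a gender-neutral ad
# with a fixed budget, delivered by an algorithm that maximises impressions
# per dollar. If impressions for women cost more (because other advertisers
# bid more for their attention), delivery skews male without any gender
# preference in the advertiser's targeting or the algorithm's objective.

BUDGET = 1000.0                 # total campaign spend, arbitrary units (assumed)
AUDIENCE_CAP = 20000            # assumed maximum reachable impressions per group
COST_PER_IMPRESSION = {"women": 0.05, "men": 0.03}  # assumed market prices

def reach_maximising_delivery(budget, costs, cap):
    """Fill the cheapest audience first, then spend what remains on the next."""
    impressions = {}
    remaining = budget
    for group in sorted(costs, key=costs.get):  # cheapest audience first
        shown = min(cap, int(remaining / costs[group]))
        impressions[group] = shown
        remaining -= shown * costs[group]
    return impressions

print(reach_maximising_delivery(BUDGET, COST_PER_IMPRESSION, AUDIENCE_CAP))
# {'men': 20000, 'women': 8000} -- men see the ad 2.5 times as often, even
# though the advertiser targeted both groups equally.
```

In this toy setting the skew arises entirely from price differences in the ad market, which is the kind of unintended uneven outcome described above.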