Algorithms and discrimination in the workplace

Anna Ginès i Fabrellas

Over the last decade, algorithms have gradually become more and more important in our everyday lives. Their best known function is ordering the results of our Google searches or the photos that appear in our Instagram feed and other social media, but algorithms are at work in many other aspects of our daily lives. When you use Google Maps, an algorithm suggests the best route to your destination and, in doing so, effectively organizes the traffic of big cities; and depending on factors such as your IP location, your shopping history or the types of shops you have visited, other algorithms steer you toward one kind of product or another.

All these examples may seem banal and, in most cases, they are merely suggestions that leave the final decision in the hands of the individual, whether that is what to purchase, what to read or which route to take. But algorithms also influence our lives in ways that give us no opportunity to respond, and one of the clearest examples is their application in the workplace.

It has become clear that Artificial Intelligence (AI) systems are a tool that many companies can use to make people management decisions – hiring, distribution of tasks or remuneration, and even dismissal – in an automated way. These solutions, which enable companies to take these types of decisions quickly and efficiently, with the supposed added value of mathematical objectivity, carry a great risk of perpetuating and aggravating situations of discrimination that occur in the workplace.

AI systems are a tool that many companies can use to make people management decisions in an automated way

“There is increasing evidence that AI systems and algorithms not only fail to eliminate existing inequalities as if by magic, they also reproduce and even magnify them,” warns Anna Ginès i Fabrellas, associate professor of Labor Law at Esade and director of the Institute for Labor Studies and of the research project LABORAlgorithm. This project, funded by the Ministry of Science, Innovation and Universities, aims to analyze the use of algorithms and intelligent technology in labor relations and, in particular, the legal implications of profiling and automated decision-making in the workplace.

One of the most famous examples is the case of Gild a decade ago. This company proposed changing the criteria used to select programmers, one of the most sought-after jobs in Silicon Valley. Instead of only looking for graduates from the most prestigious universities or people with experience at the top companies in the sector, Gild suggested adding the candidates' “social capital” to the selection process through other variables, such as their contributions to collaborative platforms like GitHub, the tone of their tweets, or proxy variables such as the places they frequented with their friends and the websites they visited in their free time, including visits to a specific Japanese manga website. Up to 300 variables compiled from public databases were taken into account to identify the programmers with the greatest potential for Silicon Valley startups.

One of the reasons why algorithms reproduce situations of discrimination lies in the data with which they have been trained

However, bearing in mind that many of these variables are related to free-time activities, they may generate indirect discrimination against female programmers, who tend to have more care responsibilities. The occasionally high sexual content of Japanese manga is a further factor.
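To make the mechanism concrete, here is a minimal, purely hypothetical sketch (invented features and weights, not Gild's actual model) of how a candidate score that never looks at gender can still rank down anyone whose evenings and weekends are taken up by care work:

```python
# Hypothetical candidate-scoring sketch: gender is never an input,
# yet every feature is a proxy for unconstrained free time.

def social_capital_score(candidate: dict) -> float:
    """Toy score built only from invented 'free time' proxy variables."""
    return (
        2.0 * candidate["weekend_commits"]       # open-source work on weekends
        + 1.5 * candidate["late_night_visits"]   # niche websites browsed after hours
        + 1.0 * candidate["meetups_attended"]    # evening tech meetups
    )

# Two equally skilled candidates; one has evening and weekend care duties.
no_care_duties = {"weekend_commits": 12, "late_night_visits": 30, "meetups_attended": 8}
care_duties = {"weekend_commits": 2, "late_night_visits": 4, "meetups_attended": 1}

print(social_capital_score(no_care_duties))  # 77.0 -> shortlisted
print(social_capital_score(care_duties))     # 11.0 -> filtered out
```

The protected characteristic never appears in the formula; the disparate outcome emerges entirely from proxy variables correlated with it, which is precisely what indirect discrimination describes.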

Another infamous example is the response of Siri – Apple's assistant, which comes with a female name and voice by default – when she was called a “bitch”. The assistant used to reply: “I'd blush if I could.” Although the response was changed in 2019 to “I don't know how to reply to that,” the assistant continues to project the image of a docile and submissive woman who does not respond when insulted.

These biases in AI programming perpetuate existing forms of discrimination, but they also generate new ones, as in the case of the algorithms of platforms such as Uber and Deliveroo. These algorithms reward drivers and riders with greater availability during peak demand times, penalizing those who cannot work at specific times, even when they have justifiable reasons – circumstances beyond their control such as illness, disability or care commitments – or wish to exercise their right to strike.
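A minimal sketch of that logic, with invented weights rather than Uber's or Deliveroo's real formulas, shows why the penalty is blind to the reason for absence: the score only sees hours logged, never why a rider was unavailable.

```python
# Toy shift-priority score with invented weights: the algorithm only sees
# hours logged at peak demand, not why a rider was unavailable
# (illness, disability, care commitments or a strike).

PEAK_WEIGHT = 3.0
OFF_PEAK_WEIGHT = 1.0

def priority_score(peak_hours: float, off_peak_hours: float) -> float:
    return PEAK_WEIGHT * peak_hours + OFF_PEAK_WEIGHT * off_peak_hours

always_available = priority_score(peak_hours=20, off_peak_hours=10)    # 70.0
missed_peak_shifts = priority_score(peak_hours=12, off_peak_hours=10)  # 46.0

# The second rider drops down the queue for future shifts, whether the
# missed peak hours were due to sick leave or a lawful strike.
print(always_available, missed_peak_shifts)
```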

Data bias

One of the reasons why algorithms reproduce situations of discrimination lies in the data with which they have been trained. “If this data contains discrimination, the algorithm learns to discriminate,” Ginès warns, citing the case of Amazon, which trained a hiring algorithm and ultimately had to scrap it. With the aim of identifying the best candidates for jobs, Amazon analyzed the profiles of staff hired over the previous 10 years. However, since most of those hired were men, the algorithm automatically rejected women's resumes in the selection processes.
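The mechanism can be illustrated with a deliberately simplified, hypothetical sketch (the real Amazon tool and its features were never made public): a “model” that merely learns hire rates from ten years of biased decisions reproduces the disparity as a ranking rule, even though gender is never an explicit input.

```python
# Hypothetical illustration of "garbage in, garbage out": a rule fit on
# ten years of biased hiring decisions replays that bias as a prediction.
# The data and the resume feature are invented for the example.

historical_hires = [
    # (resume mentions a women's college or club, candidate was hired)
    (False, True), (False, True), (False, True), (False, False), (False, True),
    (True, False), (True, False), (True, True),
]

def learned_hire_rate(keyword_present: bool) -> float:
    """The 'trained model': simply the historical hire rate per group."""
    group = [hired for keyword, hired in historical_hires if keyword == keyword_present]
    return sum(group) / len(group)

print(learned_hire_rate(False))           # 0.8  -> resumes without the keyword are favored
print(round(learned_hire_rate(True), 2))  # 0.33 -> resumes with the keyword are down-ranked
```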

How should we treat cases of algorithmic discrimination from a legal point of view? Anna Ginès considers that the current legislation against discrimination is sufficient to address them without the need to create new categories, since they can be classified as indirect discrimination. “These are not new problems, but problems that have been amplified,” she observes.

The current legislation against discrimination is sufficient to address algorithmic discrimination without the need to create new categories, since it can be classified as indirect discrimination

Nevertheless, algorithmic discrimination presents new challenges that require a response. In this respect, Ginès sees a need to incorporate more gender and racial diversity into the teams that develop algorithms, to bridge the data gap so that AI does not reproduce existing discrimination, and to look for technical solutions to eliminate bias. She also considers it essential to improve transparency by prohibiting the use of non-transparent algorithms in the field of labor relations and by introducing external audits.

Ginès also calls for an ethical debate in which society must evaluate what decisions can and cannot be left to algorithms. “Highly invasive or discriminatory technology such as facial or emotion recognition technology should not be authorized in the field of employment,” the professor declares.

“Artificial intelligence systems and algorithms are presented as tools for improving productivity, competitiveness and efficiency. But we must ask ourselves the question, what do we want this productivity, competitiveness and efficiency for? If technology only serves to increase and refine discrimination, what do we want it for? We must use technology to confront the biggest challenges faced by humanity, and among these we find equality and non-discrimination,” Ginès concludes.

