Algorithms are driving inequality, not eliminating it

Have the discrimination and bias problems that plagued early AI really gone away?

Anna Ginès i Fabrellas

Discrimination in algorithms and AI has come a long way since the technology’s first iterations, but it took some truly awful incidents to set change in motion. In 2011, US users in poor mental health who told Apple’s voice assistant Siri they wanted to shoot themselves were given directions to a gun store. In 2015, Google Photos’ facial recognition software labeled two African American people as gorillas.

Apple has since partnered with the National Suicide Prevention Lifeline and now offers a helpline instead. Google issued a groveling apology, saying it was “appalled” by its unforgivable mistake.

Rather than eliminating inequalities, the use of algorithms is reproducing, systemizing and magnifying biases

But Esade’s Associate Professor Anna Ginès i Fabrellas says the problems that plagued early AI haven’t gone away. Writing in the Spanish journal Revista Trabajo y Derecho, the professor of law and director of the Institute for Labor Studies argues that the AI now permeating the workplace is responsible for discriminatory decisions that affect millions of lives.

Rather than eliminating inequalities by handing over decision-making to algorithms, she says, their use is reproducing, systemizing and magnifying gender, race, sexual orientation and disability biases and stereotypes at an alarming rate. 

Confirmation of bias

In 2018, research by Joy Buolamwini and Timnit Gebru found that facial recognition software was significantly more accurate at identifying white men than black women. And in a 2023 Bloomberg study of more than 5,000 images generated by the AI system Stable Diffusion, prompts for well-paid jobs mostly produced images of white-skinned people, while darker-skinned people appeared in images of lower-paid jobs.

Although 70 percent of US fast-food workers are white, the system depicted black people 70 percent of the time. And while 34 percent of judges in the United States are women, only three percent of the images Stable Diffusion generated when asked to illustrate the profession showed women.

A 2024 UNESCO analysis of gender, racial and sexual-identity stereotypes concluded that OpenAI's GPT-2 and ChatGPT systems and Meta's Llama 2 disproportionately associated women's names with words such as house, family, children and marriage, while men’s names were associated with business, executive, salary and career. When the systems were asked to complete sentences, 20 percent of the completions included sexist or misogynistic language, and up to 70 percent of the content generated about homosexual people was negative.

Rip it up and start again?

Ginès’ research shows that, while the appalling earlier incidents were addressed, unacceptable biases and stereotypes are re-emerging in new products and systems.  

This can be partly explained by the continuing lack of diversity in the tech sector, and more specifically in AI: women represent just 20 percent of those working in technical AI roles, 12 percent of AI researchers and six percent of software developers.

And the technology that fuels AI isn’t the result of peer-reviewed scientific progress — it’s developed by secretive tech companies that rush to send products to market without scientific scrutiny. Whether intentional or not, the lack of diversity in coding, analyzing and reviewing this technology is compounding systemic discrimination in all forms. 

According to Ginès, if any progress is to be made in reducing this discrimination, the unequal power structures that characterize the industry need to be dismantled and rebuilt. That means addressing the three main sources of algorithmic discrimination: bias in the data on which an algorithm is trained; bias in the variables the algorithm uses to make decisions; and bias in the correlations it identifies.

Built-in bias

The algorithmic management of work is presented as a fix for discrimination in the workplace by removing individual bias and presenting a mathematically objective form of decision-making. But it’s never going to achieve that when the discrimination is built in. 

Algorithms aren’t magical. They are trained on large volumes of data to identify and reproduce statistical patterns. And if the data or patterns contain bias, that bias is reproduced in the system’s outputs, which in turn generate more biased data for the next round of training, and on it goes.

Algorithms can’t distinguish between correlation and causation

Take Amazon, for example: its AI recruitment system was trained on hires from the previous 10 years, who were predominantly men. The algorithm learned that men were the best candidates for the firm and automatically discarded resumes from candidates with female names or characteristics.
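
To see the mechanism in miniature, here is a deliberately simplified Python sketch with entirely fabricated data (it is not Amazon’s actual system, just the general pattern): a classifier trained on “historical hires” in which qualified men were usually hired and qualified women rarely were ends up learning a penalty on the gender indicator itself.

# Illustrative sketch only: a toy classifier trained on biased "historical hires".
# All data is fabricated; this shows the mechanism, not any real company's system.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 2000
experience = rng.uniform(0, 10, n)      # years of experience
is_woman = rng.integers(0, 2, n)        # hypothetical gender indicator

# Biased historical labels: qualified men were usually hired, qualified women rarely were.
p_hire = np.where(is_woman == 1, 0.1, 0.7) * (experience > 4)
hired = rng.random(n) < p_hire

X = np.column_stack([experience, is_woman])
model = LogisticRegression().fit(X, hired)

print("coefficient on experience:", model.coef_[0][0])
print("coefficient on gender indicator:", model.coef_[0][1])  # strongly negative: the bias has been learned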

Bias in the variables an algorithm uses can also create systematic discrimination. The algorithm Deliveroo used to allocate shifts was designed to sanction workers who failed to show up for a previously reserved time slot. An Italian court ruled the practice discriminatory, since it did not allow workers to justify absences due to strike action, caring responsibilities or ill health.
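
The design flaw is easy to sketch. The hypothetical snippet below scores workers purely on attended versus reserved slots; because there is no field for a justified absence, a striking or sick worker is scored exactly like a no-show. It illustrates the general problem, not Deliveroo’s actual code.

# Hypothetical sketch of a reliability score that treats all absences alike.
# Without a "justified absence" category, strikes, care duties and sickness
# are scored exactly like no-shows.
from dataclasses import dataclass

@dataclass
class Worker:
    name: str
    reserved_slots: int
    attended_slots: int   # the only thing the score sees

def reliability_score(w: Worker) -> float:
    # No way to flag a justified absence: any missed slot lowers priority.
    return w.attended_slots / w.reserved_slots if w.reserved_slots else 1.0

workers = [
    Worker("on strike one day", 10, 9),
    Worker("skipped a shift", 10, 9),
]
for w in workers:
    print(w.name, reliability_score(w))   # identical scores, very different reasons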

Algorithmic discrimination can also be rooted in the statistical correlations a system relies on, known as correlation bias or proxy discrimination. Algorithms can’t distinguish between correlation and causation. If a system has been trained on data showing that employees who live closer to the workplace stay in their jobs longer, it will favor candidates from those areas. That fails to account for the lack of affordable housing, the need for family support and various other factors that have nothing to do with the ability to do the job.
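
A short, fabricated example makes the proxy effect concrete: the model below never sees the protected attribute, only commute distance, yet because distance is correlated with group membership, its predictions systematically disadvantage one group.

# Illustrative sketch of proxy discrimination with fabricated data.
# The model never sees the protected attribute, but "distance to work"
# is correlated with it, so predictions still differ by group.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)
n = 5000
group = rng.integers(0, 2, n)                   # hypothetical protected attribute (never used as a feature)
distance_km = rng.normal(5 + 10 * group, 2, n)  # the disadvantaged group happens to live further away

# Historical "stayed over a year" labels driven purely by commute length.
stayed = rng.random(n) < 1 / (1 + np.exp(0.3 * (distance_km - 10)))

model = LogisticRegression().fit(distance_km.reshape(-1, 1), stayed)
pred = model.predict_proba(distance_km.reshape(-1, 1))[:, 1]

print("mean predicted retention, group 0:", round(pred[group == 0].mean(), 2))
print("mean predicted retention, group 1:", round(pred[group == 1].mean(), 2))  # systematically lower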

A losing battle?

Far from removing bias, AI has simply automated it. The decisions made by these systems impact millions of lives, often with little or no human intervention.  

Non-native speakers or those with regional accents are penalized by language-based tests; people whose faces don’t match the gender on their passport are prevented from completing automated processes; neurodivergent people whose AI-scanned facial expressions aren’t the accepted norm for ‘interested’ or ‘professional’ are rejected for jobs. The examples are endless.  

Algorithmic discrimination adds multiple new dimensions to discrimination, systematizes them and amplifies them at breakneck speed. 

Progress has been made. In the EU, the AI Act classifies AI systems according to their risk and imposes different obligations on providers and deployers of AI systems, including a risk management system, data governance requirements to guarantee the quality and relevance of the data sets used, and transparency obligations to ensure that a system’s outputs are interpretable.

Nevertheless, the scheme introduced by the AI Act rests on providers’ self-declaration of conformity, and only time will tell whether that is enough to guarantee unbiased algorithms. According to Ginès, external audits of AI systems used in high-risk areas such as the labor market would do more to ensure fair and just outcomes.

All written content is licensed under a Creative Commons Attribution 4.0 International license.