By Isabella Galeano

For the first time in history, a judge has ruled that an algorithm violates citizens’ human rights. It happened in the Netherlands in early February, when a court ordered the Dutch Government to stop using an AI-based system designed to predict the likelihood of a citizen committing tax evasion.

The algorithm in question is part of a system designed by the Dutch Ministry of Social Affairs and Employment, which combines a wide range of confidential personal data held by the Dutch Government – from employment records and subsidies received to personal debts, level of education and housing history – to calculate a risk score for each individual. The Government was then expected to open an investigation, within two years, into any person the algorithm identified as likely to commit tax evasion.

Several citizens’ rights protection associations argued that the new system had been rolled out “selectively” in predominantly low-income areas and that this constituted “a surveillance system that disproportionately subjected the poorest citizens to intrusive scrutiny”. The courts have now agreed with them and ordered the new system to be withdrawn immediately.

This is probably the case that has gone furthest down the legal road, with a ruling that clearly highlights the threats these algorithms pose to the basic principles of justice and equality before the law, but it is not the only one. The most far-reaching system, in terms of both the number of people and the technology involved, is probably the social credit system implemented in China.

Launched experimentally in 2009, this system consists of several algorithms fuelled by vast amounts of data gathered about the everyday activity of Chinese citizens. It assigns each citizen a score that determines whether their behaviour is regarded as ideal: those with high scores are rewarded with easier access to loans and freedom of movement, while those with low scores can be prevented from buying plane or train tickets. All this has been integrated with a facial recognition system running on CCTV cameras, whose effectiveness was demonstrated when it was used to enforce the lockdown imposed during the coronavirus outbreak at the beginning of this year.

CCTV cameras fitted with facial recognition technology to monitor the population are not found only in authoritarian regimes such as China. As a case in point, the London Metropolitan Police operates a van fitted with cameras that perform live facial recognition of passers-by. It has been used twice, most recently in late February, with a view to reducing serious crime. The van is driven around the streets, identifying the pedestrians it sees, and when the system flags anyone wanted by the police, officers arrest them. The problem is that the algorithm’s error rates vary with age, race and gender, turning what is presented as a neutral tool into a discriminatory one, as explained in this article in The Technolawgist.
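
A simple way to make that claim measurable is to compare the system’s error rates across demographic groups. The sketch below is a purely hypothetical Python illustration, with invented group names and numbers, of what such a per-group false positive check might look like; it does not use or describe the Met’s actual data or software.

```python
# Hypothetical illustration only: given labelled trial results for a
# face-matching system, compute the false positive rate separately for each
# demographic group. Groups and numbers are invented for this sketch.

def false_positive_rate(results):
    """results: list of (system_said_match, person_was_actually_wanted) booleans."""
    innocent = [r for r in results if not r[1]]      # people not wanted by police
    wrongly_flagged = [r for r in innocent if r[0]]  # but flagged as a match anyway
    return len(wrongly_flagged) / len(innocent) if innocent else 0.0

# Invented outcomes: each tuple is (flagged as match, actually wanted).
trials = {
    "group_a": [(True, False)] * 8 + [(False, False)] * 992,
    "group_b": [(True, False)] * 30 + [(False, False)] * 970,
}

for group, results in trials.items():
    print(f"{group}: false positive rate {false_positive_rate(results):.1%}")
```

If the rates differ markedly between groups, the system is not merely inaccurate; it is inaccurate in a way that falls more heavily on some people than on others.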

These new technologies, and their application in countless areas of citizens’ public and private lives, are creating a growing number of legal conflicts, and lawyers need to understand how these algorithms work and what legal implications they carry in an increasingly globalised society. The main reason self-driving cars are not yet on the road is not the technology, which is already available, but the difficulty of deciding how the algorithm should be trained to take certain decisions in unexpected situations.

Programmers are essential not only for writing these algorithms but also, implicitly, for deciding how they behave, yet they cannot do this by themselves. They must accept that we all have biased outlooks and that, if we are not careful, those biases will be reflected in any algorithm we programme. In this respect, it can be very helpful to build teams with a variety of profiles, so that different points of view and outlooks offset each other and produce less biased algorithms.
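
To make this concrete, here is a minimal, entirely hypothetical Python sketch of how bias baked into historical data can be inherited by a risk-scoring algorithm and surfaced by a simple audit. The group names, rates and threshold are invented for illustration and do not describe the Dutch system or any real model.

```python
# Hypothetical sketch: a "model" whose risk scores simply echo how often each
# group was flagged in past inspections. If one group was historically
# over-scrutinised, the algorithm reproduces that bias. All values invented.
import random

random.seed(0)

def simulate_group(group, n, historical_flag_rate):
    """Give each person a score reflecting the group's past flag rate plus noise."""
    return [{"group": group, "score": historical_flag_rate + random.gauss(0, 0.05)}
            for _ in range(n)]

# Assume one group was inspected far more often in the past.
population = (simulate_group("low_income", 1000, historical_flag_rate=0.30) +
              simulate_group("high_income", 1000, historical_flag_rate=0.10))

THRESHOLD = 0.20  # anyone scoring above this is selected for investigation

def selection_rate(records):
    return sum(1 for r in records if r["score"] >= THRESHOLD) / len(records)

rates = {g: selection_rate([r for r in population if r["group"] == g])
         for g in ("low_income", "high_income")}

# Disparate-impact style audit: compare how often each group gets selected.
print("Selection rates:", rates)
print(f"Ratio (high/low): {rates['high_income'] / rates['low_income']:.2f} "
      "-- values far below 1.0 suggest one group is being singled out")
```

The specific numbers matter less than the habit they illustrate: measuring how a system treats different groups before it is deployed, rather than discovering the disparity in court afterwards.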

However, legislation is also needed to regulate these algorithms effectively. Because this is a global issue, national restrictions have limited impact, although measures taken by a country such as the USA, home to the world’s most important technology companies, carry particular weight. In this respect, the US Congress is discussing the Algorithmic Accountability Act, a new law that would oblige big companies to audit their algorithms if they meet certain requirements, e.g. annual turnover of more than US$50 million, more than one million users, or dealing in data management. If this law is passed, it will affect the algorithms that big companies use to sell their products, suggest news items, insert advertising and even decide who should be promoted. Amazon, for example, is starting to delegate this type of decision to algorithms that are not always free of prejudice.

Just as big banks have to be audited, should algorithms not be scrutinised too? Algorithm-based AI is increasingly present in our lives and has an ever greater impact in areas we could scarcely have imagined. If we do not want these algorithms to perpetuate the prejudice and discrimination we suffer as a society, and to aggravate existing inequalities, we must seize the opportunity to make them fairer by encouraging diversity and developing regulations that govern their application effectively.

All written content is licensed under a Creative Commons Attribution 4.0 International license.