How to move towards fairer machine learning

Irene Unceta

In the last few decades, machine learning has become increasingly popular as a means to support decision-making. From banks and insurance companies to internet service providers, dentists and even the supermarkets where we do our weekly shopping, this technology is ever more pervasive in our lives.  

Its ubiquitous presence, however, is not limited to the private realm. Public institutions have also begun using this technology to improve a multitude of processes: preventing crime, detecting tax fraud and awarding subsidies and grants, among others. 

To a large extent, machine learning’s success can be explained by its promise of greater consistency and the resulting perception of greater objectivity. In this sense, claims abound about how using machine learning can help us “make better decisions”. 

Despite these ambitious promises, a significant portion of the scientific community has been warning for years about the risks of delegating decisions to automated systems. There is overwhelming evidence today demonstrating that automated decision-making systems based on machine learning can actually increase inequality in those areas in which they’re applied, including at work, in our homes and in the legal system. 

The weight of history 

A key factor to bear in mind about machine learning is the concept of legacy. Systems based on machine learning capture realities shaped almost exclusively by the data used to train them. Given that these data reflect past human decisions, the systems inherit any bias underlying those decisions.  

In fact, recent studies have only underscored this point, demonstrating that, in the absence of control mechanisms, machine-learning models reproduce past patterns of discrimination, including sexist, racist and homophobic prejudices, among others. 

In 2015, for example, Google users complained that the company’s app to automatically tag photos mistakenly classified Black people as gorillas. That same year, others found that the facial recognition software incorporated into Nikon cameras assumed that people of Asian descent were, by default, blinking.  

The following year, a well-known study revealed that the software used in the US legal system to assess the risk of recidivism among convicts was twice as likely to incorrectly assign a high probability of reoffending to Black convicts. That same software was also twice as likely to erroneously assign a low probability of recidivism to White convicts. 

In addition, numerous other studies have demonstrated that women are less likely than men to be shown ads for well-paying jobs when using Google. Similarly, image searches for highly qualified jobs primarily return pictures of men. 

In most cases, this bias is not introduced intentionally by programmers. It stems from the historical data themselves, which encode the patterns of past decisions. 

Evidence such as that provided by the previous examples underscores the weight the past has on the predictions machine-learning systems make today. However, these systems can still be both objective and effective. To realize this potential, we have to analyze what they have inherited from the data we use to train them, which in turn implies adopting specific measures to ensure that machine learning is used fairly and ethically. 

Fair machine learning as the choice of the future 

Ensuring that machine learning is fair entails reconsidering how we use this technology and designing tools that are suited to the different contexts in which they are used. For this, we need to clearly understand two concepts: 

On the one hand, we have to understand that a model’s performance will only be optimal when that model is also fair. There are numerous criteria we can use to define a model’s fairness. Enforcing at least one of these criteria will, in most cases, cause the model’s predictive performance, its ratio of correct to incorrect predictions, to drop. However, we should not see this as a step backwards. On the contrary, it pushes us to explore solutions that produce a high percentage of correct predictions while also being fair. 
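
To make this trade-off concrete, here is a minimal, purely illustrative sketch in Python. It uses invented scores and labels for two groups and just one of the many possible fairness criteria, demographic parity (equal rates of positive predictions across groups); the group-specific thresholds are hypothetical choices, not a recommendation.

```python
# Hypothetical illustration of the accuracy-versus-fairness trade-off:
# synthetic scores for two groups, demographic parity as the fairness criterion.
import numpy as np

rng = np.random.default_rng(0)
n = 10_000
group = rng.integers(0, 2, size=n)                       # 0 = group A, 1 = group B
# Invented scores: group B is shifted lower, mimicking inherited historical bias.
score = rng.normal(loc=0.6 - 0.1 * group, scale=0.15, size=n).clip(0, 1)
label = (rng.random(n) < score).astype(int)               # synthetic ground truth

def evaluate(threshold_a, threshold_b):
    """Apply (possibly group-specific) thresholds; return accuracy and parity gap."""
    threshold = np.where(group == 0, threshold_a, threshold_b)
    pred = (score >= threshold).astype(int)
    accuracy = (pred == label).mean()
    parity_gap = abs(pred[group == 0].mean() - pred[group == 1].mean())
    return accuracy, parity_gap

# A single shared threshold maximises raw accuracy but leaves a large parity gap;
# relaxing the threshold for group B shrinks the gap at a small cost in accuracy.
print("shared threshold     :", evaluate(0.50, 0.50))
print("adjusted for group B :", evaluate(0.50, 0.45))
```

Which criterion to enforce, and how much accuracy to trade for it, remains a context-dependent decision.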

On the other hand, we also have to be aware of intentionality: Creating fair machine-learning models requires our express and explicit intention to do so. This intentionality cannot be solely limited to the realm of discourse; it also requires actions that aim to mitigate the effects of the inequality found in practice. 

Action 1: Representing diversity 

Firstly, we have to ensure that different groups are fairly represented in the databases we use to train these systems. The data have to encompass diverse realities and be heterogeneous. When they are not, we have to take measures to rebalance the data, or stop using certain data altogether, so that the resulting predictions are not affected. 
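
As a minimal sketch of what such rebalancing can look like in practice, the following Python snippet, built on invented data, oversamples an underrepresented group until both groups contribute equally to training. Reweighting examples in the loss function would be an equally valid alternative; neither is prescribed by the text above.

```python
# Illustrative rebalancing of a training set in which group 1 is underrepresented.
import numpy as np

rng = np.random.default_rng(1)
n = 1_000
group = (rng.random(n) < 0.1).astype(int)     # group 1 makes up only ~10% of the data
features = rng.normal(size=(n, 3))            # stand-in for whatever variables we collect

def oversample_minority(group, rng):
    """Return row indices that resample the minority group up to the majority's size."""
    idx_majority = np.flatnonzero(group == 0)
    idx_minority = np.flatnonzero(group == 1)
    resampled = rng.choice(idx_minority, size=idx_majority.size, replace=True)
    return np.concatenate([idx_majority, resampled])

balanced_idx = oversample_minority(group, rng)
balanced_features = features[balanced_idx]    # train on these rows (and their labels)
print("group counts before:", np.bincount(group))
print("group counts after :", np.bincount(group[balanced_idx]))
```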

Ensuring this representation also means providing access to people from groups which have traditionally been underrepresented in roles requiring a high degree of technical training. The tech industry has tended to hire professionals who are primarily white men. Promoting data diversity starts with fostering diversity among the people who actually gather, store and analyze the data.  

Action 2: Use of sensitive data 

A second key action is assessing the suitability of including sensitive data among the model’s variables. This, however, presents a quandary. At times, the data may reveal genuine differences between the values observed for different groups.  

For example, people residing in primarily Black neighborhoods may have greater problems meeting their loan payment obligations. It’s important to be aware that such differences can mask other issues, such as unequal access to quality education, a lack of upward mobility or problems finding well-paying jobs.  

In other words, the observed correlation between someone’s skin color and their propensity to default on loans doesn’t necessarily imply a causal relation between the two. And even if it did, it’s important to determine whether we should use that correlation. If we aim to ensure equality, should our models reflect the differences we observe between groups or, on the contrary, should those differences be corrected?  

With respect to this last point, the discussion on the appropriateness of including sensitive data in training databases is important. That is, we need to ask when and to what end these data should be included. There are numerous examples in which prohibiting access to sensitive data to avoid possible bias has had the opposite effect to the one originally intended. 

The US legal system expressly prohibits using sensitive variables when training machine-learning models. For example, this means that, when companies create models to identify the ideal internal candidates for promotions, they cannot include information about their employees’ gender, skin color, ethnicity, religion or sexual orientation.  

Similarly, when carrying out risk assessments for clients requesting mortgages, banks cannot gather protected data regarding those clients. However, explicitly prohibiting access to that information does not keep that data from being included in the models. Zip codes, for example, can be a good indicator of people’s skin color. In the same way, information about people’s careers or their membership in certain groups or associations can be a good indication of their gender.  

In 2018, an article in Reuters reported that a candidate recruitment tool created by Amazon systematically discriminated against women. This occurred despite candidates’ gender not being included in the model’s training process. The model did, however, incorporate other variables, such as where candidates had studied or the university clubs to which they belonged. It thus penalized applications that included terms such as “women’s”, as in “captain of the women’s chess team”, or that came from candidates who had studied at women’s colleges.  

Examples such as these demonstrate how ineffective it can be to simply prohibit the use of sensitive data when training machine-learning models. Unless we actively ensure otherwise, that data will remain encoded in, and accessible through, other variables. At the same time, having access to sensitive data is often essential for measuring the inequality found in a model’s predictions.  
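
One basic check an auditor or developer can run, sketched below with entirely invented data, is to test how well the remaining variables predict the removed sensitive attribute. If a simple model recovers it with high accuracy, the attribute is still encoded in the data through proxies, such as the hypothetical zip-code feature used here.

```python
# Probe for proxy leakage: can the sensitive attribute be predicted from the
# variables that remain after it has been removed? (All data here is invented.)
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(2)
n = 5_000
sensitive = rng.integers(0, 2, size=n)                   # the removed protected attribute
zip_code = sensitive * 10 + rng.integers(0, 3, size=n)   # invented proxy correlated with it
income = rng.normal(50_000, 10_000, size=n)              # an unrelated feature
X = np.column_stack([zip_code, income])

# Accuracy far above 0.5 means the sensitive attribute leaks through the proxies.
probe = make_pipeline(StandardScaler(), LogisticRegression(max_iter=1000))
leakage = cross_val_score(probe, X, sensitive, cv=5).mean()
print(f"sensitive attribute recoverable with accuracy ~ {leakage:.2f}")
```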

Measuring inequality 

Measuring inequality also requires that the processes adopted to create automated decision-making models be transparent. This involves a series of conditions: 

  • Data-gathering mechanisms have to be traceable, and the variables they incorporate have to be readily available.  
  • Systems’ designers have to detail their data-scrubbing techniques in accessible public documents.  
  • They have to provide information about the type of model used and, when needed, the values of the parameters established during the training process. 
  • Designers need to identify the criteria they used to ensure the model’s fairness and justify why they chose those criteria.  
  • And, lastly, it must of course be possible to interact with the model to determine the appropriateness and fit of its predictions in different cases and contexts.  

In sum, to measure inequality, we have to be able to audit machine-learning systems. 
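
As one example of what such an audit can measure, the sketch below, using invented predictions and outcomes, compares false positive rates across two groups, the kind of disparity highlighted by the recidivism study mentioned earlier. Other metrics would be equally legitimate depending on the fairness criterion chosen.

```python
# Group-wise disparity check on a model's predictions (all arrays are invented).
import numpy as np

def false_positive_rate(pred, truth):
    """Share of truly negative cases that the model flags as positive."""
    negatives = truth == 0
    return (pred[negatives] == 1).mean()

def fpr_gap(pred, truth, group):
    """False positive rate per group and the absolute gap between the two groups."""
    rates = [false_positive_rate(pred[group == g], truth[group == g]) for g in (0, 1)]
    return rates, abs(rates[0] - rates[1])

pred  = np.array([1, 0, 1, 1, 0, 1, 0, 0, 1, 1])   # model predictions
truth = np.array([1, 0, 0, 1, 0, 0, 0, 1, 1, 0])   # observed outcomes
group = np.array([0, 0, 0, 0, 0, 1, 1, 1, 1, 1])   # group membership

per_group, gap = fpr_gap(pred, truth, group)
print(f"false positive rate per group: {per_group}, gap: {gap:.2f}")
```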

Audits 

The auditing process requires those developing machine-learning models to give people outside their organizations access to their systems so they can examine and evaluate the models. This serves to ensure that their design meets all the necessary requirements and that their use in the specific context contemplated is not harmful. 

In addition to the obvious obstacles to these audits, we have to consider the opacity of the models themselves. Models based on machine learning have typically been developed with the sole aim of maximizing predictive performance.  

They are thus notably effective in this area but, as a result, also markedly complex and increasingly opaque, making their assessment all the more difficult. Even when auditors have access to all the information about a given system, they may not be able to understand it and, consequently, to assess the logic underlying its predictions. 

In this respect, we should not be asking whether we always have to resort to complex models, but rather when using those models is reasonable. When a simpler, easier-to-interpret model meets our established performance needs, perhaps we should refrain from developing the more complex solution.  
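
The decision rule being suggested can be sketched as follows; the dataset, the two candidate models and the tolerance of one accuracy point are all illustrative assumptions rather than a prescription.

```python
# Prefer the interpretable model unless the complex one is meaningfully better.
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

X, y = load_breast_cancer(return_X_y=True)

simple = make_pipeline(StandardScaler(), LogisticRegression(max_iter=1000))
complex_model = GradientBoostingClassifier(random_state=0)

simple_acc = cross_val_score(simple, X, y, cv=5).mean()
complex_acc = cross_val_score(complex_model, X, y, cv=5).mean()

tolerance = 0.01  # accept up to one point of accuracy in exchange for interpretability
choice = "simple" if complex_acc - simple_acc <= tolerance else "complex"
print(f"simple: {simple_acc:.3f}, complex: {complex_acc:.3f} -> use the {choice} model")
```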

A preventative focus 

Lastly, it’s important to bear in mind that the mechanisms required to ensure the responsible use of machine-learning systems should not be relegated solely to the realm of audits. We of course have to be able to identify and report system deficiencies and ensure accountability when they have a negative impact. But we also have to be able to propose solutions that mitigate the potentially adverse effects of those deficiencies before they materialize. In this sense, we have to adopt a focus that is both preventative and reactive. 

For this, we have to close the gap between theoretical proposals for promoting fair machine learning and our practical needs. What minimum requirements do databases have to meet to ensure their representativeness? When is it relevant to include data about protected attributes, and how do we safeguard those data? What criteria should we use to assess inequality in each context? And how should those criteria be translated into mathematical form so that models can be trained to satisfy them? 

Responding to all these questions requires adopting a broad perspective, one that is sensitive to different realities and capable of reflecting on complex concepts, thinking about the long-term impact of decisions made in the present and identifying clear objectives that can be fulfilled in the future. Consequently, it requires a debate between data scientists, engineers, management professionals, sociologists, legal experts and philosophers.  

The advances made in the machine-learning field have opened the door to a multitude of possibilities. It is up to us, however, to ensure machine-learning models have a positive impact on people’s lives. It is worth remembering that, even if these models are in charge of making predictions, people are the ones who make the corresponding decisions. 

The full presentation can be viewed here.

Presentation organized in the context of the LABORAlgorithm project funded by FEDER/Ministerio de Ciencia, Innovación y Universidades – Agencia Estatal de Investigación / Proyecto PGC2018-100918-A-100.

All written content is licensed under a Creative Commons Attribution 4.0 International license.