Machine Learning, Public Policy and Social Welfare: Challenges and Solutions

As machine learning becomes commonplace, the legal and ethical implications of its mass adoption are coming under increasing scrutiny. While the deployment of machine learning models is expected to bring economic and social benefits, the complexity of the technology and some of its applications also generate concerns.

Pol Borrellas and Irene Unceta, from the Department of Operations, Innovation and Data Sciences at Esade, have examined, from an economic point of view, whether the free use of machine learning models maximises aggregate social welfare or whether regulations are required. Their research, which appeared in the international peer-reviewed journal Entropy, addresses four main challenges affecting machine learning: interpretability, fairness, safety and privacy.

“Given the increasing impact of automatic decision-making systems on our everyday lives, this work aims to understand the potential harms derived from the use of this technology and to propose public policy tools to optimally prevent them,” they explain. According to them, the objective of regulations should not be to minimise the risk of damages and attacks or to ensure that all models are interpretable and fair, which would make some applications unviable, but to guarantee that the technology’s aggregate net effect on social welfare is positive and as large as possible.

The rise of machine learning

Machine learning is a subfield of artificial intelligence (AI); it uses mathematical algorithms to identify complex patterns in data and perform tasks in an automated way.

“The most distinctive attribute of machine learning is its ability to learn without being explicitly programmed,” explain Borrellas and Unceta. “Learning in this context is not achieved by manually writing decision rules but by providing algorithms with examples from which patterns are uncovered.”

There are three categories of machine learning. Supervised learning is the most prevalent type; it is widely used to identify the relationship between a target variable, which represents a phenomenon of interest (e.g., the price of an option, the probability of loan default or the likelihood of a purchase), and a set of features (e.g., the historical prices of the option, the current interest rate or the market volatility), based on a given collection of examples. Unsupervised learning involves no such target variable. Instead, its aim is to identify structures in the data (e.g., segmenting a client portfolio) by analysing the relationships between different data points.
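To make the distinction concrete, the sketch below contrasts the two settings using scikit-learn; the synthetic data, feature names and labelling rule are assumptions invented purely for illustration and do not come from the paper.

```python
# A minimal sketch contrasting supervised and unsupervised learning.
# The synthetic data and the labelling rule are illustrative assumptions.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.cluster import KMeans

rng = np.random.default_rng(seed=0)

# --- Supervised learning: features plus a known target variable ---
# Features: e.g., two financial indicators for 200 hypothetical borrowers.
X = rng.normal(size=(200, 2))
# Target: whether each borrower defaulted (a rule invented for this demo).
y = (X[:, 1] - X[:, 0] + rng.normal(scale=0.5, size=200) > 0).astype(int)

clf = LogisticRegression().fit(X, y)   # learn the feature-target relationship
print(clf.predict_proba(X[:3]))        # class probabilities, first 3 borrowers

# --- Unsupervised learning: no target variable, only structure ---
# e.g., segmenting a client portfolio into three groups.
segments = KMeans(n_clusters=3, n_init=10).fit_predict(X)
print(segments[:10])                   # cluster label assigned to each client
```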

Finally, reinforcement learning uses a ‘trial and error’ approach, widely used in gaming. In this approach, algorithms interact with their environment under a set of restrictions and search for the actions that maximise the reward obtained.
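As a concrete illustration, here is a minimal sketch of tabular Q-learning, one of the simplest reinforcement learning algorithms, applied to a toy environment; the environment, rewards and hyperparameters are assumptions made for the example and are not drawn from the paper.

```python
# A minimal tabular Q-learning sketch on a toy 5-state chain, where the
# agent earns a reward by reaching the rightmost state. Environment,
# rewards and hyperparameters are illustrative assumptions only.
import numpy as np

n_states, n_actions = 5, 2                 # actions: 0 = left, 1 = right
Q = np.zeros((n_states, n_actions))        # estimated value of each action
alpha, gamma, epsilon = 0.1, 0.9, 0.2      # learning rate, discount, exploration
rng = np.random.default_rng(seed=0)

def step(state, action):
    """Move along the chain; reaching the last state yields a reward of 1."""
    nxt = min(state + 1, n_states - 1) if action == 1 else max(state - 1, 0)
    return nxt, float(nxt == n_states - 1)

for _ in range(500):                       # episodes of trial and error
    state = 0
    while state != n_states - 1:           # an episode ends at the goal
        if rng.random() < epsilon:         # explore: try a random action
            action = int(rng.integers(n_actions))
        else:                              # exploit: best known action,
            best = np.flatnonzero(Q[state] == Q[state].max())
            action = int(rng.choice(best)) # breaking ties at random
        nxt, reward = step(state, action)
        # Nudge the value estimate toward reward + discounted future value.
        Q[state, action] += alpha * (reward + gamma * Q[nxt].max() - Q[state, action])
        state = nxt

print(Q[:-1].argmax(axis=1))   # learned policy: move right in every state
```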

Artificial neural networks are being increasingly used to inform many high-stakes decisions

Nowadays, many commercial applications of machine learning are based on artificial neural networks (ANNs). These models, originally inspired by biological neural networks, learn by transforming the data in successive steps. Today, these techniques are collectively referred to as deep learning and are able to solve very complex tasks with a high degree of accuracy. As a result, they are being increasingly used to inform many high-stakes decisions. But the accuracy of these complex algorithms comes at a cost.

“These algorithms are also more difficult to interpret,” explain Borrellas and Unceta. “This makes it challenging to verify and validate them, provide explanations about their outcomes, and make sure that their behaviour is fair. Plus, they are not free from other issues affecting machine learning, such as those related to safety and privacy.”
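To give a flavour of this trade-off, the sketch below compares a linear model with a small neural network on a toy non-linear task using scikit-learn; the task, architecture and settings are illustrative assumptions, not taken from the paper.

```python
# A toy non-linear (XOR-like) task: illustrative data, not from the paper.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(seed=0)
X = rng.uniform(-1, 1, size=(500, 2))
y = ((X[:, 0] > 0) ^ (X[:, 1] > 0)).astype(int)   # XOR of the two signs

# A linear model cannot capture the interaction between the two features...
linear = LogisticRegression().fit(X, y)
print(f"linear model accuracy:   {linear.score(X, y):.2f}")  # near chance

# ...whereas a small network, transforming the data through two hidden
# layers in successive steps, fits it easily, at the cost of learned
# weights that are much harder to interpret than a single linear rule.
net = MLPClassifier(hidden_layer_sizes=(16, 16), max_iter=2000,
                    random_state=0).fit(X, y)
print(f"neural network accuracy: {net.score(X, y):.2f}")     # close to 1.00
```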

The potential lack of interpretability, fairness, safety and privacy of machine learning is placing increasing pressure on policymakers to regulate the use of the technology. But, say Borrellas and Unceta, “Before deciding to regulate the technology and designing laws for this purpose, it is essential to consider the economic implications of these challenges.”

The potential lack of interpretability, fairness, safety and privacy of machine learning is placing increasing pressure on policymakers to regulate the use of the technology

The economic incentives of machine learning vary for public and private organisations. For private organisations, increased efficiency, improved user experience, enhanced targeting and customised results all help to increase profits. From a social welfare standpoint, users receive value in the form of better products and services. “The derived gains in productivity are expected to eventually lead to lower prices, which increases the purchasing power of customers,” they add.

For public institutions, improving public services and allocating resources more efficiently are the main drivers. Helping police departments monitor and prevent crime, detecting tax fraud, and developing more impartial (and fairer) decision-making processes, such as the models used to inform judicial decisions, are some of the use cases of machine learning for public institutions identified by Borrellas and Unceta.

Policy proposals

“We analysed the potential lack of interpretability, fairness, safety and privacy of the technology,” they explain. “The aim was to determine, from a positive economics point of view, whether the free use of machine learning maximises aggregate social welfare or whether, on the contrary, regulations are required. In cases where restrictions should be enacted, we outlined possible policies.”

To achieve an optimal level of interpretability and fairness, current tort and anti-discrimination laws should be adapted to the specificities of machine learning, they suggest.

“Regarding tort law, we propose a combination of fault-based and strict liability and the reversal of the burden of proof under some circumstances,” they say. “In the case of anti-discrimination laws, we propose two policies: the publication of industry standards and good practices regarding algorithmic fairness, and the modification of the current legal procedures to ensure that juries have the appropriate information and knowledge to examine these kinds of cases.”

Existing market solutions can, they add, encourage machine learning operators to equip models with the levels of security and privacy that maximise social welfare. “This happens naturally because machine learning operators are incentivised to invest resources to optimally meet market demands,” they explain.

To achieve an optimal level of interpretability and fairness, current tort and anti-discrimination laws should be adapted to the specificities of machine learning

“In other words, if security and privacy-related threats are perceived as too severe for users and consumers to adopt the models, operators are forced to find solutions. Consequently, there is no need for any additional policy; the current incentives appear to be appropriate to make machine learning models optimally safe and privacy-preserving.”

Companies with abnormal market power, as well as public institutions that have the power to enforce the use of a model, have no incentive to adapt their offerings to the preferences of clients and users. “In these specific cases, enforcing a right to explanation could optimise aggregate social welfare,” say Borrellas and Unceta.

“Public institutions around the world are starting to study ways to regulate machine learning,” they conclude. This study aims to serve as a stimulus to encourage further research on additional ways to efficiently deal with the challenges of machine learning and mitigate the inefficiencies that externalities and monopolistic structures may generate.

All written content is licensed under a Creative Commons Attribution 4.0 International license.