Artificial intelligence through the lens of behavioral economics

How does AI impact the human brain, decision-making, and how we interact with others?

Anna Bayona

This article is part of Esade Economic and Financial Report #33: AI’s Moment. 


In 1950, the English mathematician Alan Turing asked in a paper, "Can machines think?" Elsewhere, in slightly different terms, he asked whether machines could exhibit intelligent behavior. Since then, these questions have revolutionized the history of science and served as the seed for the development of artificial intelligence (AI). In the intervening years, huge strides have been made in AI, to the point where computers can now learn, reason, use language, perceive, and solve problems: all functions usually associated with human intelligence. Over the last ten years, the amount of data and processing power used to train AI systems has increased by a factor of 100 million. This vast amount of data, coupled with faster computers and new advances in algorithms, has taken AI development to new heights. Its uses, which are already having a significant impact on the economy, most industries, society, and humanity, range from self-driving vehicles to applications that can read, understand, and produce language, such as chatbots.

This article looks at how human-AI interaction affects economic organization and behavior. Although this is a multi-faceted issue, here we will take a behavioral economics perspective. In other words, we will focus on our current understanding of how individuals make decisions from the point of view of economic theory, which is enriched by disciplines such as psychology, sociology, and cultural studies, among others. Behavioral economics often uses experiments to support or refute hypotheses about people’s beliefs and choices.  

AI and human behavior  

A good way to begin the analysis of human-machine interaction in economic contexts is to look at how people behave in experiments when interacting with artificial agents. There is conclusive evidence that interaction with artificial agents often changes humans' strategic behavior. Specifically, human subjects adapt to artificial agents, even when they are given no information about them. Furthermore, when interacting with artificial agents, they behave both more rationally and, in general, more selfishly. This raises the question of how humans cooperate with AI versus with other humans. There is evidence that humans cooperate less with benevolent AI actors than with human actors, and that such cooperation occurs only if it serves their selfish interests. The lesson to be drawn is that, when humans interact with computers, both their individual behavior and the resulting economic outcomes are often different. This, in turn, raises questions about how the economy and society will evolve when many interactions are between humans and AI, or between one AI and another.

When interacting with artificial agents, humans act more rationally and more selfishly

Several studies show that, when people interact with each other, they have social preferences: individuals do not act only in their own self-interest, as traditional economic models assume, but also hold positive or negative preferences over the benefits that accrue to others. In particular, individuals value reciprocal fairness, meaning that our past actions influence how others treat us. Additionally, a person's identity (i.e., their self-awareness and how others perceive them) influences individual choices, interactions, and economic outcomes. For example, identity and custom have been shown to explain charitable contributions and alumni donations, contributions to public goods, advertising, and school choice, among other things. To the extent that computers and AI influence how we experience the world, it remains an unsettled question how AI will affect our concept of who we are and those traits that are intrinsically human, such as consciousness, emotions, intuition, creativity, conversation, free will, and moral reasoning.
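
One standard way to formalize such social preferences is the inequity-aversion model of Fehr and Schmidt (1999). A sketch of its two-player form, with parameter names following the original paper:

```latex
% Fehr-Schmidt (1999) inequity-aversion utility, two-player form:
% x_i, x_j are material payoffs, with beta_i <= alpha_i and 0 <= beta_i < 1.
\[
U_i(x_i, x_j) = x_i
  - \alpha_i \max\{x_j - x_i,\, 0\}   % disadvantageous inequality ("envy")
  - \beta_i  \max\{x_i - x_j,\, 0\}   % advantageous inequality ("guilt")
\]
```

Setting \(\alpha_i = \beta_i = 0\) recovers the purely self-interested Homo economicus of traditional models.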

Furthermore, as behavioral economists such as Kahneman, Tversky, Thaler, and Shiller have shown, people often make decisions that are not fully rational. People use heuristics to make decisions and, unlike Homo economicus, exhibit biases such as mental accounting, the anchoring effect, framing, herd mentality, present bias, recency bias, confirmation bias, familiarity bias, status quo bias, and attention bias, among many others. Some of these have proven potentially detrimental to people's personal and economic well-being. One might thus think that AI will allow us to eliminate all such biases and heuristics and make Homo economicus a reality.
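
To make one of these biases concrete: present bias is commonly captured by the quasi-hyperbolic (beta-delta) discounting model associated with Laibson (1997). A minimal statement:

```latex
% Quasi-hyperbolic (beta-delta) discounting: utility at time t from a
% consumption stream c_t, c_{t+1}, ... with 0 < beta <= 1 and 0 < delta < 1.
\[
U_t = u(c_t) + \beta \sum_{k=1}^{\infty} \delta^{k}\, u(c_{t+k})
\]
```

With \(\beta = 1\) this is the standard exponential discounting of Homo economicus; with \(\beta < 1\), every future period is discounted by an extra factor, so the present looms disproportionately large, which is why people plan to save or diet "tomorrow" and then postpone again when tomorrow arrives.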

Yet it is worth asking whether these heuristics and biases have been useful on our evolutionary journey toward Homo sapiens. In a very interesting study, Chen, Lakshminarayanan, and Santos studied the purchasing behavior of capuchin monkeys asked to allocate a budget of tokens across a variety of foodstuffs. They found that capuchin monkeys respond rationally to price and wealth shocks, but exhibit biases such as reference dependence and loss aversion. This suggests that some behavioral biases are not uniquely human and are innate, i.e., not learned. In fact, some scholars have demonstrated that some of these seemingly non-rational preferences have a certain evolutionary value. It thus seems dangerous to let AI eliminate most human behavioral biases from decision-making. Going forward, it is worth considering how delegating many intellectual tasks to AI will affect our brains. The American writer Nicholas Carr argues that Internet use is eroding our ability to read and think critically, and that AI can affect our perception of personal fulfillment and happiness. One can imagine a future in which humans have to go to the intellectual gym to exercise their untrained brains.
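
Reference dependence and loss aversion are precisely the two ingredients of the value function in Kahneman and Tversky's prospect theory, which evaluates outcomes as gains or losses relative to a reference point rather than as final wealth:

```latex
% Prospect-theory value function: x is a gain or loss relative to the
% reference point (x = 0). Tversky and Kahneman (1992) estimated
% alpha ~ 0.88 and lambda ~ 2.25 (lambda > 1 is loss aversion).
\[
v(x) =
\begin{cases}
x^{\alpha} & \text{if } x \ge 0 \\
-\lambda\, (-x)^{\alpha} & \text{if } x < 0
\end{cases}
\]
```

With \(\lambda \approx 2.25\), a loss hurts roughly twice as much as an equal gain pleases, the same asymmetry the capuchin experiments detect.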

It seems dangerous to let AI eliminate most behavioral biases from decision-making

Herbert Simon, a pioneer in the multidisciplinary study of AI and the behavioral sciences, was the first to address computers and economics simultaneously, and he received the top award in each field: the Turing Award in computer science in 1975 and the Nobel Prize in economics in 1978. Simon argued that human rationality is bounded, an idea shaped by his study of AI. He believed that analyzing computer simulations was a useful way to study human cognition. This approach can also be applied in the current context: today, AI can contribute to behavioral economics research by searching for new models of the behavioral patterns that shape human choices.

AI as a creator and decision-maker

AI has an enormous influence on our lives because it is already making automated decisions with little human intervention, from production processes and services to the decision to grant a loan or select a job candidate. Additionally, the latest generative AI systems can create text, images, music, videos, and code, thereby expanding the range of tasks AI can do for people to include creative work. As AI continues to develop, it is coming to encompass more aspects of artificial general intelligence (AGI), which would someday be able to perform most of the tasks that humans do. The potential economic benefits of AI are huge: it can save costs; its processes are automated and faster; its decisions are more consistent than those made by humans; and it can compensate for certain human biases and heuristics (e.g., limited attention). Even more importantly, AI can help us tackle some of the challenges facing humanity, such as the climate crisis or finding cures for certain diseases. In short, if AI tools are properly managed, they have the potential to transform our economies and societies for the better.

However, the automated decisions made by AI algorithms often differ from human decisions. This raises the question of whether algorithmic decisions can be fair (which is not to say that human ones always are). Scholars generally distinguish between biased algorithmic objectives and biased algorithmic predictions. Algorithms typically have an objective function (e.g., maximizing profits), but this objective can itself be biased, because it embeds tradeoffs that require some conception of fairness. This opens up a long philosophical debate over whose preferences these algorithms should represent, a debate that matters all the more because a large share of AI algorithms are owned by just a handful of companies, whose private interests may not align with those of society. Algorithmic predictions can also be biased, for example due to under-representation in the data samples used to train the algorithms (e.g., if only the performance data of hired candidates are used, and not data on those who were not hired). Mislabeled training data can introduce further bias: a worker who is discriminated against by his or her boss might be labeled low-performing, prompting the algorithm's predictions to generate even more discrimination. The programmers who write the code can also be biased. A related problem is algorithmic feedback loops, in which predictions causally affect the very outcomes they are meant to forecast. Many biased algorithmic predictions are technical problems that are likely to be solved as AI develops. Biased algorithmic objectives, however, have no technical solution; they require regulation and the consensus of society as a whole. This matters, because humans are delegating important decisions to machines.
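
To make the hired-candidates example concrete, here is a minimal sketch using simulated (entirely hypothetical) data: because performance is observed only for candidates who cleared the hiring bar, the training sample is range-restricted, and the statistical link between skill and performance looks far weaker than it really is.

```python
# Minimal sketch of sample-selection bias with simulated data (NumPy only).
import numpy as np

rng = np.random.default_rng(0)

# True world: job performance is skill plus noise.
n = 100_000
skill = rng.normal(0.0, 1.0, n)
performance = skill + rng.normal(0.0, 1.0, n)

# Historical hiring rule: only candidates above a skill cutoff were hired,
# so only their performance was ever recorded in the training data.
hired = skill > 0.5

corr_full = np.corrcoef(skill, performance)[0, 1]
corr_hired = np.corrcoef(skill[hired], performance[hired])[0, 1]

print(f"skill-performance correlation, full population : {corr_full:.2f}")   # ~0.71
print(f"skill-performance correlation, hired-only data : {corr_hired:.2f}")  # ~0.46
```

An algorithm trained on the hired-only sample would underrate the very signal the firm wants it to learn. This is the kind of prediction bias with a technical remedy (modeling the selection process); a disputed objective function has none.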

AI also poses challenges related to the use of personal data and property rights. Many AI systems process personal data to predict a person's behavior and achievements (profiling). This can create numerous problems for the privacy of our data and lives, as every digital activity is recorded and used. Additionally, generative AI entails two types of risks. The first concerns intellectual property: who owns AI-generated content, and whether it infringes copyright, patent, and trademark laws, remains unresolved. The second is a more basic problem of generative AI models, which apply machine learning to a dataset: they can produce very specific answers capable of earning a good grade on a college-level exam, even as they invent facts and fabricate stories. What if AI is used to generate fake news that benefits one part of society? What if it becomes increasingly difficult for society to distinguish truth from falsehood? What if we do not realize that we are talking to an AI rather than a person? Yuval Noah Harari argues that AI could be used to destroy democracies, because it undermines our ability to talk to each other.

Understanding how AI creates content or makes decisions means understanding how it works. This is important, because much of science is about understanding cause and effect and the process through which effects occur. Yet there are types of AI that not even their programmers understand. Can we crack open the black box of AI? Just as neuroscientists study the brain, computer scientists are now researching ways to understand how AI makes decisions. But if understanding AI decisions or recommendations is challenging for computer scientists, how can we expect society at large to do so? There is strong evidence that humans are ambiguity-averse: we prefer known risks to unknown ones. This makes one wonder whether society will agree to let AI algorithms make important decisions.
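
Ambiguity aversion has a classic demonstration, Ellsberg's (1961) two-urn experiment, worth stating as a quick worked example: urn K holds 50 red and 50 black balls, urn U holds 100 balls in an unknown red/black mix, and most people prefer to bet on K whichever color pays.

```latex
% Ellsberg two-urn paradox: no single subjective probability
% p = P(red | urn U) rationalizes preferring the known urn K for both bets.
\[
\text{bet on red:}\;\; K \succ U \iff \tfrac{1}{2} > p,
\qquad
\text{bet on black:}\;\; K \succ U \iff \tfrac{1}{2} > 1 - p
\]
% Both hold only if p < 1/2 and p > 1/2 at once -- impossible, so the
% pattern reveals aversion to ambiguity itself, not merely to risk.
```

If people already balk at unknown probabilities in a simple urn experiment, delegating high-stakes choices to opaque algorithms may face the same resistance.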

Artificial intelligence: popularity and adoption

In 2022, the Pew Research Center surveyed Americans about their views on AI. The survey asked how the increased use of AI computer programs in daily life made them feel: 37% of American adults said they were more concerned than excited; 45%, equally concerned and excited; and 18%, more excited than concerned. Those who were more concerned than excited cited reasons such as the loss of human jobs; surveillance, hacking, and digital privacy; and the loss of human connection and qualities. Those who were more excited than concerned pointed to improvements in life and society, time savings and increased efficiency, and the inevitability of progress, i.e., that AI is the future. Further analysis reveals that Americans are most concerned by the prospect of AI systems that could know people's thoughts and behaviors and make important life decisions for them. Although the survey reveals no uniform opinion about AI, Americans on balance lean more toward concern than excitement.

These opinions prompt the question of how AI is currently being used in organizations. A 2022 global survey on AI by McKinsey found that 50% of the organizations in its diverse sample had already adopted AI. The most commonly used AI capabilities were robotic process automation, computer vision, natural-language text understanding, and virtual agents or conversational interfaces. The survey also showed that, among organizations using AI, the top use cases were the optimization of service operations, the creation of new AI-based products, AI-based product enhancement, customer-service analytics, and customer segmentation. Many of these functions typically fall within the remit of companies' operations and marketing departments. However, the report points to a striking conclusion: companies have not improved their strategies for mitigating the risks of AI, for example in relation to cybersecurity, regulatory compliance, personal privacy, the explainability of AI models, organizational reputation, equity and fairness, workforce displacement, and physical safety. This casts doubt on whether companies have private incentives to reduce the risks posed by AI, and the divergence between private and social incentives is precisely what calls for regulation. The regulatory response of governments and international bodies will be essential to balance the potentially extraordinary benefits of AI against its significant risks.

The future  

Many speculate about the future of AI and its impact on society, the economy, and humanity. This article has focused on AI's impact on the human brain, decision-making, and our interactions with others. In conclusion, if properly used and regulated, AI can help humanity thrive by helping to solve the world's complex problems. But we must not stop exercising our brains: we must continue to read, write, and count, to think critically and creatively, and to communicate and foster understanding. Only in this way will we cultivate our intrinsic human nature.

All written content is licensed under a Creative Commons Attribution 4.0 International license.