Artificial Intelligence: Technological revolution or existential threat?

The crucial challenge of AI is not technical, but human. Its deployment will depend on our individual and collective capacity to design prosperous futures.

Núria Agell
Queralt Prat-i-Pubill

This article is part of Esade Economic and Financial Report #33


In recent years, three news reports have caused concern in the artificial intelligence (AI) community. First, in August 2022, a survey of 738 AI experts found that 50% of them believed there is a 10% or greater chance that humans will go extinct due to our inability to control AI. Second, in March 2023, the Future of Life Institute, an organization focused on existential risks, published an open letter signed by experts from around the world calling for a six-month pause on the development of advanced AI models. Finally, in May 2023, the Center for AI Safety issued a statement signed by executives of some of the leading AI companies, including OpenAI, DeepMind, and Anthropic, as well as by Turing Award winners. Their message was clear: “Mitigating the risk of extinction from AI should be a global priority, alongside other societal-scale risks such as pandemics and nuclear war.”

Are the fears emerging in some sectors of society justified? Can AI change how we think about the world today? These are some of the questions we will try to answer in this article. 

Throughout history, many of humanity’s technological advances have generated confusion and fear during the period of adaptation to change. The question is always the same: will we be better off, or does the new technology entail substantial risks for humanity? Today, it would be impossible to live without artificial light or cars. Yet the technological progress these disruptive advances ushered in was equally controversial in the 18th and 19th centuries.

The discourse on the risks of AI helps inflate the perceived power of intelligent systems

When we think about AI, we recognize that we are dealing with a new reality that is transforming how we live. We understand that it is already yielding, and can continue to yield, an array of benefits for humanity’s future. AI is used in a wide variety of applications, from medical surgery and financial fraud detection to logistics planning, recommendation systems, virtual assistants, and judicial decisions. We live surrounded by intelligent systems that we can ask to adjust the thermostat, draw up a shopping list, tell us when the car is charged, and help us find our way around an unfamiliar city. AI is also driving innovation in scientific research and discovery, creating new business and job opportunities.

We sense that AI can lead to unimaginable breakthroughs. We have good theories, and, with improvements in processors and increased data storage capacity, AI will be able to provide solutions in situations almost no one expects. But while the development of AI brings improvements and efficiency gains in many areas, it also sparks fear, as it is not without potential negative consequences, such as algorithmic bias; security, privacy, and transparency issues; and gaps in accountability for decisions.

Paradoxically, the current discourse on the risks of AI, particularly concerning humanity’s possible extinction, helps inflate the perceived power of intelligent systems. This heightened perception of the power of AI (it could wipe us out!) increases the valuations and funding opportunities of AI-related companies.

As with any other scientific breakthrough, the consequences of AI will impact us in different ways depending on its design and implementation. But before we turn to the potential risks and benefits, let’s take a moment to define what AI is.  

What is AI?  

Broadly speaking, AI is the ability of a machine to perform tasks that would normally require human intelligence, such as learning, reasoning, pattern recognition, decision-making, and problem-solving. This basic definition of AI is a pragmatic one. In the field of computer science, however, the term “artificial intelligence” usually refers to systems that perceive their environment and are capable of providing solutions to modify it, i.e., that can make decisions to maximize the chances of achieving a goal.  
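
To make that computer-science definition concrete, the sketch below shows a toy “perceive, decide, act” loop: an agent observes a one-dimensional environment and picks the action most likely to reach its goal. Everything here (the state, the goal, the actions) is an illustrative assumption, not any real AI system.

    # A toy perceive -> decide -> act loop illustrating the definition above:
    # a system that observes its environment and chooses actions to maximize
    # the chance of achieving a goal. Purely illustrative.

    GOAL = 10  # the state the agent is trying to reach

    def perceive(state: int) -> int:
        """Observe the environment (here, trivially, the state itself)."""
        return state

    def choose_action(observation: int) -> int:
        """Pick the action (+1 or -1) that moves the state closer to the goal."""
        return 1 if observation < GOAL else -1

    def run_agent(state: int = 0, max_steps: int = 50) -> int:
        """Repeatedly perceive and act until the goal is reached."""
        for _ in range(max_steps):
            if state == GOAL:
                break
            state += choose_action(perceive(state))
        return state

    print(run_agent())  # prints 10: the agent has reached its goal

Real AI systems replace the hand-coded rule in choose_action with learned models trained on data, but the loop structure is the same.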

AI systems do not need to achieve superintelligence to pose risks to society

AI is commonly classified into two types: artificial narrow intelligence (ANI or narrow AI) and artificial general intelligence (AGI or general AI). Narrow AI is task-specific, while general AI would perform autonomous and complex tasks without human supervision. Examples of narrow AI include chatbots (such as Siri), which are designed to understand and respond to voice or written commands; self-driving cars, which are designed to navigate roads and traffic; and ChatGPT, which can generate text from prompts. Narrow AI is the most common type and the only one that presently exists. In contrast, general AI would be able to perform any human intellectual task, including understanding and answering highly complex questions, adapting to new situations, and being aware of its own actions. For now, however, it remains the stuff of science fiction.

Governance and education: key aspects

Geoffrey Hinton, an influential researcher in the field of AI development and winner of the prestigious Turing Award, has emerged as a vocal critic of the accelerating development of AI. In Hinton’s view, our societies are not prepared to face the dangers arising from this rapid progress. Indeed, contrary to popular belief, AI systems do not need to achieve superintelligence or possess general AI to pose risks to society. The current capabilities of narrow AI systems, coupled with insufficient regulation and governance, are already exposing us to substantial dangers and risks. Hinton thus underscores the urgent need for adequate governance to effectively mitigate these dangers.  

Yet it is crucial to recognize that these risks and dangers of AI stem not only from malicious actors, but also from our limited understanding of these systems and their operational limitations. To achieve a hopeful future with an AI capable of innovating and helping to solve society’s problems, we must minimize both types of risk: risks arising from harmful and ethically dubious behaviors made possible by insufficient governance, and risks arising from the limited understanding, or even outright incompetence, of those who use AI systems.

One of the challenges for AI governance is that regulation lags behind known externalities

Managing the first type of risk, i.e., risks arising from controversial, harmful, or damaging practices, requires adequate regulation. However, achieving effective governance in this field is complicated, as it requires agile management and the participation of experts from a variety of disciplines, as well as interaction with civil society and companies. The forthcoming European Artificial Intelligence Act will define different risk levels for each AI application, which will serve as a basis for determining the type of regulation and governance to which AI advances are subject.
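
As a hedged illustration of the tiered logic behind such regulation, the sketch below encodes the four risk levels commonly described in discussions of the Act: unacceptable, high, limited, and minimal, each carrying different obligations. The example systems and their assignments are illustrative assumptions, not legal classifications; the definitive categories and obligations are those of the Act’s final text.

    # Hedged sketch of risk-tier classification in the spirit of the EU AI Act.
    # Tiers follow the commonly described four-level scheme; example systems
    # and their assignments are assumptions for illustration only.

    RISK_TIERS = {
        "unacceptable": "prohibited outright",
        "high": "strict obligations: conformity assessment, human oversight",
        "limited": "transparency obligations (e.g., disclose the use of AI)",
        "minimal": "no specific obligations",
    }

    EXAMPLE_SYSTEMS = {  # hypothetical mapping, for illustration only
        "social-scoring system": "unacceptable",
        "CV-screening tool": "high",
        "customer-service chatbot": "limited",
        "spam filter": "minimal",
    }

    for system, tier in EXAMPLE_SYSTEMS.items():
        print(f"{system}: {tier} risk -> {RISK_TIERS[tier]}")

In practice, of course, the obligations attached to each tier are set by the legal text and its enforcement, not by a lookup table.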

However, understanding the actions and behaviors of AI systems is complex not only because of the technology involved, but also because companies and organizational ecosystems hold essential information that is not publicly shared. This prevents citizens and governments from reaching a clear understanding of these systems’ scope. In view of these challenges, in 2019 Japan put forward an innovative proposal known as “Society 5.0” at a meeting of the G20. The initiative recognizes the need for a multi-stakeholder approach to governance and promotes a new type of democracy that seeks to address the challenges and harness the benefits of AI equitably and transparently.

The challenges we face in governing AI are in some ways similar to those seen in industries such as oil and gas or tobacco, where regulation often lags behind known externalities. In 2018, the eminent researchers Ramon López de Mántaras and Luc Steels led a group of leading European scientists in establishing the Barcelona for AI manifesto, a pioneering initiative that remains relevant today thanks to the lucidity and concreteness of its proposals, which merit greater attention and consideration.

The second type of risk, i.e., risks associated with the misuse of AI due to insufficient knowledge, can be mitigated with robust training targeted at users of AI-based tools and systems, as well as at anyone impacted by these systems, which is, ultimately, the population at large. It is vital to understand that the success, proliferation, and effectiveness of AI applications depend on such collective involvement. Guaranteeing proper training will therefore make it possible to minimize the risks and maximize the implementation and benefits of AI, ensuring that people understand how it works and use the technology responsibly and ethically. AI projects often suffer setbacks and fail because of the difficulty of implementing changes in existing processes and systems. It is thus essential to educate people about the capabilities and limitations of AI and about the need for constant monitoring and improvement.

Our future in the age of AI

Finally, much has been said about the destruction of jobs, an area where many unknowns remain. AI can help workers by broadening their skills and reducing repetitive and dangerous work, and it will also create new types of jobs. Perspectives on work and its centrality in life vary across cultures and individuals. The constant acceleration of science and technology may open up new possibilities for living a full life without being so tied to the current notion of work.

Where should AI go from here? What scientists are proposing is a people-centered AI that enhances our capabilities. In other words, we need to move toward an AI that does not replace people but rather improves what we can do and places us squarely at the center of technological evolution. An open AI, grounded in clear legislation that provides for techniques to encrypt personal data and work with it securely. An AI that helps humanity innovate and develop clearer strategies and solutions to the challenges we face.

Technological acceleration may make it possible to live a full life without being so tied to the current notion of work

In short, although many AI systems today have functionality issues that hinder their effectiveness and adoption in society, the crucial challenge posed by AI is not technical, but human. It depends on people’s human qualities and on our individual and collective capacity to design prosperous futures. We must encourage debate and urge all citizens and leaders to advance toward a project of collective value that benefits society as a whole, curbing exploitative behaviors that, given the characteristics of AI, could prove fatal for humanity. AI can be an ally in this challenge, driving new forms of innovation, efficiency, and justice in society. We must remember that how AI systems impact our world is a human choice: it is not a technical issue, nor should it be left to the discretion of tech companies. There is nothing inevitable about our current situation.

All written content is licensed under a Creative Commons Attribution 4.0 International license.