What can ethics and spirituality contribute to the development of AI?

Technological deployment requires us to get back to the basics of ethics and humanistic reflection. In turn, the uncertainties around AI's future pose questions about the value and meaning of human life.

Alberto Núñez

“Artificial intelligence (AI) is bringing about profound changes in the lives of human beings, and will continue to do so.” This is the opening sentence of the document entitled Rome Call for AI Ethics, drafted by the Vatican and the RenAIssance Foundation and signed to date by 150 participants, including representatives of the world's 11 main religions, 13 countries, the FAO, numerous organisations and some leading technology firms such as Microsoft, Cisco and IBM. 

Thanks to the widespread coverage it is receiving, the Rome Call constitutes a reference document on the relationship between AI and ethics in a world characterised by multiple models and regulatory initiatives of all sorts that seek, depending on the country and the context, to promote, channel or curb the development of this technology. One author has described this morass of initiatives as a “vast sea of documentation on AI that overwhelms us with the force of a tsunami.” 

In July we witnessed two especially relevant initiatives. The first was the invitation extended by the G7 to Pope Francis to talk about AI. The second was the meeting held by the world's major religions in Hiroshima (Japan), with the presentation of a document on this technology drafted by the organisers and representatives of the three Abrahamic religions: the Pontifical Academy for Life, the Abu Dhabi Forum for Peace, and the Commission for Interreligious Relations of the Chief Rabbinate of Israel. This document has been called the Hiroshima Appeal, thereby linking the power of this technology to all the symbolism of the site. The text reiterates the need to use AI solely for the good of humanity and the planet, and urges the international community to employ peaceful means to solve any conflict, calling for an immediate ceasefire in all armed conflicts. 

Without a viable and reliable legal and ethical framework in place, companies can be caught between inconsistent or contradictory regulations

The feeling is, then, that we live at a momentous time. Almost daily we hear some new announcement about the huge possibilities of this technology. In parallel, awareness of its risks increases. From a business point of view, no sector of economic activity is left untouched by this technological wave. This, of course, is a great business opportunity for some. AI takes us one step further along the road of optimization, or maximization of efficiency, that characterises our global economy. But there are also three main risks. The first is reputational, in view of the consequences of certain scandals or design errors. The second is operational, since “if industry leaders do not quickly adopt an actionable and reliable legal and ethical framework with demonstrable effectiveness, they will quickly find themselves trapped in a quagmire of inconsistent and even contradictory laws and regulations,” to quote the handbook published by Santa Clara University (California, USA), Ethics in the Age of Disruptive Technologies (p. 16). The third is strategic, since, depending on where they are located, some companies face few or no restrictions and can develop this technology without hindrance. 

The extent of the challenges

It is not only an economic or business problem. The changes and disruption this technology is causing have consequences for crime, war, climate change, immigration, social relations within each country and, ultimately, the stability of democracy. Not all these challenges are exclusively a consequence of technological change, but it undoubtedly affects and accentuates them in a special way. 

Yet the challenges are so numerous and so great that it is not easy to know where to begin. The shock to our traditional ways of thinking, which these changes overwhelm, leads us to seek new benchmarks from which to build a framework of standards that is truly universal. Hence our gaze turns back to ethics. The handbook quoted above states: 

  • Ethics is the bedrock upon which people build everything else. Good ethical relationships create trust, and trust is what every social institution relies upon. Without it, relationships fall apart, and if enough social relationships fall apart, one is no longer living in a society, but anarchy. (p. 11) 

The problem is how and where to find that bedrock at the very moment when our societies are fostering pluralism and diversity, or when such diversity is increasing for other reasons. In other words, the trouble is that there is no undisputed foundation on which to build. 

Back to moral basics

The traditional definition of ethics, which dates back to Aristotle, describes this branch of philosophy as “the science that studies character and human actions for the purpose of doing good.” Ethics therefore has three elements: a) the subject who performs the action; b) the action that he or she performs; and c) the outcome of the action, in the understanding that human action is not neutral. Depending on which element has been considered most important, we find the main trends in ethics that have marked the history of the Western world: virtue ethics, deontology and consequentialism. The following table provides a brief description of them: 

| Emphasis | Umbrella theory | Moral question | Ethical theories | Source |
| --- | --- | --- | --- | --- |
| The person | Virtue ethics | How to be a good person | Aristotelian ethics | Midpoint between extremes; the purpose of the human being |
| The action | Deontology | Which actions are right in themselves, or with what intention they have been performed | Ethics of duty | Universalism; the human being as an end |
| | | | Human rights | Human rights declarations |
| | | | Religious ethics | Divine commandments |
| The consequence of the action | Consequentialism | Acts are good depending on their consequences | Utilitarianism | Weighing up pain vs pleasure for the largest number of people |

This framework, valid for ethics in general, is significantly disrupted by the emergence of AI. At first glance, AI strengthens approaches of a consequentialist or utilitarian sort: the justification for this technology lies in the extraordinary results it yields, thanks to the huge amount of data it handles and its computing speed, although there is an open discussion about its effects in some fields of activity. 

Some authors highlight the ethical risks of this technology associated with biases or design errors in its algorithms. Such drawbacks should, however, be remediable as they are identified. In our opinion, the greatest problem lies precisely in its success: a system with capabilities significantly greater than those of human beings can disrupt our understanding of what we are and of our place in society.

AI and its effects on ethics

In comparison with the general framework of ethics, AI introduces three new elements. The first concerns the action. This technology does not act as an extension of human action (as when a person drives a car, for example), but rather is a system capable of acting of its own accord, adapting its behaviour to the context by analysing the effects of its previous actions and working autonomously (Pegoraro and Curzel, 2023, p. 316). The person performing the action either disappears or becomes an appendage to a decision made by the machine. 

Secondly, it redoubles the importance of business organisations as moral subjects or agents. The relevant role that companies play in our economic and social reality is well known. Nevertheless, the arrival of AI assigns a quantitatively and qualitatively far more important role to these organisations. Little by little, the technology is becoming omnipresent, and it is difficult to find a human activity in which it is not or could not be involved. Technology companies extend their action and influence all over the world, even contesting power with some states. As a result, any reflection on this technology should take place not at the level of a particular sector or activity, but at a more global one. This can be seen in the number of initiatives that seek to guarantee that the technology is “at the service of humanity,” “for good,” “human-centred,” “ethical by design” and “open.” 

In this regard, one very opportune initiative that we have already mentioned is the agreement between the Vatican and Santa Clara University, the Jesuit university located in Silicon Valley, to produce a handbook for the use of this technology by business organisations. The handbook, entitled Ethics in the Age of Disruptive Technologies: An Operational Roadmap or ITEC Handbook, was published in 2023 and has as its primary goal “to help companies developing, procuring, or leveraging advanced technologies understand the ethical risks that such technologies introduce, and help them implement the infrastructure necessary to mitigate those risks throughout the product and service life cycle.” 

Hybridization between man and technology may lead to the emergence of a new human subject

The document proposes a detailed and complete ethical framework designed to work on two levels: life cycle management of AI-based products and services available on the market, and corporate mindset and culture. The framework has a threefold structure. It is built on one anchoring principle: the technology must serve the common good of humanity and the environment. And it hinges on seven guiding principles, the outcome of a major process of dialogue between different cultures and religions, including people who do not belong to any religious confession. 

These seven principles are as follows: 

  1. Respect for human dignity and rights 
  2. Promote human well-being 
  3. Invest in humanity 
  4. Promote justice, access, diversity and inclusion 
  5. Recognise that Earth is for all life 
  6. Maintain accountability 
  7. Promote transparency and explainability 

Each of these is in turn developed into more detailed sub-principles. 

The third element is the emergence of a new ethical subject, which affects the value and meaning of human life. Our philosophical tradition links the dignity of the human being to autonomy and reason. But AI will have these characteristics in a future that may not be far away. Elaborating on this line of thought, some authors, such as Nick Bostrom, have proposed granting a status no lower than that of humans to applications of this technology that develop self-awareness or something akin to it. Ultimately, the hybridization between man and technology proposed by some may lead to the emergence of a new human subject. Ray Kurzweil's book The Singularity Is Near is a good example. 

On a positive note, perhaps AI might help us realise that what makes us human is much more complex, rich and ambiguous than a mere series of rational capabilities. In The Essentials of Theory U, MIT Senior Lecturer Otto Scharmer explains that we have three constituent dimensions: our relationship with ourselves, our social relationships and our relationship with nature (pp. 16-18). An analysis of these dimensions helps us understand many of the social and psychological problems that characterise our times. It is significant that one of the effects of the new technology is precisely the fragmentation of relationships (Pegoraro and Curzel, 2023, p. 317). These three dimensions lead to a spiritual understanding of the human being, which is characterised precisely by the search for a balance between corporeity and the desire for transcendence.

Training in a humanistic AI

At Esade we have the advantage that our training framework incorporates a holistic vision of the human being, together with a marked sensitivity towards our world and the injustices or inequalities that exist in it. Through our tradition and our participation in the Jesuit network of universities we can also enjoy special access to some of the institutions or movements that we have mentioned in this article. In any event, the development of AI forms part of a challenge that deeply affects teaching and our future in this sector. The commitment to reinforce ethics and our 4Cs pedagogical model are positive steps to, at least, start the reflection from a shared ground.

All written content is licensed under a Creative Commons Attribution 4.0 International license.