

Regulation of artificial intelligence and ethical limits

An opinion expressed by the leader of a "real" power can sometimes make a quicker and deeper impact than any official declaration.
Pichai Sundararajan, better known as Sundar Pichai, CEO of Google and Alphabet, recently highlighted, in statements made to the Financial Times, the need to regulate artificial intelligence (AI) so that ethical limits are respected.
He emphasised that, given the many and diverse benefits that AI-powered technological advancement offers, its appropriate use in the best interests of society must prevail.
Pichai says the time has come to draw the ethical limits so that progress is not arbitrary, disorderly or harmful to the rights of individuals.
Technologies such as facial recognition must be regulated to protect individual rights
On the same day that Pichai spoke about the need for AI regulation, the White House published a draft memorandum for the heads of executive departments and agencies on artificial intelligence, as part of an agenda established in February 2019 when it issued Maintaining American Leadership in Artificial Intelligence (Executive Order 13859).
The ten principles for the "stewardship of AI applications" announced in the memorandum are:
- Public trust in AI
- Public participation
- Scientific integrity and information quality
- Risk assessment and management
- Benefits and costs
- Flexibility
- Fairness and non-discrimination
- Disclosure and transparency
- Safety and security
- Interagency coordination
This declaration by the White House is intended to strengthen public and private confidence in the development of AI applications. It reveals two clear priorities: innovation must not be slowed by a lack of regulatory clarity, and international cooperation is essential.
In his statement to the Financial Times, Pichai stresses the need for international agreement on the "core values." Moreover, he describes the European General Data Protection Regulation as a good regulatory foundation that also makes clear what is unacceptable.
The European Commission will propose new AI regulations in high-risk sectors
However, regulation may yet become an insurmountable obstacle to such an agreement, because the "innovation culture" differs between America and Europe.
Bloomberg has just revealed that the European Commission is drafting a white paper on artificial intelligence that will propose new rules and legal requirements in high-risk sectors such as healthcare and transport (publication is expected in February).
The paper may propose a ban on the use of facial recognition by the state, as well as rules on its use in public spaces, until fears regarding privacy can be allayed.
The document is part of a broad EU initiative to compete with America and China on AI, while always incorporating European values (rooted in ethics and user privacy).
A new regulation on artificial intelligence is one of the priorities announced by Ursula von der Leyen, president of the European Commission.
The White House memorandum refers to Executive Order 13609 on Promoting International Regulatory Cooperation. This order states: "… agencies are required to consider, to the extent feasible, appropriate and consistent with law, any regulatory approaches by a foreign government that the United States has agreed to consider under a Regulatory Cooperation Council work plan."
As societies and cultures deal with innovation in different ways, the key issue for enabling international cooperation will be agreeing on the privacy values on which AI regulation is to be based.
