Deepfakes as a new challenge for lawmakers: can I believe what I see and hear?
Following the controversy over the disinformation spread by fake news and the spate of measures to combat it, a new conversation is emerging about whether artificial intelligence (AI) is in fact being put to work in the service of disinformation.
The hullabaloo stems from the appearance on digital platforms of what are known as deepfakes: algorithms created and trained to manipulate human faces and voices.
Deepfake is a portmanteau of deep learning and fake. Deepfakes use generative adversarial networks (GANs), in which two machine learning (ML) models go head to head. One model, the generative network, is trained on a dataset and then produces fakes, while the other, the discriminative network, tries to spot the fakes thus created.
The generative network keeps pumping out fakes until the discriminative network can no longer identify them. The larger the training dataset, the easier it is for the generative network to put together a credible, seemingly real fake.
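To make the mechanics concrete, here is a minimal sketch of that adversarial training loop, written in Python with PyTorch. Everything in it (the network sizes, the data shape, the hyperparameters) is an illustrative assumption, not the architecture of any real deepfake system:

```python
# A minimal GAN training loop sketch (PyTorch). Illustrative only:
# networks, data shape and hyperparameters are placeholder assumptions.
import torch
import torch.nn as nn

latent_dim, data_dim = 64, 784  # e.g. flattened 28x28 images (assumed)

# Generative network: turns random noise into a candidate fake.
generator = nn.Sequential(
    nn.Linear(latent_dim, 256), nn.ReLU(),
    nn.Linear(256, data_dim), nn.Tanh(),
)

# Discriminative network: scores how likely a sample is to be real.
discriminator = nn.Sequential(
    nn.Linear(data_dim, 256), nn.LeakyReLU(0.2),
    nn.Linear(256, 1), nn.Sigmoid(),
)

loss_fn = nn.BCELoss()
opt_g = torch.optim.Adam(generator.parameters(), lr=2e-4)
opt_d = torch.optim.Adam(discriminator.parameters(), lr=2e-4)

def training_step(real_batch: torch.Tensor) -> None:
    n = real_batch.size(0)
    real_labels, fake_labels = torch.ones(n, 1), torch.zeros(n, 1)

    # 1. Train the discriminator to tell real samples from fakes.
    fakes = generator(torch.randn(n, latent_dim)).detach()
    d_loss = (loss_fn(discriminator(real_batch), real_labels)
              + loss_fn(discriminator(fakes), fake_labels))
    opt_d.zero_grad(); d_loss.backward(); opt_d.step()

    # 2. Train the generator to produce fakes the discriminator
    #    mistakes for real: the zero-sum game in code.
    g_loss = loss_fn(discriminator(generator(torch.randn(n, latent_dim))),
                     real_labels)
    opt_g.zero_grad(); g_loss.backward(); opt_g.step()
```

In a real deepfake system the generator is vastly larger and is trained on images or audio of the target person, but the loop is the same: the generator keeps improving until the discriminator can no longer tell its output from the genuine article.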
Deepfakes are algorithms created and trained to manipulate human faces and voices
This idea of pitting two adversarial networks against each other in a constant zero-sum game came from Ian Goodfellow, then a 29-year-old University of Montreal student, who later worked for Google Brain and was named in MIT Technology Review's "35 Innovators Under 35" in 2017.
His idea built on previous research, including the papers published by Jürgen Schmidhuber in the 1990s on predictability minimisation and artificial curiosity, along with the Turing Learning concept propounded by Li, Gauci and Groß in 2013.
Goodfellow himself said that one of the reasons he wanted to work on GANs is that these models have the potential to generate objects we can use in the real world. He saw particular promise in this branch of AI, pointing out: "in the future I think GANs will be used in various areas such as drug design."
Deepfakes have been used to impersonate personalities, mainly politicians and public figures
Over the last two years, however, the use of deepfakes to impersonate faces and voices has shown that one section of society has taken GANs down a path far removed from the idyllic purposes for which they were invented. Indeed, the technique has been used to impersonate personalities, mainly politicians and public figures.
The international non-profit human rights organisation, WITNESS, places deepfakes in the conceptual framework of information disorder.
Its analysis draws a distinction between three categories:
- Misinformation: false information shared without malicious intent, the result of an error or mistake.
- Malinformation: truthful but private information disclosed with the explicit purpose of causing harm.
- Disinformation: false information created and spread with malicious intent.
We are venturing into a world where we cannot be sure whether the person we are talking to is who we think, or someone else disguised as them: with deepfakes it is perfectly possible to replace one person's face (and voice) wholesale with another's.
In terms of legal certainty, numerous regulations across various areas of law already provide penalties for such impersonation. The problem is now compounded by the fact that advances in AI are coming so fast that experts are beginning to wonder whether it will remain possible to spot the fakes at all. If that level of sophistication were reached, it would be a quandary of the utmost seriousness.
Advances in AI are taking place at such a rate that experts are beginning to wonder whether it will be possible to spot the fakes
At first, deepfakes were entertainment: celebrities' faces swapped into videos and politicians made the butt of jokes in manipulated images.
Thus, two years ago the University of Washington presented a pilot project called Synthesizing Obama: an algorithm trained to generate video of former US president Barack Obama, synchronising his lip and facial movements with an audio track so that he appears to deliver the same statement in other contexts. There have been many other cases since (Nancy Pelosi, Donald Trump and Mark Zuckerberg among them).
However, the use of deepfakes can have profound consequences, and lawmakers need to begin building a regulatory framework to prevent or penalise their misuse.
Just imagine that evidence in a lawsuit is undermined by deepfakes; that deepfakes are used to defraud facial recognition payment systems; that a marriage is destroyed by a deepfake video compromising the impersonated partner; or that an employee is fired over statements made against their employer in a deepfake video.
Evidence in a lawsuit could be undermined by deepfakes
It is evident that the issue has to be controlled, and not only by building AI to spot deepfakes (as Facebook, Microsoft and Amazon have begun to do with the pivotal Deepfake Detection Challenge, which involves investment upwards of $10 million) but also by constructing a solid regulatory framework to curb the misuse of GAN technology.
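On the detection side, a common starting point, and only a sketch here rather than the method used by any challenge entrant, is to fine-tune a pretrained image classifier to label individual video frames as real or fake. The backbone, input size and labels below are all illustrative assumptions:

```python
# Sketch of a binary real-vs-fake frame classifier. Generic
# illustration only; backbone choice and data are assumptions.
import torch
import torch.nn as nn
from torchvision import models

# Reuse a pretrained image backbone and replace its classification
# head with a single real/fake logit.
detector = models.resnet18(pretrained=True)
detector.fc = nn.Linear(detector.fc.in_features, 1)

loss_fn = nn.BCEWithLogitsLoss()
optimizer = torch.optim.Adam(detector.parameters(), lr=1e-4)

def train_step(frames: torch.Tensor, is_fake: torch.Tensor) -> float:
    """frames: [B, 3, 224, 224] video frames; is_fake: [B, 1] labels."""
    loss = loss_fn(detector(frames), is_fake)
    optimizer.zero_grad(); loss.backward(); optimizer.step()
    return loss.item()
```

Detectors of this kind chase a moving target: each generation of GANs learns to erase the very artefacts the previous detectors relied on, which is precisely why experts wonder how long spotting the fakes will remain feasible.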
New York Democrat Congresswoman Yvette Clarke, a senior member of the Subcommittee on Emerging Threats, Cybersecurity, and Science and Technology, recently introduced a federal bill called the Deepfakes Accountability Act. The bill would require social media businesses to deploy better detection tools on their platforms and would make it possible to punish anyone who posts malicious deepfakes.
This is not the first time US lawmakers have sought to act on the issue. In December 2018, Nebraska Republican Senator Ben Sasse introduced another bill aimed at banning malicious deepfake videos, while Florida Republican Senator Marco Rubio has warned for years of the harmful effects of misused technology, going so far as to call deepfakes (with perhaps a hint of exaggeration) "the modern equivalent of nuclear weapons."
Deepfakes are AI and need to be tackled with AI that can spot and stop them
Meanwhile in China, the other major AI power, the Standing Committee of the National People's Congress is debating a reform of the Civil Code to better protect personality rights, including the right to one's own image, from the menace of deepfakes.
China is also at the forefront of facial recognition payments, and in view of the threat of deepfakes and their grave ramifications, perhaps even graver than those of a fraudulent fintech lending platform, Chinese companies are working hard on technology to identify deepfakes. In fact, Alipay, the payment system administered by Ant Financial, an affiliate of the ecommerce giant Alibaba, requires a three-dimensional facial scan for facial recognition-based payment, a depth requirement that a flat, manipulated video cannot satisfy.
It is obvious that deepfakes are AI and need to be tackled with AI that can spot and stop them. Yet it also seems clear that a legal framework is required to provide public and private security. Successfully combating this misuse of technology will, in all likelihood, take a concerted effort and a consensus on global agreements and regulatory frameworks.
Further reading: Dodging deception & seeking truth online [survey results]