It was only ten years ago that we started to talk about how robots were appearing in our lives. Today, their presence is commonplace.
These artificial intelligence (AI) systems, commonly referred to as robots, take many forms – drones, bots, virtual assistants and so on. They have popped up in our domestic lives, our health systems, the public sector and even in electoral campaigns.
The debate on the legal responsibilities that the use of artificial intelligence entails is an ongoing, worldwide one. Regulations should be clear and concise on this issue. In the United States, this debate has been fuelled by the use of bots and their role in spawning fake news and misleading the public.
In February 2018, Senator Robert Hertzberg, a Van Nuys Democrat in California, obtained support from Common Sense Kids Action to approve the Bolstering Online Transparency bill – also known as the BOT bill. The bill became law in July 2019.
Common Sense Media is an organisation that seeks to protect children from fake information published on social media. In an announcement, the organisation stated that there are now over 100 million bot accounts on Facebook and Twitter and that “these tools are wreaking havoc on these platforms.”
The Pew Research Center also warned that on Twitter, two-thirds (66%) of all the headlines tweeted were shared by presumed robots.
These figures strengthen Hertzberg’s argument that “robot armies” are out there right now manipulating real people. His efforts have been reflected in the State of California’s BOT Act. This brief Act (running to only a page and a half) aims to ensure that bots are identified as such, so that people who interact with them know what they are dealing with. It states that when bots are used to lie to or otherwise deceive people, they must be eliminated within 72 hours if they fail to identify themselves as bots.
The approval of the BOT bill in California sparked a big debate. One of the most critical reactions came from the Electronic Frontier Foundation, a non-profit organisation based in San Francisco that works to protect civil rights in the digital era. It argued that:
- Bots have multiple functions ranging from composing poetry to writing political speeches.
- Eliminating bots could undermine a person’s project whose feasibility depends on the bot’s anonymity.
- Grouping all bots into a single category and silencing them could violate The First Amendment covering Freedom of Speech.
This reaction raises another question: Does the First Amendment apply to robots? The amendment forbids the government from making laws that limit freedom of speech, but it makes no reference to people or to human language (speech).
Does this mean that by silencing bots, the BOT Act limits the right to free speech? If this were true, it would be tantamount to saying that the First Amendment covers both the human and the non-human right to free speech. Those favouring this argument warn that the BOT Act may be unconstitutional.
Adopted on the 15th of December 1791, the First Amendment to the United States Constitution represents the first ‘constitutionalisation,’ in a modern sense, of freedom of speech. It is a symbol of the accountability of public power – far from prohibiting regulation, it protects citizens from unjustified censorship and misinformation.
But whether forcing bots to identify themselves as such constitutes an attack on free speech matters less than whether we should admit that they have a right to freedom of speech, like humans. In this scenario, freedom of expression would instead rest on the physical individuals who always exist behind a bot.
Adopting legal measures to identify and eliminate bots to fight lies and half-truths targeting millions of people could be the first step in stopping AI from causing harm and breaking the law.
The United States’ First Amendment establishes clear limits, and the role of legislators is to give these limits concrete form. Is the BOT Act performing this function? Whether the law can be improved regarding liability is, of course, another matter: as things stand right now, it still excludes digital platforms.