The Digital Services Act, a chance to lead the way again

Tirso Virgós

When outlining her proposals for the current European Commission, Ursula von der Leyen spoke about "A Europe fit for the digital age." Amongst other regulations, she promised a new Digital Services Act (DSA) "to upgrade our liability and safety rules for digital platforms, services and products, and complete our digital market." This year saw the first steps to fulfil that promise, as the Commission announced its plan to put forward a DSA package in the final months of 2020, after launching a public consultation.

The intentions of this legislative package are noble. It aims to update the e-Commerce Directive of 2000, as well as to protect citizens from the growing range of "illegal goods, activities, and content."

The internet is a ubiquitous reality, far more so than 20 years ago. We only have to look at the growing prevalence of social media and online platforms, both for discussion and for business. This new regulation therefore aims to tackle two trends. Firstly, the growing power of a series of businesses that, given their market share, act as gatekeepers in their sectors. Secondly, the online spread of illegal and "harmful content," with calls for regulating hate speech and taking down certain posts or videos.

Recently, the murder of the French middle-school teacher Samuel Paty, after a Facebook video mobilised thousands of Muslim parents against him and his classes, has sparked renewed public interest in this matter.


To combat the first of these trends, the proposal is rumoured to focus on preventing anti-competitive practices, such as pre-installing apps on devices, self-preferencing the gatekeeper's own apps, and using data collected on their platforms for their own commercial activities. Alongside this, there would be requirements for more transparency and for making data available for research purposes. This would level the playing field, allowing SMEs to break into different markets and counteracting monopolistic practices.

However, the focus of this article is on the second of these trends, which has not received the same level of attention as the first in the Commission’s communications regarding the DSA.

A recent article in The Economist presented worrying figures: Facebook removes around 17 million fake accounts each day, YouTube took down more than 11 million videos in the second quarter of 2020, TikTok has removed over 100 million video clips from its platform, and Twitter erased almost 3 million tweets in the second half of the year. All of these numbers are higher than in previous years.

But we must tread lightly when it comes to controlling content on the internet. In the name of protecting human rights and avoiding hate speech, we can end up censoring and repressing free speech directed against governments that are willing to silence the opposition. It has been shown, for instance, that certain hate speech laws have been used to crack down on posts supporting lesbian, gay, bisexual, and transgender people.

The dilemma is related to the wider debate between "militant" and "non-militant" democracies. While the latter allow any political expression that does not attempt to violently overthrow the constitution and the rule of law, the former have provisions against political parties or movements that defy the principles of that democracy. Regardless of the intention, this can easily become a way to outlaw expressions of disagreement with the government.

An added problem has come with the development of artificial intelligence (AI) and the pandemic. When many of the workers in charge of moderating content on social media had to go home due to lockdowns, it was up to AI to continue their jobs.


However, data shows that machines remove more content than human moderators, and they are prone to misidentifying what is actually harmful. This has led them to take down, for instance, health news on Covid-19 or reports of war crimes in Syria. Understanding cultural cues, context, and the latest "meme" trending on social media is an ability that AI does not yet have. If social networks refuse to expand the number of human content managers, and choose instead to rely exclusively on machines, we may move further down the slippery slope of constraining free speech.

Thus, there has been a lot of criticism of the very idea of "harmful content". This label can cover anything from fake news to legitimate attacks on a government or political party, and it can easily be weaponised to reduce the power of the political opposition. In view of these concerns, Commissioner Jourová has emphasised that the plan is to tackle illegal content, not "harmful" content, and to focus on how such content spreads online – that is, on the "freedom to reach" of that content.

This is a promising avenue of regulation, as it seems more proportionate and less dangerous for free speech. Recent national laws, such as the Avia law in France, have pressed for stronger regulation and surveillance of social networks, with provisions such as the compulsory removal, within 24 hours, of content flagged as "illegal" by users, under a broad definition of "illegal."

The French Constitutional Council has struck down that part of the law, as well as other provisions (such as the powers vested in the Conseil supérieur de l'audiovisuel, the French audiovisual regulator), arguing that the requirements of proportionality, appropriateness, and necessity for restrictions on freedom of speech were not fulfilled. Trying to prevent the spread of messages, rather than their publication, could be more respectful of liberal values and, at the same time, more effective.

The DSA is an opportunity for the EU to lead the way again, giving a proper "constitution" to the internet and enhancing its regulatory power. This proposal must go beyond a mere reinforcement of a competitive European market.

While this is undeniably positive, there must also be vigorous action to regulate social networks. Such regulation must be proportionate to its goal, avoiding excessive fines or unrealistically short time frames for the removal of illegal content. It must focus on preventing the spread of hate messages, not on trying to take down content deemed "harmful." Freedom of speech is one of the key values of liberal democracies, and even in times of difficulty, we must strive to protect democracy against the temptation to silence views we do not like.

All written content is licensed under a Creative Commons Attribution 4.0 International license.