The growing adoption of artificial intelligence, together with the accumulation and use of data as a corporate raw material, raises ethical and social questions that are difficult to ignore in a world increasingly driven by algorithms, behavioral predictions, and automated decisions.
We are already seeing organizations begin to reconfigure their businesses around data mining and processing. The implications and challenges that arise – especially in the workplace – must be considered from both a societal and regulatory perspective.
New technological advances enable us to automate the management of a warehouse or predict our workplace behavior and future performance. Here the law pushes us to ask what happens to the privacy of the people being analyzed, who owns the data generated, and who is responsible for an accident caused by an algorithm or a machine.
The challenges for regulators, however, go much further. To address the spread of these practices from a legal perspective, we can start from three premises: establish an analogy between data and oil that helps us understand the undesired damage caused by extraction; see digital platforms as agents of organizational and regulatory change with their own political agendas; and, finally, understand law as an essential space for negotiation between agents that must protect a public confronted with the dilemmas and tensions caused by the rise of AI.
Data as the new oil
If we understand that the technological structure of AI rests on algorithmic logic and that this depends on the extraction, manipulation, and integration of data in large databases, then the ‘data-as-oil’ analogy enables us to see the pollution produced by this extraction, as well as its indirect (and not always studied) damage.
Data, like oil, offers great benefits – but also immense potential threats. Data, like oil, can be more or less clean (or useful); the cost of extracting it varies; and processing it requires knowledge and technology that extracting companies often outsource in the form of an app or a specialist firm (which generates situations of inter-company dependency, as well as the distinction between data-rich and data-poor organizations). Data analysis and refinement have environmental costs (energy consumption) and pose important security challenges. If economics teaches us anything, it is that negative externalities cannot be corrected by companies alone. As with oil, we must regulate to guarantee collective welfare. There is no other agent who can perform this function.
Technological platforms as agents of institutional change
Who is behind attempts to limit regulation of AI and algorithm management? If we look at the growing controversies from a discipline such as the sociology of technology, it is obvious that the major digital platforms have taken this role as both creators of regulatory frameworks and agents of cultural change. This means that technology is routinely used as a tool in the service of an ideology (with its agenda and principles), and the regulator must be aware of this.
A major social challenge is posed by techno-optimism – an approach that underestimates the dangers, exaggerates the potential benefits of AI, and calls regulation a ‘brake on innovation’. This framing of regulation as a brake is fueled by market and media hopes for quick financial returns, and usually ends up sustaining dubious, if not openly illegal, corporate actions. Such cultural ecosystems pose a challenge for the regulator: can our society allow the perpetuation of competitive strategies based on selling at a loss (or dumping) and on the constant evasion of compliance – for example, by supporting inappropriate or dubious labor relations?
Law as a result of negotiation between agents
The European Commission's recent fines on large platforms reveal a change of trend. The recent regulatory activism around digitization and artificial intelligence, particularly within the EU framework, enables us to understand that the stage of free experimentation around data mining and the uncritical cult of technology is ending. Historically, technology has always been negotiated. The relationship between technology and society is one of symbiosis, with stages of conflict and dialogue. And law is one of the key channels through which this negotiation is catalyzed and changes are accelerated or slowed.
At the global level, we see three approaches to regulating business in the field of AI and data: the American laissez-faire approach; the recentralization of platform governance and oversight models (see China); and the increasingly interventionist European social model.
Advancing AI regulation and self-regulation: an EU priority
But how do we close the gap between the speed of technological development and the adaptation of a regulatory framework and its implementation? The urgency and size of the challenges mean that public support for the creation of private sector self-regulatory frameworks is essential, as shown by the growing number of initiatives in AI governance and responsible data management. The priorities of the legislator, particularly in the EU, should therefore be to accompany and encourage private sector self-regulation.
At the same time, and paradoxically, the new governmental regulation must be detached from certain ill-conceived premises supported by technology companies on how technology should be regulated. One example is the EU's General Data Protection Regulation (GDPR). This regulation assumes that individuals can decide whether to accept certain data mining practices and that our actions are rational, conscious, and informed. As daily experience tells us, the independence and knowledge needed to tackle the challenges of AI and the automated management of our data simply do not exist, nor can regulation depend solely on individual behavior.
Demystifying the mathematical aura of algorithms
Regulators must support new and more complex forms of accountability adapted to the reality of automation and management by algorithms. Mathematics and statistics only reveal the truth of the numbers on paper or on the screen. They tell us nothing about the timeliness, social impact, or suitability of implementing a technology or automating a process. If the former is a numerical logic, the latter is a social one.
Understanding that the result of an algorithm is basically an informed opinion (and nothing more) enables us to strip away its mathematical aura. This starting point also enables us to ask who is behind the opinion, and to reinforce the application of transparency criteria, traceability of the data chain, and explainability of the algorithm. A claim of algorithmic veracity must always be questioned – a task that cannot depend on technological knowledge alone.
Regulation of automation in the workplace: empowering people
Finally, trade union dialogue is needed to translate the questions raised by the spread of management by algorithms into the reality of each sector and company. Future standards on AI in the work environment (among them, data management and process automation in the workplace) must give a key role to workers and their representatives, while supporting their education and training so that they can understand the size and scope of these challenges. Fixing the growing power imbalance within organizations is one of the workable ways of correcting some of the challenges posed by AI.
The full presentation can be viewed here.
David Murillo, Department of Society, Politics and Sustainability at Esade, is a researcher on the ‘LaborAlgorithm’ project on algorithms and labor relations. The LABORAlgorithm project is funded by the European Regional Development Fund (ERDF) and the Spanish Ministry of Science, Innovation and Universities (State Agency of Research), project number PGC2018-100918-A-100.