Decalogue for applying the Artificial Intelligence Regulation in the workplace

A practical guide to complying with the new obligations regarding the use of AI in the workplace, identifying the main challenges and proposing solutions for responsible implementation.

Anna Ginès i Fabrellas

This article is part of the 2024 Spanish Labor Market Report, prepared by InfoJobs and Esade. 

The introduction of artificial intelligence into companies is a strategic objective of the EU’s digital transformation. The European Union’s Digital Decade sets goals for 2030 regarding infrastructure, digital skills, businesses, and public services. In terms of artificial intelligence, the goal is for 75% of companies to use AI systems in their operations, an ambitious challenge considering that in 2024, only 8% of companies in the European Union (9.2% in Spain) were doing so.

The workplace is an expanding field for the use of artificial intelligence systems. Smart technology is transforming labor relations by enabling new forms of business process management, from automated recruitment to the automated assignment of tasks and the evaluation and monitoring of employee performance.

To ensure a sustainable, human-centered digital transformation, the integration of artificial intelligence into companies must comply with the AI Regulation, which establishes a phased implementation schedule from February 2025 to August 2027. 

The Regulation aims to establish harmonized rules on artificial intelligence in the European Union to foster the development and use of trustworthy, human-centered systems, ensuring the protection of people's health, safety, and fundamental rights.

It follows a risk-based approach, classifying AI systems according to the level of risk they pose (unacceptable risk, high risk, and limited risk) and setting out corresponding obligations for information, transparency, and human oversight of AI systems. 

However, the inherent complexity of the technology and the new regulation could hinder companies’ digitalization processes. In this regard, the following decalogue outlines the application of the Artificial Intelligence Regulation in the workplace: 

  1. Identification of artificial intelligence systems

    It is important to determine when a system should be classified as artificial intelligence—and thus subject to the Regulation—and when it should not. The Regulation defines an AI system as a machine-based system designed to operate with varying levels of autonomy and that may exhibit adaptability after deployment, and that, for explicit or implicit objectives, infers from input data how to generate outputs such as predictions, content, recommendations, or decisions that can influence physical or virtual environments. Three key elements determine whether a system qualifies as AI: (i) being machine-based (thus excluding human processes), (ii) autonomy, allowing the system to operate independently, and (iii) inference capability, meaning the system generates outputs (such as predictions, content, recommendations, or decisions) based on received input data. 
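
    By way of illustration only, this three-element test can be expressed as a simple internal checklist. The Python sketch below is a hypothetical triage aid for compliance reviews, not a tool for legal qualification; the SystemProfile structure and all names are invented for the example.

    ```python
    from dataclasses import dataclass

    @dataclass
    class SystemProfile:
        """Hypothetical description of a workplace tool under review."""
        machine_based: bool          # runs as software/hardware, not a human process
        operates_autonomously: bool  # some degree of independent operation
        infers_outputs: bool         # derives predictions, content, recommendations
                                     # or decisions from input data

    def may_qualify_as_ai(profile: SystemProfile) -> bool:
        """Return True when all three cumulative elements of the Regulation's
        definition appear to be present. A True result only flags the system
        for legal review; it is not a legal conclusion."""
        return (profile.machine_based
                and profile.operates_autonomously
                and profile.infers_outputs)

    # Example: a fixed-rule spreadsheet macro would fail the inference element,
    # while a CV-ranking model would typically satisfy all three.
    cv_ranker = SystemProfile(machine_based=True,
                              operates_autonomously=True,
                              infers_outputs=True)
    print(may_qualify_as_ai(cv_ranker))  # True -> escalate to legal review
    ```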

  2. Risk level assessment

    AI systems used in the workplace are generally classified as high-risk systems because they can appreciably affect people's future career prospects, livelihoods, and rights (Recital 57). Specifically, high-risk systems include those used for (i) hiring or selecting employees and (ii) making decisions on working conditions, promotions, contract termination, task assignment based on behavior or personal characteristics, or monitoring and evaluating employee performance and conduct.

    Additionally, certain systems used in the workplace are prohibited. These include systems designed to infer emotions within the context of employment (except when used for medical or safety purposes) or to deduce sensitive personal characteristics such as race, political opinions, trade union membership, religious or philosophical beliefs, sex life, or sexual orientation. It is important to note that not all facial recognition systems are banned—only those intended to infer emotions or predict sensitive information about individuals. 

    Moreover, some AI systems used at work are considered low-risk, such as chatbots. However, the risk level is determined not by the technology itself but by how it is used. For instance, if chatbots are used in decision-making during recruitment processes, they are considered high-risk and must comply with the associated legal obligations. Similarly, when AI systems are introduced as productivity tools, for automating production processes, or for quality control of products or services, they are generally not classified as high-risk systems—unless they are also used to monitor or evaluate employee performance. 
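
    As a rough illustration of this use-based (rather than technology-based) logic, the sketch below maps hypothetical intended uses to the Regulation's risk tiers. The use categories and their mapping are simplified assumptions for internal triage, not an authoritative reading of the Regulation.

    ```python
    from enum import Enum

    class RiskTier(Enum):
        PROHIBITED = "prohibited"
        HIGH = "high-risk"
        LIMITED = "limited-risk"

    # Simplified, assumed mapping of workplace uses to risk tiers.
    # Real classification requires case-by-case legal analysis.
    PROHIBITED_USES = {"emotion_inference", "sensitive_trait_inference"}
    HIGH_RISK_USES = {"recruitment", "promotion", "termination",
                      "task_allocation_by_behavior", "performance_monitoring"}

    def triage(intended_uses: set[str]) -> RiskTier:
        """Classify by the riskiest intended use: the same chatbot is
        limited-risk as a helpdesk tool but high-risk if it screens candidates."""
        if intended_uses & PROHIBITED_USES:
            return RiskTier.PROHIBITED
        if intended_uses & HIGH_RISK_USES:
            return RiskTier.HIGH
        return RiskTier.LIMITED

    print(triage({"helpdesk_chatbot"}))                 # RiskTier.LIMITED
    print(triage({"helpdesk_chatbot", "recruitment"}))  # RiskTier.HIGH
    ```

    Classifying by the riskiest intended use mirrors the Regulation's logic that obligations attach to how a system is deployed, not to the underlying technology.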

  3. Identification of the company's role

    The Regulation assigns different obligations to providers of high-risk AI systems and to deployers—companies that implement such systems. Companies that develop their own systems are considered both providers and deployers and must meet the obligations applicable to both roles. 

  4. Ensuring transparency and explainability

    High-risk AI systems must be transparent and explainable. Providers must design and develop systems that ensure transparency and interpretability, include user instructions describing the systems’ characteristics, capabilities, and limitations, and guarantee appropriate levels of accuracy, robustness, and cybersecurity. 

    Deployers must inform employees and their legal representatives about the use of AI systems in the workplace (prior to their implementation) and provide a clear and meaningful explanation of the system’s role in decision-making processes, including the main factors used to reach a decision. 
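
    Purely as a sketch of what such a "clear and meaningful explanation" might draw on, the snippet below turns hypothetical factor weights from a screening model into a plain-language summary. The factor names, weights, and the explain_decision helper are invented for illustration; real systems would derive these factors from the provider's instructions for use and the system's documentation.

    ```python
    def explain_decision(factors: dict[str, float], top_n: int = 3) -> str:
        """Summarize the main factors behind an AI-assisted decision.
        `factors` maps a human-readable factor name to its signed
        contribution to the outcome; values here are illustrative only."""
        ranked = sorted(factors.items(), key=lambda kv: abs(kv[1]), reverse=True)
        lines = ["This recommendation was produced with the support of an AI "
                 "system. The main factors considered were:"]
        for name, weight in ranked[:top_n]:
            direction = "in favor" if weight > 0 else "against"
            lines.append(f"- {name} (weighed {direction})")
        lines.append("A human reviewer made the final decision.")
        return "\n".join(lines)

    # Hypothetical factor contributions from a screening model.
    print(explain_decision({"years of relevant experience": 0.42,
                            "required certification held": 0.31,
                            "distance from workplace": -0.05}))
    ```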

  5. Ensuring human oversight

    High-risk AI systems must be designed and developed so that natural persons can effectively oversee them throughout their operation, in order to prevent or mitigate risks to health, safety, or fundamental rights.

    Deploying companies must assign system oversight to individuals who have the competence, training, and authority to understand the system’s capabilities and limitations (e.g., to detect and resolve anomalies or malfunctions), who are aware of the risk of overreliance on automated results, and who can correctly interpret results, choose not to use the system, disregard its outputs, or intervene in its operation. The challenge lies in establishing effective human oversight mechanisms, as required by the Regulation, without undermining the automated nature of the process. 
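
    One common design pattern, sketched below under assumptions of our own, routes each high-impact output through a blocking human review step in which a trained, authorized reviewer can confirm the output, substitute their own decision, or discard the output entirely, so the automated pipeline keeps running while oversight remains effective. All names and the confidence threshold are hypothetical.

    ```python
    from dataclasses import dataclass
    from typing import Callable

    @dataclass
    class Output:
        subject: str          # e.g., an employee or candidate identifier
        recommendation: str
        confidence: float

    def oversee(output: Output, review: Callable[[Output], str]) -> str:
        """Human-in-the-loop gate: the system proposes, a trained and
        authorized reviewer disposes. The reviewer's decision, not the
        raw output, is what the process acts on."""
        return review(output)  # blocking human step for high-risk outputs

    # Hypothetical reviewer who distrusts low-confidence recommendations,
    # illustrating awareness of the risk of overreliance on automated results.
    def cautious_reviewer(o: Output) -> str:
        if o.confidence < 0.8:
            return f"discarded automated output for {o.subject}; manual assessment"
        return f"confirmed '{o.recommendation}' for {o.subject} after review"

    print(oversee(Output("candidate-42", "advance to interview", 0.65),
                  cautious_reviewer))
    ```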

  6. Risk management system

    Providers of high-risk AI systems, including companies that develop such systems for their own use, must establish and maintain a risk management system throughout the system’s lifecycle. This includes identifying and analyzing known and foreseeable risks to health, safety, and fundamental rights, and adopting measures to mitigate those risks.

  7. Personal data protection impact assessment

    The introduction of AI systems must comply with personal data protection obligations under the GDPR, particularly respecting the principles of lawfulness, fairness and transparency, purpose limitation, and data minimization, as well as conducting a data protection impact assessment. 

  8. Data quality and non-discrimination

    Data used in high-risk AI systems must be relevant, complete, and sufficiently representative. Companies must assess data for potential biases that could impact health and safety, negatively affect fundamental rights, or lead to unlawful discrimination, and take appropriate measures to prevent and mitigate identified biases. 
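
    As one concrete, simplified example of such a bias check, the sketch below compares selection rates across applicant groups in invented recruitment data and flags disparities using the informal four-fifths heuristic. The threshold and the data are illustrative assumptions, not a standard set by the Regulation.

    ```python
    def selection_rates(outcomes: dict[str, tuple[int, int]]) -> dict[str, float]:
        """`outcomes` maps group -> (selected, total applicants)."""
        return {g: sel / tot for g, (sel, tot) in outcomes.items()}

    def flag_disparate_impact(outcomes: dict[str, tuple[int, int]],
                              threshold: float = 0.8) -> list[str]:
        """Flag groups whose selection rate falls below `threshold` times
        the highest group's rate (the informal 'four-fifths' heuristic)."""
        rates = selection_rates(outcomes)
        best = max(rates.values())
        return [g for g, r in rates.items() if r < threshold * best]

    # Invented screening data: group -> (selected, applicants).
    data = {"group_a": (45, 100), "group_b": (28, 100)}
    print(flag_disparate_impact(data))  # ['group_b'] -> investigate and mitigate
    ```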

  9. Training and skills development

    Companies must ensure that individuals responsible for deploying AI systems have an adequate level of AI literacy. 

  10. Legality, necessity, and environmental impact

    While the Artificial Intelligence Regulation sets requirements for AI systems, it does not determine their legality in specific cases. Companies must still comply with regulations on data protection, equality and non-discrimination, privacy, occupational risk prevention, and others. For example, a system that monitors and evaluates employee performance could still be illegal if, despite complying with the Regulation, it increases work intensity to a degree that harms employees’ mental or physical health. 

Finally, although not explicitly mentioned in the Regulation, it is important to assess the necessity of introducing a high-risk AI system into the workplace, considering the availability of less invasive alternatives and analyzing its environmental impact. 

All written content is licensed under a Creative Commons Attribution 4.0 International license.