Under pressure from the algorithm: Workers in the age of AI
AI is transforming workplaces, but not always for the better. Beneath the promise of speed and efficiency lies a hidden cost: growing surveillance that pressures workers to perform like machines.
AI promises faster, smarter, more efficient workplaces. But who pays the price? An AI app assisting an office worker can read and analyze a spreadsheet in seconds, or distill a complex text into simple facts in the blink of an eye, saving hours of slogging through heavy tasks. Used well, AI can significantly boost efficiency. But there is another side to AI in the workplace: when algorithms are used to manage people rather than support them.
Uma Rani, Senior Economist at the International Labour Organization, has been examining the human side of AI adoption in the workplace. Drawing on research in the automotive, healthcare, and logistics sectors, she explained how productivity gains often come with hidden costs for workers.
The impact varies across regions: in the Global South, where weaker labor protections and temporary contracts are common, the pressure on workers is severe. In the Global North, stronger unions and legal protections can help reduce the risks to workers' wellbeing.
These issues were addressed during an Esade Law School seminar on the promises and perils of the digitalization of work. The event was part of a seminar series of the DigitalWORK research project, led by professors Anna Ginès i Fabrellas and Raquel Serrano Olivares, which explores how digital technologies are transforming work and how to promote fair, equitable and transparent labor conditions.
How AI is used in the workplace
The current buzz around AI focuses on how it increases speed and efficiency. But, as Rani asks, what happens to workers’ daily lives when algorithmic management becomes the norm?
Monitoring, evaluation, and decision-making, once overseen by human managers, are now more likely to be delegated to machines. Of course, workplace monitoring isn’t new: punch cards, barcodes, and CCTV have been used to track worker activity for decades. But unlike these legacy tools, AI can collect, analyze, and act on data automatically, often with no human oversight, and certainly without empathy. That makes surveillance far more intrusive and far less forgiving.
“Optimizing with AI can increase productivity and efficiency, but it can negatively impact workers’ health and safety by pressuring them to keep up with an enforced pace,” Rani explained in her talk.
When algorithms dictate workplace rules, there’s no room for human flexibility. A machine doesn’t recognize that a worker might have had a bad night’s sleep or is struggling with a health problem. The algorithm expects the same pace, day in, day out. Workers are expected to work like robots.
Automotive industry: Working like a machine
This dynamic is especially stark in the automotive industry. In countries such as Argentina and Malaysia, Rani found that temporary contracts combined with algorithmic tracking placed a huge burden on workers. Every movement is measured, every output compared. Failing to keep up brings the risk of a contract not being renewed.
This creates a workforce driven by fear: fear of underperforming and thus losing their livelihood. The long-term stress and psychological burden are undeniable and probably unsustainable. As Rani put it: “Managers will tell you that AI boosts productivity and efficiency, but workers are being pushed to the extreme to get every penny out of them, which inflicts huge consequences on them.”
Companies using AI to manage the workforce should heed the warning. AI may raise efficiency, but it achieves this by squeezing the most vulnerable workers the hardest.
Healthcare against the clock
AI is also increasingly used in the healthcare sector. One example is India, where dashboards are used to compare doctors’ efficiency. On the positive side, algorithms can speed up diagnoses and enable smoother scheduling. Patient flow improves, coordination is easier, and tools such as WhatsApp groups make communication instant.
But the risks should not be overlooked. Sharing patient data informally through apps can result in confidential information being leaked. And due to the sheer number of patients, there’s constant pressure on medical professionals to speed up treatment, which leaves doctors and nurses stressed.
Crucially, the subtle differences between patients can be lost. Two people may present with the same illness but require different treatment paths due to secondary illnesses or personal preferences. When efficiency is the overriding goal, those differences risk being overlooked. In Rani’s view, algorithm-driven healthcare risks missing the nuances between individual patients, leading to one-size-fits-all treatment.
Warehouses run by algorithms
In logistics, algorithmic management is now deeply embedded in the workplace. Warehouses are run by dashboards that allocate tasks, rate productivity, and track the time taken on each task. Managers can oversee every movement on the shop floor in a way that feels distinctly ‘Big Brother’-esque, eroding workers’ right to privacy.
The problem is not just the intensity of the monitoring, but the fact that employees are often not even aware of its scale. This lack of transparency raises serious ethical concerns. Importantly, Rani’s research also found that decisions about adopting these AI management systems are commonly made with little consultation with staff or unions. Efficiency, once again, comes first. Worker wellbeing isn’t even part of the equation.
Germany: Where workers have a voice
Fortunately, not every country is following the same path. Existing EU laws create fairer terms for workers. Germany, for example, offers a model of best practice through its long-standing system of co-determination: workers, via works councils, have a formal voice in decisions involving technology adoption. Existing laws such as the GDPR and the Works Constitution Act are actively enforced, giving employees a measure of protection from invasive surveillance.
The German example shows how leveraging existing regulations and opening the way for dialogue can create a balance between improved efficiency and worker wellbeing. When employees feel they have a voice, stress is reduced, and AI can be integrated more responsibly.
Balancing efficiency with dignity
Rani believes that laws on their own are not enough. She argues that unions need to be empowered with regulatory and institutional backing to protect workers in the age of AI. Social dialogue is not optional—it is essential if the benefits of AI are to be reaped without sidelining workers’ rights.
There are two challenges here: to make better use of the regulations already in existence, and to create new legislation that reflects the realities of AI-driven management. As these systems become a permanent fixture of modern workplaces, ensuring fairness, privacy, and dignity must be part of the conversation.
AI’s integration into the workplace is only just beginning. But before we blindly embrace it, there’s a question to answer: do we allow it to become a tool that dehumanizes workers, or do we shape it into something that supports both efficiency and human wellbeing?