Irene Unceta, Sergi Bastardas, Migle Laukyte and Xavier Domingo i Albin at the 4YFN panel discussion

Who’s really in charge? Rethinking human control in the age of AI agents

Innovation & technology 07 April 2026

AI is shifting from tool to decision-maker, taking on tasks that once required human judgment. As organizations delegate more responsibility to these systems, the challenge is not just what they can do, but how they should be used. 

Do Better Team

Watch the full video of the talk here


AI is evolving fast. At first we simply asked it questions; now it can manage workflows, make decisions, and execute tasks automatically. These so-called ‘AI agents’ can work without human intervention. Google defines AI agents as “software systems that use AI to pursue goals and complete tasks on behalf of users.”

These agents will undoubtedly boost productivity, but to what degree should humans rely on these systems? Where do we draw the line between efficiency and human autonomy?

This was the focus of a recent panel discussion at 4YFN Barcelona, hosted by Irene Unceta, Associate Professor in the Department of Data, Analytics, Technology and AI at Esade. The panel included Sergi Bastardas, Founder and CEO of OrbioAI; Migle Laukyte, Professor of Artificial Intelligence and Law at Universitat Pompeu Fabra and member of the European Group on Ethics in Science and New Technologies; and Xavier Domingo i Albin, Director of the Applied Artificial Intelligence Unit at Eurecat Technology Center. They examined not only the capabilities of AI agents but also how organizations can ensure that humans remain meaningfully in control as these systems advance. 

From assistants to agents 

So how has AI changed? AI systems began by responding to prompts, but AI agents take action. As Domingo i Albin explained, these systems are designed to understand context, define objectives, and execute a sequence of decisions. They are “moving from recommendation to autonomy.” 

This marks a significant change. When AI only provided answers to questions, we could choose whether to base our next actions or thoughts on those answers. But when an AI system decides its own next steps, the decisions it makes and the outcomes it produces are less predictable and potentially harder for us to understand.

An AI agent can learn, adapt, and coordinate with other systems, which means humans can’t always predict the system’s behavior. This represents a fundamental change for organizations: the technology is no longer only a support system; it now plays a role in shaping decisions. 

Delegation at scale: what are we really handing over? 

Companies are delegating more and more tasks to AI. Bastardas highlighted how AI agents are already in use across operational workflows such as employee management and customer service. The agents can process more data faster than humans while adapting to changing contexts and making real-time decisions.  

This is not necessarily a bad thing. Bastardas noted, “We may be overestimating human judgment—AI systems can offer more traceability and consistency.” Humans often make inconsistent, biased decisions, and it’s difficult to trace where the bias lies. By contrast, AI systems can offer greater visibility into how decisions are made, creating a clearer audit trail. 

The scale of how much work could be delegated to AI is highlighted in a recent McKinsey report: by 2030, up to 30 per cent of current work activities in the USA could be automated by generative AI, particularly in areas such as customer service, operations, and human resources.  

Organizations are not just automating tasks; they are delegating judgment. 

Automation without understanding 

The risk, then, is that humans may end up supervising systems that they don’t fully understand. There’s a growing gap between action and understanding.  

Domingo i Albin raised a critical concern: “What will happen when we have hundreds of agents cooperating? How will we detect an error? Who is responsible?” As complex AI systems interact, errors may not be immediately apparent, and responsibility becomes harder to assign. 

Human supervisors in the workplace may end up operating under an illusion of control: formally in charge, but in practice simply validating decisions made by AI systems. This way of working could lead to over-reliance, where AI outputs are accepted by default rather than critically scrutinized.

Accountability, law, and the limits of autonomy 

Legal and ethical questions about accountability must also be considered. Laukyte emphasized that the sheer scale and impact of today’s AI systems are unprecedented. It’s crucial to establish where responsibility lies for the decisions that AI makes. “Humans should set the goals—and remain accountable for the outcomes,” she argued.  

The European Union’s AI Act aligns with this. It’s a risk-based framework, focused on how systems are used rather than the technology itself. High-risk applications, such as those in healthcare or employment, are subject to stricter requirements to ensure transparency, accountability, and human oversight. 

Designing for control: how companies can get it right 

How we design AI systems and integrate AI agents into decision-making processes will be critical. Bastardas pointed out that “The real challenge is not the intelligence—it’s how the system is designed and controlled.” 

The panel discussed the importance of matching the design approach to the task: deterministic processes where outcomes must be predictable, and more flexible agent-based models where adaptation is valuable.

A cautious approach to allocating tasks to AI is also wise. Decisions could be delegated to AI incrementally, with its role increasing as it proves to be reliable. This way, organizations can reap rewards in efficiency while maintaining control, as the weightiest decisions are still governed by humans.  

A third design consideration is observability. The more complex AI systems become, the greater the need to monitor and audit them, and to understand their behavior. If we cannot comprehend what these systems are doing, our oversight becomes ineffective.

Keeping humans relevant 

The role of humans in the workplace is shifting toward supervising AI systems. The benefit is that workers will have more time for strategic tasks. But will they remain engaged with their job and the goals of their organization? Will they feel responsible for their work? And will they care about skill development? If AI is doing the bulk of the tasks, human supervisors may become disconnected.

Laukyte also highlighted the choice not to use AI at all. “Nobody should oblige us to use anything,” she noted. In certain contexts, particularly where fundamental rights are at stake, restraint may be as important as innovation.

A question of control 

The crux of the discussion was not about AI’s capabilities, but about control. As organizations delegate more decisions to machines, they must consider how responsibility is defined and maintained. 

Preserving human judgment and accountability is essential if people are to remain meaningfully in charge of the AI systems they deploy.

All written content is licensed under a Creative Commons Attribution 4.0 International license.