The hidden cost of AI: What’s the price of technological progress?
12 March 2025
AI-driven decisions shape critical areas like welfare, labor, and democracy, often amplifying existing inequalities. Without inclusive governance and corporate accountability, its pitfalls may outweigh its benefits.
As artificial intelligence becomes increasingly embedded in our daily lives, concerns about its hidden costs are rising. While AI has the potential to drive economic growth and innovation, it also raises questions about fairness and governance. At the 4YFN event during MWC Barcelona, Irene Unceta, Academic Director of the Bachelor in Artificial Intelligence for Business at Esade, moderated a discussion on the social and economic implications of AI with leading experts in the field.
The expert panel featured Marta Poblet, Senior Research Lead at The Data Tank and Adjunct Professor at RMIT University; Anna Colom, Senior Policy Lead at The Data Tank; and Julia Wallner, Chief of Staff at appliedAI Initiative GmbH. Together, they explored the consequences of AI-driven decision-making for inequality, labor and democracy, and the ethical responsibilities of companies and governments in ensuring fair technological development and implementation.
Inequality and exclusion in AI development
As noted by Unceta, AI is often framed as a force for progress—but is it equally positive for everyone? Who truly benefits, and who bears the cost? So far, AI development has been concentrated in the hands of a few powerful companies and countries. And as AI systems mirror and amplify existing societal inequalities, those who are already marginalized risk being left even further behind.
“It’s not that emerging technologies inherently come at the cost of progress,” Anna Colom explained. “But if they are not correctly developed, implemented, and governed, they do.” She emphasized that issues become particularly critical when AI is applied to public services such as healthcare, financial access, and social welfare. When algorithms determine who qualifies for a loan, insurance, or government support, biases in these systems can result in unjust outcomes.
These are not just technical issues; they have real-life consequences for millions of people
Colom referenced a real-world case in which the Dutch tax authority used an algorithm to flag suspected welfare fraud. The algorithm wrongly targeted parents with foreign backgrounds, accusing them of making fraudulent childcare benefit claims. In some cases, the repayments demanded amounted to tens of thousands of euros. Thousands of families were pushed into poverty, hundreds of children were taken into foster care, and some victims even took their own lives.
It is a stark example of how an AI-driven welfare allocation system can cause significant harm due to flawed implementation. “These are not just technical issues; they have real-life consequences for millions of people,” she stressed.
AI governance: who is heard?
The expert panel raised another pressing concern: the global governance of AI. The race for AI dominance is often framed as a competition between major powers, but what about those left out of the conversation?
“The voices of the Global South have been largely overlooked,” Marta Poblet noted. “While we often talk about global convergence on AI principles, the reality is that fairness, transparency, and accountability are often defined by Western institutions. Yet, there are valuable initiatives emerging from the Global South that we rarely hear about.”
If you lack technical expertise, you’re excluded from the conversation. That shouldn’t be the case
Concepts like Kaitiakitanga—a Māori term for stewardship that embodies guardianship, responsibility, and interconnectedness—or Ubuntu, a Nguni Bantu term from Southern Africa often translated as “humanity to others” or “I am what I am because of who we all are,” could be of the utmost value in developing AI governance that accounts for the deep and diverse ramifications of such technology.
Poblet highlighted the need for inclusive policymaking, where affected communities have a say in the regulatory process. “If you lack technical expertise, you’re excluded from the conversation. That shouldn’t be the case.” Poblet advocated for integrating collective intelligence into AI governance, ensuring that public input extends beyond legislation into implementation and monitoring.
Corporate responsibility in AI ethics
What role do companies play in addressing these gaps? “Companies need to define what fairness means in their AI ambition,” Unceta said. Understanding how AI works is crucial, and businesses should prioritize designing equitable systems from the start.
According to Julia Wallner, fairness in AI isn’t something that can be tested at deployment—it must be embedded from the very beginning. “It’s not just about regulatory compliance but about transparency and responsibility,” she said.
Beyond compliance, Wallner emphasized the need for companies to proactively address labor displacement. “You are transforming not only your company but also people’s lives,” she said. For companies, it is not just about staying competitive but also about investing in upskilling and reskilling their workforce. The World Economic Forum estimates that while AI will generate 11 million new jobs, it will replace 9 million. “But these aren’t one-to-one skill transitions—there’s a significant gap to bridge,” she noted.
Colom agreed, adding that a major challenge is not just about upskilling but also about critically assessing which skills are prioritized and which are lost in the process. She warned that over-reliance on AI-driven decision-making in labor markets, such as algorithmic management in the gig economy, can exacerbate worker vulnerabilities.
Algorithmic management and worker autonomy
Poblet raised concerns about the rise of algorithmic management, a practice that intensified with the pandemic and the shift to remote work.
“Now AI can make decisions without humans in the loop,” she explained. There are examples of algorithmic firing, where workers lose their jobs due to opaque AI decisions. “This is just the tip of the iceberg in a broader industry of algorithmic management.”
Poblet pointed out that while AI can increase productivity, it comes with significant risks to worker autonomy and mental health. “AI-driven surveillance at work can misinterpret emotions, impact credit scores within companies, and create an unfair work environment,” she added. It is a growing trend that needs to be addressed.
AI and democracy: a growing risk?
Beyond inequality and labor concerns, AI also has profound implications for democracy. Colom warned that the concentration of AI power within a handful of tech giants risks shifting from techno-capitalism to techno-authoritarianism. According to her, figures like Elon Musk shape AI narratives in ways that could be harmful to democracy.
Other risks include atomization, increased individualism, and a loss of shared social understanding. “For democracy to work, it is important that we understand each other, recognize our interconnectedness, and build bridges,” she stated. AI itself is not the problem, but it needs to be built on democratic principles and to consider human connections.
If we regulate AI as one big concept, we risk creating legal monsters. It’s like regulating football by only focusing on the ball
Poblet also stressed that AI governance must take a holistic approach. “The first thing when talking about AI governance is that we need to understand the object of the governance,” she explained. “If we regulate AI as one big concept, we risk creating legal monsters. It’s like regulating football by only focusing on the ball. The ball is essential, but what about the rest of the elements?”
She emphasized that AI is just one part of a larger digital ecosystem, one that also includes data governance and digital services and has deep implications for democracy. A sector-specific, layered approach is necessary to address the complexity of AI governance, and bringing the right stakeholders into the conversation is crucial. “It takes time, but I remain optimistic about the regulatory space,” she concluded.
Should we pause AI development?
One of the most debated questions in AI ethics is whether some AI developments should be slowed down or even halted. Poblet argued that while an outright ban on AI is unrealistic, certain autonomous systems should be paused until proper regulatory frameworks are in place.
“Some systems shouldn’t be fully banned but postponed for a while,” she said. This is the case for fully autonomous systems, since “we don’t fully understand their implications and we need to establish a proper regulatory framework.” “Some things need to slow down while we reflect on the world we want to live in,” Wallner added.
Colom agreed: “Temporary bans can be a legitimate legal tool when necessary. The food and aviation industries don’t allow untested products to enter the market—why should AI be any different?”
The road ahead: accountability and action
Wallner concluded that AI regulation and corporate responsibility must go hand in hand. “Companies are increasingly aware of these issues, but in the end, ethical AI is about what you do when no one is watching. It shouldn’t just be a marketing strategy—it should be embedded in the company’s values.”
As AI continues to evolve, the key challenge is ensuring that technological progress does not come at the cost of fairness, inclusion and democracy. With thoughtful regulation, ethical corporate practices, and an inclusive approach to governance, AI can truly serve the interests of all—not just a privileged few.