Why AI is only as safe as the people who implement it
Human involvement is essential to the design, implementation and oversight of AI — but how can we ensure that mistakes are minimized?
In July, a UK school was reprimanded by the Information Commissioner’s Office (ICO) for using facial recognition technology without the consent of parents or students.
The school, for pupils aged between 11 and 18, introduced the technology to take cashless payments in its cafeteria. But students were not given the opportunity to agree to its use, and parents were only given the option to opt out rather than being asked for their express permission.
UK law requires any organization using facial recognition software to have a data protection impact assessment (DPIA) in place. The assessment should identify and manage the risks of processing sensitive data such as biometric information, and requires explicit consent from the parents of minors, or from students themselves once they turn 18.
There is a large gap between the people who design the technology and the people using it
“A DPIA is required by law; it’s not a tick-box exercise,” the ICO’s head of privacy innovation, Lynne Currie, said in a statement. “It’s a vital tool that protects the rights of users, provides accountability and encourages organizations to think about data protection at the start of a project.”
It’s a stark reminder that the use of AI is only as safe as the people who implement it.
Preserving dignity
“The fact that there is a person between the AI and the final decision is not a guarantee of anything,” says Irene Unceta, Esade Assistant Professor in the Department of Operations, Innovation and Data Science. Speaking to Retina magazine, Unceta continues: “There is a large gap between the people who design the models and the people who use them.”
Facial recognition technology is just one example of this gap. Multi-purpose AI software can be purchased easily, with no license or pre-vetting required, and regulations can only be enforced once an organization is found to have breached them.
“The challenge of preserving human autonomy and dignity in the face of increasingly sophisticated systems must be highlighted,” says Esade Professor of Business Strategy Alberto Núñez Fernández.
Writing in Ethics magazine last year, the pair continue: “AI is no longer restricted to science fiction; it’s an integral part of our daily lives. Priority should be given to the ethical treatment of users, respect for cultural particularities and the broader social impact of these technologies.”
Invading our space
As the use of AI becomes integral to our business and personal affairs — whether it’s for facial recognition, job candidate selection or insurance quotes — it increasingly replaces human decision-making. And according to Unceta, this is an area we must watch closely.
Delegating decision-making to AI thinking that it will make better decisions than humans is a complete mistake
“AI has been automating tasks for decades,” she says. “The problem is that it is now being used to make decisions. That is an inherently human space. Delegating decision-making to AI thinking that it will make better decisions than humans is a complete mistake. The model only knows what we convey to it.”
And as Anna Ginès i Fabrellas, Esade Associate Professor in the Department of Law, points out, the teams who create and train these models are mired in a diversity crisis.
“There’s an absence of a racial or gender perspective in the design or programming of technological products and artificial intelligence systems,” Ginès explains.
“The industry no longer reflects the percentage of women or Black people in technology studies. It is therefore necessary to guarantee the racial and gender literacy of the people who work in the sector, and to challenge the unequal power structures that currently characterize the artificial intelligence industry.”
Taking responsibility
As Unceta points out, the end user also bears responsibility for fair implementation: “These models cannot have a universal will. The question is whether humans are trained to understand how the model works, why it gives the result it does, and whether that result is correct.”
Núñez Fernández agrees, asking: “Who takes responsibility when an AI system makes a decision?”
“We need a transparency policy for the operation of algorithms, one that protects the common good and ensures control of AI systems without breaching the intellectual property rights of their developers. Those same developers must be responsible, partially or totally, for the results and consequences of their creations.”
The development of AI technology remains shrouded in secrecy
To create accountability, he suggests, collaboration is key. “Finding the right balance between innovation and responsibility requires a collaborative effort involving technologists, philosophers and policymakers.”
False economies
But according to Ginès, this level of transparency isn’t just lacking — the development of AI technology remains shrouded in secrecy.
“We have to remember that this technology is the result of business innovations that are placed on the market without scientific scrutiny or democratic control,” she says. “The absence of transparency prevents scientific and journalistic research on the social impact of these systems.”
And it’s not just a lack of transparency: AI’s propensity for mistakes can create more work than it saves. Unceta explains: “There’s a reason why algorithms haven’t replaced radiologists. They’re good, and they’re fast, but they make mistakes. I’ll be the first to defend transparent models that automate repetitive tasks. But there are many cases where any extra performance would be negated by the complexity they add.”
Proceed with caution
The hesitance of the medical sector to entrust AI with decision-making will come as a relief to many. But how can this caution be extended to apply the brakes to the rapid roll-out of technology that has a significant impact on people’s lives? Regulations exist, and more laws are in development, yet accountability remains lacking.
“True ethical governance goes beyond legal compliance,” says Núñez Fernández. “It is crucial to foster a culture of accountability within the community of AI developers.
“We stand at a crossroads, and the decisions we make today will shape the future of humanity’s relationship with technology. But instead of seeing it as a threat to human dignity, we should embrace the potential for a symbiotic relationship.”