AI has enabled groundbreaking advances, yet we are sometimes unable to explain how it works. And although opaque AI can have serious negative consequences, full transparency may not be the best alternative.
“Any sufficiently advanced technology is indistinguishable from magic” — Arthur C. Clarke
When ChatGPT was released at the end of 2022, it conjured in most of us the feeling of dealing with something magical. As often happens, the technology was not radically new: generative AI (Artificial Intelligence) is widely used in mobile and web applications, and the first chatbot (Eliza) dates back to the 1960s. Nonetheless, this event has fired the imagination of entrepreneurs and companies alike: the dawn of a new industrial revolution harbouring both great opportunities and grand challenges.
Several commentators predict that even creative jobs, once thought secure from automation, are now at risk of being supplanted by AI; lawmakers and industry associations are calling for new rules; and universities are scrambling to work out how to assess students. But what is behind this technology? And why does understanding it matter to practitioners and professors?
Dispelling the magic
The story of ChatGPT is exemplary of a larger trend. Since its inception in the mid-1950s, the quest for AI to create machines able to mimic, and even surpass, human capabilities has led to both technological breakthroughs and great disappointments. Research on AI stalled when classic computer programs, which execute a series of understandable logical instructions, were deemed incapable of truly performing advanced tasks, like creating texts from prompts or recognising images.
We are largely unable to explain or interpret how deep learning derives outputs from specific inputs
In response, a different stream of AI regained traction: deep learning. The main difference of this approach is its use of complex statistical calculations, organised in layered clusters, to generate predictions based on earlier samples. While this has allowed for groundbreaking advancements, it has come at a great cost: we are largely unable to explain or interpret how deep learning derives outputs from specific inputs, that is, to reconstruct the full chain of calculations it performs to reach its conclusions. This problem is known as ‘(un)explainable AI’ or ‘opacity’.
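For readers curious about why those layered calculations resist explanation, here is a minimal sketch in Python (not code from the authors' research; the weights are arbitrary stand-ins for values a real network would learn from data). Even in this tiny example, the prediction is just the result of chained weighted sums, and nothing in the numbers maps to a human-readable rule:

```python
import numpy as np

# A minimal feed-forward network: each layer is a weighted sum
# passed through a nonlinearity. Real networks have millions or
# billions of such weights, learned from data rather than set here.
rng = np.random.default_rng(0)
W1 = rng.normal(size=(4, 3))  # layer 1: 4 inputs -> 3 hidden units
W2 = rng.normal(size=(3, 1))  # layer 2: 3 hidden units -> 1 output

def predict(x):
    hidden = np.tanh(x @ W1)           # weighted sums + nonlinearity
    return np.tanh(hidden @ W2).item() # another weighted sum

score = predict(np.array([1.0, 0.5, -0.3, 2.0]))
# 'score' is just a number. Inspecting W1 and W2 reveals only raw
# coefficients: there is no human-readable rationale to extract.
print(score)
```

Scaling this structure up by many orders of magnitude, and letting the weights be set by training rather than by a programmer, is what makes the resulting decisions so hard to audit.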
A technology that can be trusted?
At the Esade Entrepreneurship Institute, together with Bilgehan Uzunca, I have been working on the challenges facing digital platforms that use or deal with opaque technologies. Our research shows that opaque AI can have very negative consequences: for instance, when workers do not trust orders whose rationale is incomprehensible, or when customers prefer being catered to by humans rather than by obscure software.
For instance, AI can help doctors identify diseases in patients, but surgeons are sceptical about using diagnoses that come from machines whose reasoning is opaque – as it can’t be verified. Another example is HR managers using AI to sort candidates: they can stumble into hidden, embedded biases that damage all parties involved.
Any computer-mediated program is opaque to some extent to ensure its usability
These factors have contributed to a shared view that opacity in technology is perilous and that we should strive for complete transparency — which sounds appealing and intuitive at face value. However, any computer-mediated program is opaque to some extent to ensure its usability. For instance, when you use Microsoft Word, you are unaware of its inner functioning. The same goes for most apps on smartphones. So, if not all opacity is negative, what makes it a problem in AI?
Three pieces of advice for dealing with AI
We have developed a framework of opacity that can help managers and policymakers better understand the problem, dispel the impression that AI is akin to magic, and manage its adoption. By studying how opacity is used in technology, we have been able to draft a short guide for its use.
Decision makers should understand if, when, and why some features of a technology should be opaque to users. So, what can managers and policymakers do to manage the impact of adopting AI? We suggest three main actions:
1. Clearly distinguish between types of AI being used
Not all products fully rely on unexplainable AI based on machine, or deep, learning. While these are the most impressive and potentially disruptive programs, most software is based on perfectly explainable AI and mundane coding. Hence, a first suggestion is to do away with the assumption that any AI application is extremely complex and akin to a magic black box.
This confusion can lead to negative social consequences, like organisations engaging in machinewashing, where corporate misconduct is justified by asserting that questionable decisions were taken by computers. This also means that managers should be aware of any outsourced product or service developed by third parties, to avoid being entangled in scandals. In sum, the questions are: where and why is deep learning being used? And how does this affect your organisation?
2. Understand that AI opacity is essential for strategic decision-making
Deciding who has access to the rationale behind AI choices, and when, is crucial. While organisations must ensure that workers are fully informed about their labour rights, performance measurements, and working conditions, a key issue is deciding which information to retain.
Crucially, upper management should have a clear understanding of what programmers or software engineers are doing, and which elements can be made transparent or opaque. For example, creating accessible databases with detailed explanations can simultaneously keep user interfaces simple and intuitive while increasing trust in AI-mediated decision-making.
3. Craft clear and consequential regulations
Spain and the EU are at the forefront of regulating AI and digital labour – a much-welcome endeavour. Policymakers and regulators should engage actively with software engineers and software companies to develop a shared vocabulary. This step zero would avoid umbrella terms that confuse professionals, or disenfranchise them to the point of shrugging off regulatory attempts as gibberish.
In our research, we have observed how divergent uses of words like ‘AI’, ‘opacity’, or ‘transparency’ can lead to ineffective or inefficient laws. Achieving strong regulation of new technologies that ensures safety and workers’ rights hinges on being able to differentiate between unexplainable AI and information purposely withheld from end users – so as to avoid unintended consequences.
How much transparency do we want?
While we all marvel at and fear ‘the rise of the machines’, the sheer scale of innovation brought about by AI should not cloud our judgment. Companies adopting AI may feel caught between a rock and a hard place: either choosing a technology they do not understand or missing a pivotal opportunity for growth.
Our advice can also inform democratic forms of organising work, by showing that complete transparency may be undesirable even in egalitarian or alternative platforms, like cooperatives. Understanding that only a part of AI is truly unexplainable, and that opacity can be necessary in some cases, can ensure that its adoption does not lead to harmful social, organisational, and reputational consequences.