The pandemic afflicting us today raises one of the transcendental issues of our time: deciding whether we want to reorganise ourselves as a society whose framework for coexistence rests on humanistic or on dehumanising technology.
Do we want technology with an underlying ethos founded on human dignity or do we want technology grounded in nihilism and the absence of moral limits, a technology which seeks the efficient maximisation of digital capacities in utilitarian terms and sees human beings as a means and not as an end in themselves?
I am aware that this dilemma is not new.
It has been with us since humanity began to reflect on the use of technology as something supranatural which allowed us to organise the world and transform it for material gain.
This question has intensified over the last few years due to the transformative power that technology has unleashed in both economic and social terms. This transformation has been possible with the advent of so-called cognitive capitalism, developed through the economy of data-driven platforms.
In terms of this dilemma, the difference between past and present is that the critical tension it originally provoked has accelerated and intensified abruptly in recent times. This is evident in the propagation of the Covid-19 pandemic and the profound changes in both culture and mindset. We can no longer talk about a digital revolution, transformation or transition; we are now living a true "digital consummation."
In fact, we put the digital transition behind us with a transformation so intense that the digital revolution is the new status quo.
The challenge today is knowing how to manage this new state of affairs and according to what narrative.
In short, today’s reality underscores that the planet’s structure and the globalisation moulding it are technology-driven.
As mentioned, the digital revolution is now history, just like the digital transition.
Digital transformation occurred and was consummated the moment analogue freedoms became essentially digital experiences with the generalised states of emergency and lockdowns.
Further still, a substantial part of our identity is also now digital.
We work, have fun and communicate online.
We generate our own digital footprints, which we expand every day. These footprints are so dense and precise that they almost perfectly replicate who we are.
In fact, those with the right algorithmic skills can use our footprints to know what we think, what we feel or what we need as we live trapped under the cloud of anxiety and urgency, a cloud which has been hanging over our heads for months.
The exorbitant flow of data unleashed by the pandemic is now driving us towards a type of biggest data.
First, we are stirring the waters of network traffic with a mega-tsunami of data about ourselves. Based on our digital interactions, our data have an incalculable aggregate value due to both their quantity and quality.
Spending so many hours online, working and seeing our faces as we talk through screens in all manner of contexts creates an extraordinary registry of nuances which exponentially accelerates machine-learning processes.
This repository of information and data about the psychological depths of human beings will, without doubt, lay the groundwork for artificial intelligence to continue down the path towards general intelligence. Never before have we helped machines know so much about us and learn so much from us.
And, second, we are conferring extraordinary powers to watch over and monitor our identities in a context of need. In addition, we have stripped ourselves of our privacy without any reservations or any sense of mistrust.
Using the justified primacy of health over privacy during our states of emergency as an excuse, we are contributing to the creation of an extraordinary power which we then cede to those who register, store and manage our information, all without taking any legal precautions whatsoever.
Today, the panopticon is more real than ever before. I don’t refer to the prison system imagined by Bentham and reinterpreted by Foucault in his essay, Discipline and punish, to explain the capitalist normalisation model.
Rather, I refer to something more mythical and symbolic, something which lurks beneath normativity to enter our subconscious.
In fact, we are sculpting a technological god reminiscent of the one the ancient Greeks called Argus Panoptes, a god described as all-seeing and ever-watchful, with eyes all over its body to register everything it saw. It could do this even as it slept, closing only one eyelid while the rest remained open.
The other gods used Argus Panoptes to monitor each other, turning Olympus into a moral police regime, similar to that described in the film The lives of others.
In the end, Zeus took matters into his own hands and ordered Hermes to kill Argus Panoptes using, interestingly enough, art. Hermes managed to lull the all-seeing god into a deep sleep with the sound of a flute.
Hermes killed Argus Panoptes when the latter was completely asleep. It's worth mentioning, once more, that he did so using art.
Without realising, we have created another panoptic god today with the consummation of a technological era replete with eyes that never close and never rest.
It is an era which introduces a new time and resets the previous one.
This new era uses algorithms controlled by technological giants whose balance sheets continue to grow and achieve astronomical figures.
It’s surprising that, while the State demonstrates its analogue power by stopping reality, confining all its citizenry to their homes and paralysing the economy without disturbing social harmony, the cyberworld seems to be freeing itself from the State’s control and sovereignty.
As we spend so many hours online, a new parallel reality has emerged which is substituting for physical reality.
Large technological firms are increasingly dominating this virtual reality which continues to expand, while data traffic, which substitutes our corporeal individuality, is also growing.
In fact, we are witnessing an extraordinary phenomenon in which human beings are evolving into a type of Homo digitalis that dematerialises as we come into contact with screens.
Given the exceptional nature of the Covid-19 pandemic, algorithmic governance has been imposed erga omnes and without opposition, substituting our physical identity with a digital one, an identity which, in addition, carries no digital citizenship or online rights to protect citizens.
I refer here to an identity which dilutes us as people.
It functions based on what we were and creates the conditions to define us as online workers, digital content consumers and app users. Some of these applications serve to track infected individuals and, as occurs with those promoted by the Pan-European Privacy-Preserving Proximity Tracing (PEPP-PT) consortium, they aim to neutralise the coronavirus’ propagation and stop any new outbreaks.
To do this, they develop capacities which group together similar social interactions. Storing and centralising these interactions facilitates the apps' monitoring capabilities, which could transcend their primary epidemiological functions.
Indeed, those apps could conceivably keep track of our movements 24 hours a day, 365 days a year and, while they’re at it, identify who was with us and what our connections are. Combined with GPS data, this could imply an architecture capable of carrying out "matching": revealing our atomised data and putting a name and surname to our digital footprints.
The consequences of promoting biggest data in this exceptional context are troubling.
The first concern is that, without democratic control over this technological empowerment process, we run the risk of our liberal democracy becoming a technological dictatorship.
Transitioning from a foundation based on cooperative freedom, we could see the establishment of a technological order which monitors and tracks our mobility and our health at the service of a privatised Cyber-Leviathan. We will only be able to impede this dystopian scenario with digital rights and guarantees which protect us and our privacy.
This requires reflection, an essential part of technological humanism, and also implies great effort, imagining a catalogue of new rights and guarantees which give shape to and establish a digital citizenship upon which to found a cyber-democracy that reconciles technology and freedom.
This connects to the second consequence, which is related to the spread of inequality promoted by the world's digital consummation.
Unless the burdens produced by the economic recession are shared equally by all and redressed by public policies and actions which mitigate socially harmful effects, this growing inequality will stem not only from generalised impoverishment; it will also be amplified by the aggregate effects produced by our data, which platforms monetise without any solidarity-based compensation.
To avoid this inequity, we have to enact regulations to govern algorithms and artificial intelligence, ensuring these tools work at the service of human beings by establishing trustworthy ethical guidelines.
To achieve this we have to defend humanistic values as well as public policies centred on all that's human.
The Covid-19 pandemic cannot impoverish the many or serve to allow the few to disproportionately benefit from the needs of the many. This is not ethically admissible because it stems from a digital hyperactivity which we ourselves generate as a result of the pain provoked by the coronavirus and the anxiety fomented by our confinement.
Lastly, we also have to become aware that a neo-reactionary virus has taken root, circulating through social media and undermining trust in democratic governments.
I refer here to an authoritarian strain which promulgates disinformation-based dynamics to undermine our freedom even more.
The inability of so many to distinguish truth from “fake news” leads to the propagation of an ordinary digital man who is the ideal subject, as Hannah Arendt warned, of technological totalitarianism. In Hitler’s Germany or Stalin’s Soviet Union, that subject was not an utterly convinced Nazi or Communist but, rather, "people for whom the distinction between fact and fiction (i.e., the reality of experience) and the distinction between true and false (i.e., the standards of thought) no longer exist."
This unsettling reality could well come to fruition, as Michiko Kakutani warns in her book, The death of truth.
I have no doubt whatsoever that liberal democracy will overcome the Covid-19 pandemic, but it will have to convince us that it did the best it could and for the good of all.
We have a lot on the line in this bloodless battle centred on communication. The cyberworld cannot accommodate lies as a structure of daily propaganda. This is especially true if, based on that structure, cyber-populism has space to grow and propagate the nihilistic dehumanisation it defends, a process sustained by technology and by the mass consumption of digital communication favoured by social media.
Precisely, the risk of dehumanisation to which the digital consummation we are currently undergoing may contribute is one of the key challenges liberal democracy faces.
We cannot ignore that nihilism relativises truth, condemns empathy, mocks tolerance, shuns responsible freedom and questions the rights of those who don’t think like its proponents.
If we are incapable of distinguishing lies from truth, the epistemological and moral foundation on which democracy rests will come apart. As Thomas Jefferson explained, democracy can only be based on the assumption that human beings "may be governed by reason and truth." However, this requires governments to guarantee that the doors are always open to them: "to open the doors of truth, and to fortify the habit of testing every thing by reason, are the most effectual manacles we can rivet on the hands of our successors to prevent their manacling the people with their own consent."
Otherwise, fear and anger will join forces and organise themselves, and we will pay as a society with the propagation of a political authoritarianism which looks for culprits based on lies.
This possibility exists: negativity and rancour accumulating every day through social media, and a cyber-populism expanding its conspiratorial toxicity to weaken and undermine democracy’s institutions and favour the rise of a dictatorship.
In this respect, we cannot rule out this outcome given the 21st century’s results thus far. In fact, this century is profoundly anti-liberal. Democracy’s credibility has taken hit after hit and, seen as a series of successive crises, these blows have made democracy increasingly vulnerable.
For example, 9/11 took away our sense of security and spawned populism.
The 2008 financial crisis took away our prosperity and threw us into the arms of populists.
Today, the pandemic is robbing us of our health and throwing us at the feet of a Cyber-Leviathan which is consummating an authoritarian project based on surveillance, control and inequality founded on digital dehumanisation.
Thinking critically about the future is beginning to be as urgent as fighting against the pandemic.
This is a question we have to address through technological humanism, which can help us avoid using technology in ways that lead us to a dystopian future. Such a future would confirm Paul Virilio's warning in the 1990s, when he described the cyberworld as the politics of the worst.
To avoid this, we have to resolve the tensions I have just described, adopting a narrative that demands public policies centred on what's human.
We need policies that re-instil our faith in democracy's hopeful and exciting potentiality. This can only be inspired by a humanism that, in practice, places as much responsibility in human hands as the technical power available to them.
Consequently, we're talking about a narrative of justice.
It is a narrative that provides coherence to and harmoniously integrates, as Hans Jonas envisioned, a relation between human beings and technology, a relation that establishes a strategic alliance between human dignity and technology.
In this sense, it is essential we focus and channel the digital consummation we're undergoing within a design centred around an ethos founded on humanity and for humanity.
This implies an ethos rooted in human beings’ fragility and vulnerability, one which prioritises their defence and protection against the adversity that an unfocused technology, revealing itself as an irresistible desire for power, may imply.
Ortega y Gasset can help us here.
This is especially the case because he felt that technology only makes sense if it serves human imagination: technical capacity can only be fostered in beings for whom intelligence is a creator of life projects.
For Ortega y Gasset, in fact, technology arose from the fantasy of trying to respond with that technology to the human need for wellbeing.
From this stems Ortega y Gasset’s insistence on technological intelligence within human beings, allowing them to live their lives "with" technology and not "from" it or based on it.
Technology’s instrumentality is fundamental for a humanistic view of today’s digital consummation, which we are living as if it were a type of destiny.
Thus, this implies seeing a horizon of possibilities for humans in this technology. Amongst other things, this is because it confers a "supranatural" sense to wellbeing, allowing it to determine the content of life by providing meaning and sense within the agonising context we’re traversing as a result of the pandemic.
That’s why the starting point when defining technological humanism is defence, both in terms of technological design and praxis, of an ethical horizon based on human dignity, its axiological centrality and the possession of some fundamental rights which preserve that dignity.
The EU’s High-Level Expert Group on Artificial Intelligence (AI HLEG) pointed to the following in its ethical guidelines to achieve trustworthy AI:
"We believe in an approach to AI ethics based on the fundamental rights enshrined in the EU Treaties, the EU Charter and international human rights law. Respect for fundamental rights, within a framework of democracy and the rule of law, provides the most promising foundations for identifying abstract ethical principles and values, which can be operationalised in the context of AI."
This “promising” framework is the foundation upon which technological humanism has to undertake its work, based on a triad of actions that make it possible to promote a collective narrative. The latter implies:
- First, a structure of digital rights.
- Second, the ethical use of data and the design of algorithms to manage them.
- Third, a digital innovation architecture which, in keeping with the Vienna Manifesto on Digital Humanism, ensures the co-evolution of technology and humanity based on moral parameters which avoid the automatism of technological evolution and choice.
The purpose is to legally and politically ensure human control over this evolution and choice, subordinating them to ethical principles which make human dignity recognisable and central to technology's design. This purpose drives technological humanism's contribution to the narrative defining the cyberworld or, paraphrasing Virilio, the politics of what’s best for humanity.
Technological humanism has to respond to the question raised by Heidegger when he indicated that technology was the mode and way that the human figure had successfully mobilised and transformed the world.
With respect to this planetary dominion Heidegger describes, there cannot be a response, as proposed by Jünger in The worker, based on a desire for power which maximises the planet’s subjugation to the transformative capacity of human beings seen in utilitarian terms.
Contrarily, we need a different human attitude.
We need an attitude born of a creative impulse and not of power for the sake of power. The latter, associated with technology, projects a longing to transcend limits, leading to the “hubris” the ancient Greeks warned us about.
The 21st century needs a pact between technology and humanity based on creation and limits.
It needs a pact which seeks harmony, not utility.
It needs a pact founded on a new metaphysics which gives sense to that desire for action which is unavoidable. As we’re witnessing in terms of the pandemic’s digital experience, our technology-driven being in the world is also provoking an ontological revolution in terms of our identity. We have to govern this from a new humanism.
This humanism cannot make us forget about our need for care, empathy, generosity and solidarity towards others. It has to continue to demand our privacy and the freedom to choose in terms of technological advances, seeking respect for our wellbeing and our moral autonomy.
This humanism has to respond to the urgency to imagine the world to which we are heading thanks to digital consummation. If we cannot imagine it, technology will imagine us.
As Jacques Ellul lucidly foresaw, this is especially true because, if humanity does not think about the meaning it wants to give technology, technology will make us the victims of its irresistible desire for power and its inevitable desire to transcend limits.
In sum, we need a technological humanism that gives us the serenity to cultivate patience and take a needed pause, allowing us to define and identify the limits and thus ensure that technology is a human product and doesn’t become inhuman.
I’d like to end by citing Heidegger again. In this case, thanks to Aristotle, he includes an anecdote about Heraclitus in one of his works which I now reproduce here:
"And just as Heraclitus is said to have spoken to the visitors who wanted to meet him and who stopped as they were approaching when they saw him warming himself by the oven—he urged them to come in without fear, for there were gods there too."