Will artificial intelligence take over the world?

Debunking the myths of artificial intelligence

This article is based on research by Marc Torrens

In his book Artificial Intelligence: the Road to Ultra Intelligence, computer science engineer Marc Torrens, who holds a PhD in Artificial Intelligence, unpacks some of the myths, expectations, and challenges surrounding artificial intelligence (AI) and what may lie ahead.

Do Better: How frightening is artificial intelligence?

Marc Torrens: Some people get very passionate about artificial intelligence and believe that machines will solve all of the problems facing humanity. At the other extreme, there are those who are overly pessimistic and believe that machines will harm society in many ways. AI is like any other technological disruption: it is neither good nor bad; it all depends on how we apply it. This is why we must start a philosophical and ethical conversation on AI that goes beyond the technical possibilities.

Some people believe that machines will solve all of the problems facing humanity

What's your position?

Nothing is black and white. 'Techno pessimists' should lose some of their fears and see the advantages of artificial intelligence, while 'techno optimists' should temper their enthusiasm because there are still many problems and challenges to be solved. I am generally optimistic because humanity has always overcome the challenges of technological disruptions, although wasted time and damage can often be avoided by having the ethical discussions early.

Is there too much hype surrounding AI?

A journalist from the NY Times once wrote: "the upheavals of artificial intelligence can escalate quickly and become scarier and even cataclysmic. For example, a medical robot originally programmed to eradicate cancer could conclude that the best way to obliterate cancer is to exterminate humans who are genetically prone to the disease". The mass media have also made claims such as: "we will be immortal by 2045".


There is too much hype around AI! And the problem is that these huge expectations can lead to an AI winter similar to the one we experienced in the 1980s. I prefer less hype and more realism because this will strengthen the discipline in the future.

These ideas sell a lot of newspapers

Of course, but the reality is that these ideas are exaggerations without any serious scientific foundation. Some people have this image of artificial intelligence as a human-like robot that can talk, understand emotions, be aware of itself, use common sense, and even establish emotional relationships. From a scientific point of view, we still have no idea how to make this happen.

There is a lot of hype around artificial intelligence

There is a lot of hype around artificial intelligence. Stephen Hawking once said that the development of full artificial intelligence could spell the end of the human race. Humans, who are limited by slow biological evolution, would become what dogs are to humans today. We would have no control over what happens to us, and we would no longer be in charge of making decisions because there would be a far superior intelligence in the room that would see anything we do as ridiculous. However, we have no idea how to develop this full or strong AI. Moreover, we have no rigorous scientific agenda that enables us to work in that direction with any certainty.

Stephen Hawking once said that the development of full artificial intelligence could spell the end of the human race

We have to demystify the fears surrounding artificial intelligence. It's absurd to worry about these future scenarios – we are very far away from anything like this happening. Movies about AI are entertaining and great business, but the truth is that we have no idea how to develop this type of strong artificial intelligence.

How advanced is artificial intelligence?

Artificial intelligence was invented 70 years ago, but it is still in its infancy. Clarke's third law states that "any sufficiently advanced technology is indistinguishable from magic". If we could bring Einstein to 2018 and show him Amazon's Alexa, even his brilliant mind would be incapable of guessing how the technology works, and he would think it was magic.

Current AI algorithms are based purely on statistics – they don't have much mystery

When we see things like a computer identifying a face, we may think it is very smart, but current AI algorithms are based purely on statistics – they don't have much mystery. A computer may identify a face in a picture, but the computer does not know what a face is, or that humans have faces.

A computer can beat any chess player, but it does not know what a game is, or what it means to win or lose. Currently, a computer is capable of making decisions without understanding anything about the domain.
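The point above can be made concrete with a toy sketch (all data and labels invented for illustration): a nearest-centroid classifier "recognises" a category purely by comparing numbers. The label "face" is an arbitrary string to it; nothing in the code knows what a face is.

```python
import math

def centroid(points):
    """Mean of a list of 2-D points."""
    n = len(points)
    return (sum(p[0] for p in points) / n, sum(p[1] for p in points) / n)

def classify(point, centroids):
    """Return the label whose centroid is closest to `point` -
    a purely statistical decision, with no understanding of the domain."""
    return min(centroids, key=lambda label: math.dist(point, centroids[label]))

# Toy "training set": feature vectors with arbitrary labels.
training = {
    "face":     [(1.0, 1.0), (1.2, 0.9), (0.8, 1.1)],
    "not_face": [(-1.0, -1.0), (-0.9, -1.2), (-1.1, -0.8)],
}
centroids = {label: centroid(pts) for label, pts in training.items()}

print(classify((1.1, 1.0), centroids))   # prints "face"
```

The classifier answers correctly for points near either cluster, yet "face" could be renamed to any other string without changing a single decision it makes.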

What is singularity?

The 'singularians' believe that the day when machines will surpass human intelligence is approaching. This prophecy is based on the exponential growth of the two ingredients necessary for machine learning: computing capacity and data availability. In his book The Singularity Is Near, Ray Kurzweil (Google) writes that by 2029 artificial intelligence will reach a level that is a billion times more powerful than all human intelligence today.

The 'singularians' believe that the day when machines will surpass human intelligence is approaching

Huh, how did he calculate this?

His over-optimistic calculations are based on the premise that computational capacity and data grow exponentially. It is a fact that the accumulation of data grows exponentially every year and we are advancing in giant steps. In the last two years alone, we have generated 90% of all the data we have accumulated throughout human history. It is also true that computational capacity is growing exponentially, as described by Moore's empirical law. But predictions by Kurzweil and his advocates miss a crucial aspect of the equation.
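As a back-of-envelope check on the claim above: if 90% of all data was generated in the last two years, then the pre-existing data is the remaining 10%, so the total grew roughly tenfold over those two years. A quick calculation gives the implied annual growth factor:

```python
# If 90% of all data is two years old or newer, the total is ~10x
# what existed two years ago (older data is the remaining 10%).
two_year_factor = 1.0 / (1.0 - 0.90)      # total / pre-existing, ~10x
annual_factor = two_year_factor ** 0.5    # geometric mean per year

print(f"{annual_factor:.2f}x per year")   # prints "3.16x per year"
```

That is, the "90% in two years" figure implies the world's data stock more than triples every year, which is what "exponential growth" means in practice here.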

Which one?

Many researchers and practitioners, including myself, believe that this prediction about 2029 has no scientific foundation and that the moment in which artificial intelligence surpasses human intelligence is far away. This is because basic research and science progress linearly, not exponentially – humans are slow at making scientific discoveries – and we still need a lot more science to reach this stage.

We cannot expect to model things such as common sense, empathy, and the realm of emotions very soon. We are still in the very early stages of AI. Kurzweil may say 2029, but we do not know if we can ever produce strong AI.

So, singularity is not near...

To paraphrase Andrew Ng from Stanford University, worrying about singularity and super AI is like worrying about overpopulation and pollution on Mars before we arrive. It is impossible to predict and ridiculous to worry about problems on Mars because we haven't even set foot there yet.

Designing machines that can learn or act intelligently in any domain – as we humans do – is still very far away

'What if's' can be disturbing

Artificial intelligence enables us to analyse data, understand reality in a new way, and make more informed decisions in any domain. This alone will transform the world, because machines will take over many tasks and this will affect all sectors and jobs. But AI is still very narrow and specific. Machines are still pretty dumb and are designed to carry out specific tasks in specific domains. Designing machines that can learn or act intelligently in any domain – as we humans do – is still very far away.

We can design an algorithm to detect cats in an image based on a training set of millions of pictures. However, if we then train the same system to recognise dogs, it will forget about cats (catastrophic forgetting). We do not know how to build systems that learn ANYTHING as we humans do.
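Catastrophic forgetting can be illustrated with a much smaller model than the image classifiers mentioned above – here a tiny perceptron on invented toy data, not the real cat/dog experiments. Training the same weights on a second, conflicting task simply overwrites what was learned for the first:

```python
def predict(weights, x):
    """Linear classifier: sign of the weighted sum."""
    return 1 if sum(w * xi for w, xi in zip(weights, x)) > 0 else -1

def train(weights, data, epochs=20, lr=0.1):
    """Standard perceptron updates: nudge weights on each mistake."""
    for _ in range(epochs):
        for x, y in data:
            if predict(weights, x) != y:
                weights = [w + lr * y * xi for w, xi in zip(weights, x)]
    return weights

def accuracy(weights, data):
    return sum(predict(weights, x) == y for x, y in data) / len(data)

# Task A: label depends on the first feature. Task B: the same inputs
# with conflicting labels (a stand-in for "now recognise dogs instead").
task_a = [((1.0, 1.0), 1), ((2.0, 0.5), 1), ((-1.0, 1.0), -1), ((-2.0, 0.5), -1)]
task_b = [(x, -y) for x, y in task_a]

w = train([0.0, 0.0], task_a)
print("task A accuracy after training on A:", accuracy(w, task_a))  # 1.0

w = train(w, task_b)  # continue training the SAME weights on task B only
print("task B accuracy:", accuracy(w, task_b))        # 1.0: it learned B
print("task A accuracy now:", accuracy(w, task_a))    # 0.0: it forgot A
```

The model has no separate memory per task; its only knowledge is the current weight vector, so fitting task B destroys the solution for task A. Real neural networks show the same failure mode, which is why continual learning remains an open research problem.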

Our common sense and intelligence are very hard to model because we do not really understand how they work. We do not yet even know how we make decisions! There is a recent consensus among neuroscientists that we cannot make any decision without emotions. Thus, whenever rationality is not enough (as in most cases), emotional processes drive our decisions. And this type of reasoning is much harder than just analysing data.

All written content is licensed under a Creative Commons Attribution 4.0 International license.