Enrique Rueda-Sabater

This article is part of EsadeGeo's series on transformational dynamics

Artificial intelligence (AI) was long considered the "next big thing," but until recently its advent proved disappointing. AI's transformative promise lay in its potential to customize goods and services and to increase productivity in their production and delivery. The transformative threat had mostly to do with its impact on jobs, replacing or even eliminating a wide range of them, and with the opacity of AI-driven decisions.

We are finally in a situation in which we can refer to AI in the present tense in many spheres and may realistically expect it to spread to many more activities and domains. Hence, a significant challenge for businesses and other kinds of organizations is how much and how fast to invest in what kinds of AI applications, whether internally or by relying on AI-expert service suppliers.

A key factor triggering the crossing of this threshold between promise and reality has been the availability of data in massive quantities, which for a short while was fashionably called “Big Data” (peak popularity for this term was reached in 2018, according to Google Trends). A major source of data, of course, has been the connectivity fueling Internet usage and a facilitator of data aggregation and analysis has been the expansion of storage capacity in the “cloud” and the shift to cloud-based computing.


A striking thing about Internet usage is how it has spread in breadth (the number of users globally going from about one billion to four billion over the last 15 years) and in depth, as measured by IP traffic. Throughout the very rapid growth of the last decade (from 20 to 200 exabytes per month), the shares of consumer-driven and business-driven traffic have remained roughly constant, with consumer traffic representing about 80% of the total. Business traffic has increased through a variety of applications, while consumer traffic has exploded due to the prevalence of streaming as connection speed and latency have improved steadily.

Data on the most recent Internet usage are not yet available, but there is every indication that the pandemic confinements of 2020 and 2021 generated an additional burst of online activity, with readings by Nielsen, for instance, suggesting that streaming minutes may have more than doubled from pre-pandemic levels. Since much of the traffic from homes has increased because of remote work and education, it is also likely that the boundary between consumer and business traffic has blurred and that the data overstate the share of consumer traffic. In any case, data generation is likely to continue growing rapidly, perhaps after a brief post-pandemic slowdown.

Increasingly, data are being generated and used through mobile connectivity, and the spread of 5G will accelerate this dynamic

Further growth will come from additional sources of data, including prominently those related to the Internet of Things (including sensors of all kinds, connected home appliances and automated business provisioning) and to applications ranging from voice commands to autonomous vehicles. But the critical mass of data that was required for AI to take off has probably already been achieved, and while greater amounts of data will create new opportunities as well as management and regulatory challenges, they are not likely to have the AI threshold effect that recent data growth has had.

Increasingly, data are being generated and used through mobile connectivity (and the spread of 5G will accelerate this dynamic). The current pattern varies considerably across countries, with earlier adopters of the Internet still relying more on laptops and other personal computers, while many emerging-market countries (and younger generations everywhere) have leapfrogged into largely mobile Internet usage, avoiding the purchase of computers altogether. The gaps are likely to be much smaller for younger people, but for adults (aged 18 and above) the differences are striking: in Germany and the US, for instance, around 80% of Internet users rely on both computers and mobile devices while, at the other extreme, in India mobile-only users account for 80% of adults accessing the Internet.

Algorithms are nothing new: the term has centuries-old Arabic and Central Asian roots and refers to computational procedures. Computer programming essentially relies on algorithms, and in the broadest sense even a culinary recipe could be considered one. What has changed with the availability of data and computing power is that algorithms no longer have to be predetermined. Instead, they can be derived through the subset of AI called machine learning (ML) which, by sorting through huge amounts of data on inputs and outcomes, can "learn," generating algorithms that keep adapting from the original ones or that were never explicitly programmed.
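The shift from hand-written to learned rules can be sketched in a few lines of Python. Everything here is illustrative: the labeled data, the single-feature "threshold" rule and the brute-force search are assumptions chosen for brevity, not a production ML method.

```python
# A minimal sketch of the idea that the decision rule is *learned* from
# data rather than written by hand. (Illustrative only, not a real ML method.)

def learn_threshold(samples):
    """Pick the cut-off on a single feature that best separates the labels."""
    best_threshold, best_accuracy = None, 0.0
    candidates = sorted({value for value, _ in samples})
    for threshold in candidates:
        # Score the candidate rule "value >= threshold" against the labels.
        correct = sum((value >= threshold) == label for value, label in samples)
        accuracy = correct / len(samples)
        if accuracy > best_accuracy:
            best_threshold, best_accuracy = threshold, accuracy
    return best_threshold

# Hypothetical labeled data: (messages per day, is_heavy_user)
data = [(2, False), (5, False), (8, False), (20, True), (35, True), (50, True)]

rule = learn_threshold(data)  # the decision rule emerges from the data
print(rule)  # → 20
```

No one programmed "20" anywhere; the rule came out of the data, which is the essential contrast with a predetermined algorithm, however simple the example.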

The empowering connection between data and algorithms has been underpinned by a massive expansion of cloud storage and computing capacity

A wide range of adaptive algorithms has already entered our lives through eCommerce (Amazon, Netflix, etc.), advertising and targeted promotions. ML-driven algorithms are also entering preventive decision making in health care and the judiciary and will become even more robust as data accumulate. Among notable examples, the insurance industry, which traditionally relied on a primitive precursor of algorithms (actuarial analysis), is in the process of being revolutionized by data-driven ML. The next frontier in this dynamic is "deep learning," a topic well worth separate treatment, as is the concern that, as the use of algorithms and opaque reliance on ML grow, the "black box" nature of these information-filtering and decision-support mechanisms will incorporate biases and other unintended patterns.
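The "adaptive" quality described above, a ranking that revises itself as new data arrive, can be illustrated with a toy recommender. The item names, ratings and average-score ranking are hypothetical, chosen only to show the mechanism; real recommendation systems use far richer signals.

```python
# Toy sketch of an adaptive ranking: each new rating shifts an item's
# average, and hence which item the "algorithm" recommends next.
from collections import defaultdict

class AdaptiveRanker:
    def __init__(self):
        self.totals = defaultdict(float)  # sum of ratings per item
        self.counts = defaultdict(int)    # number of ratings per item

    def record(self, item, rating):
        self.totals[item] += rating
        self.counts[item] += 1

    def top(self):
        # Recommend the item with the highest average rating so far.
        return max(self.counts, key=lambda i: self.totals[i] / self.counts[i])

ranker = AdaptiveRanker()
ranker.record("series_a", 4.0)
ranker.record("series_b", 5.0)
print(ranker.top())   # series_b leads on early data
ranker.record("series_b", 1.0)
print(ranker.top())   # series_a overtakes as data accumulate
```

The point is that no one rewrites the program when preferences shift; the recommendation changes because the accumulated data changed.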

The empowering connection between data and algorithms has been underpinned by a massive expansion of cloud storage and computing capacity. The reliance on cloud computing has been a quiet phenomenon behind much of the transformational power of what is sometimes called the algorithmic economy. Impressive as the growth in Internet traffic has been in recent years, the capacity of cloud data centers has dramatically outpaced it, as data gathered from Internet traffic and other sources are processed and reused for all kinds of commercial and national-security purposes.


This has been neither accidental nor coincidental. The expansion of cloud data center capacity may have been unavoidable, but the speed at which it has developed owes a great deal to the visionary, massive investments made by Amazon through its Web Services (AWS) business unit. The early success and huge profitability of AWS sparked efforts by other major corporations to catch up. While AWS's leading market position (with about a third of total revenues) has so far proven unassailable, the entry of other big players like Microsoft, Google and IBM has accelerated the expansion of cloud capacity.


While cloud services were initially an alternative to investing in on-premise equipment and technical capacity (essentially allowing organizations to shift much of their IT-related activity from lumpy capital expenses to use-based payments), they are quickly evolving from offering infrastructure as a service to providing algorithm (and, more broadly, AI) services, including access to external data. The leading cloud service providers have thus positioned themselves to become spreaders (and beneficiaries) of AI. The range of AI-related services they offer is expanding rapidly along predictive, cognitive and interactive lines that could end up supporting most business and even public-service activities.

The adoption of AI applications is already widespread, especially in countries like Japan and the US and in certain sectors and activities, such as IT services, bank risk management and automotive product development. But there is plenty of room for much greater adoption and intensity of use, with most organizations indicating in surveys their intention to rely more on AI applications. At the same time, public awareness of the role of algorithms is still low: a recent Bertelsmann Stiftung survey in the EU found full awareness among only 8% of respondents, while 15% were not at all aware.

At a macroeconomic level, we could be entering a new era of productivity growth driven by the data-fueled general-purpose technologies around AI. As was the case in the past with, for instance, electricity, we may have seen little impact on productivity so far because companies still needed to make the necessary investments and adjustments in management and operational practices. At the micro level, everything will depend on atomized decisions made by corporations and other organizations; the current widespread emphasis on digitalization is well taken but could also lead to complacent or unambitious moves.

The stakes are high: so far only a small number of companies have made the investments and adjustments, and those that do not catch up promptly will find themselves at a crippling competitive disadvantage. But there is also the risk of investing and adjusting too soon (AI, after all, has held much promise for a long time), so navigating this transition will require alertness and decisiveness beyond what most corporate leaders have ever experienced.

All written content is licensed under a Creative Commons Attribution 4.0 International license.