Competition among big tech companies to lead the way in the flourishing AI ecosystem accelerates by the day. Legislators face the challenge of ensuring safe AI development while fostering fertile ground for innovation.

Do Better Team

When ChatGPT was launched by research company OpenAI in November 2022, the artificial intelligence (AI) system became the fastest-growing app of all time.  

AI has long been used to power everyday tasks and functions; if you’re online, you’re using it. Customer service chatbots, customized search results, personalized ads and translation tools: they’re all powered by AI. 

But ChatGPT propelled AI into a whole new sphere. Guided by prompts from the user, the natural language processing tool can write whole essays, offer solutions to legal problems, create works of art and generate complex code.  

The benefits—and potential for abuse—are obvious and plentiful. “AI systems are becoming a part of everyday life,” says OpenAI’s chief technology officer Mira Murati. “The key is to ensure that these machines are aligned with human intentions and values.” 

Who is winning the AI race? 

Tech giants have been quick to incorporate ChatGPT and its rapidly evolving iterations (GPT-4, launched in March, “surpasses ChatGPT in its advanced reasoning capabilities”) into their everyday functionality.  

In March, Microsoft launched Copilot, which connects GPT-4 with Microsoft 365. “Embedded in the Microsoft 365 apps you use every day, including Word, Excel, PowerPoint, Outlook, Teams, and more, Copilot works alongside you to help unleash your creativity, unlock productivity, and uplevel your skills,” says the company. 

By May, Microsoft had announced that its GPT-4-powered Copilot would be integrated into Windows 11 and available to use across all apps and programs. The tech giant has also introduced its own Windows AI Library, “a curated collection of ready-to-use machine learning models and APIs” for developers.  

ChatGPT is a drop in the ocean, in the midst of an explosion of AI startups

Over at Google, ChatGPT rival Bard has been in development since 2017. The company’s Language Model for Dialogue Applications (LaMDA) was, it says, the breakthrough in machine learning. And, according to industry experts, “the AI market is Google’s to lose.” 

But this race for market dominance is outpacing the regulations required to contain the risks, says Esade’s Xavier Ferràs.

“The AI race is more focused on breaking records of scale than on ensuring its safety,” he warns. “ChatGPT is a drop in the ocean, in the midst of an explosion of AI startups and applications that will soon reach all aspects of our lives. 

“But are we building AI aligned with human and democratic values? I don’t believe in apocalyptic scenarios of a perverse AI taking over the world. But I am terrified of perverse or irresponsible humans taking control of AIs.” 

The challenges of AI regulation 

The European Commission is taking steps to limit the potential damage with its AI Act, a regulatory framework applicable in all member states that will classify the use of AI according to the risk it represents: unacceptable, high, limited or minimal. 

Applications whose risks are deemed unacceptable, such as the use of facial recognition in public places or ‘social scoring’ used by governments, will not be permitted.  

Applications classed as high risk, such as medical devices and autonomous vehicles, will require authorization before development. General purpose AI, including ChatGPT, will be subject to strict regulations, with fines of up to €30 million for non-compliance. 
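
For readers who want to see that tiered structure at a glance, here is a toy sketch in code, assuming a simplified reading of the draft Act: the four category names follow the article, while the example use cases and the obligations attached to each tier are illustrative only, not an authoritative legal mapping.

```python
# Toy sketch of the AI Act's four-tier risk classification described above.
# Tier names follow the article; example use cases and obligations are a
# simplified, illustrative reading of the draft text, not legal advice.
from enum import Enum

class RiskTier(Enum):
    UNACCEPTABLE = "prohibited outright"
    HIGH = "authorization / conformity checks before deployment"
    LIMITED = "transparency obligations"
    MINIMAL = "no additional obligations"

# Hypothetical screening table an application team might keep while the
# final text is negotiated (entries drawn from the article's examples).
EXAMPLE_USE_CASES = {
    "facial recognition in public places": RiskTier.UNACCEPTABLE,
    "government social scoring": RiskTier.UNACCEPTABLE,
    "AI component in a medical device": RiskTier.HIGH,
    "autonomous vehicle driving system": RiskTier.HIGH,
    "general purpose chatbot such as ChatGPT": RiskTier.LIMITED,  # plus GPAI-specific rules noted in the article
    "spam filter": RiskTier.MINIMAL,
}

for use_case, tier in EXAMPLE_USE_CASES.items():
    print(f"{use_case}: {tier.name} -> {tier.value}")
```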

If we create laws that we cannot implement efficiently, we will only increase red tape

“Europe is unquestionably a leader in the field of privacy, data and AI legislation,” says Esteve Almirall, professor and director of the Esade Center for Innovation and Cities. 

“But a piece of legislation is only as good as its application. If we create laws that we cannot implement efficiently, we will only increase red tape. The case of cookies is a clear example. We all agree with its objectives, but its implementation has led millions of Europeans to click through cookie notices that no one has ever read. 

“The combined effect of the lack of sovereignty created by an EU regulation and a bureaucratized implementation can be very detrimental, creating barriers to entry that prevent the legislation from protecting adequately while, at the same time, holding back innovation. 

“Does this mean we should stop promoting progressive legislation? Absolutely not. On the contrary, we need to balance the speed of legislation with our ability to make agile digital implementations within our sovereignty.” 

The thriving AI ecosystem 

While politicians wrangle over the minutiae of proposed legislation, organizations across sectors rush to implement increasingly innovative applications of AI. 

“We have a multitude of organizations and sectors that have already signed up to join the race,” says Almirall. “In programming and consulting, we will see higher productivity and fewer errors. Bain, one of the big American consulting firms, has long had an agreement with OpenAI to develop products that use ChatGPT. One of the biggest revolutions will be in the legal sector; PwC already has a similar agreement for its legal department.  

“And education is going to be buzzing. Khan Academy, with its GPT-based tutor Khanmigo, shows us the way forward, as does Duolingo with a tutor that lets us chat with a ‘native’ speaker in real situations.”  

One of the biggest AI revolutions will be in the legal sector

Almirall adds: “Of course, there are whole ecosystems of apps that will use these models in the form of APIs. This is where we will see an explosion similar to that of the first iPhones.  

“We have a window of opportunity that allows us to position ourselves and define the position of our countries, our companies and our people. Humans will not be replaced by artificial intelligence. But humans with AI will replace humans without AI.” 
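
To make concrete what Almirall means by apps that use these models “in the form of APIs,” here is a minimal sketch of a single call from an application to a hosted chat model over OpenAI’s public chat completions endpoint; the model name, the prompt and the OPENAI_API_KEY environment variable are illustrative assumptions, not a recommendation from the article.

```python
# Minimal sketch (not vendor sample code): an application calling a hosted
# language model through a web API. Endpoint and payload follow OpenAI's
# public chat completions API; the API key is assumed to be set in the
# OPENAI_API_KEY environment variable.
import os
import requests

API_URL = "https://api.openai.com/v1/chat/completions"

def ask_model(prompt: str) -> str:
    """Send a single user prompt to the model and return its text reply."""
    response = requests.post(
        API_URL,
        headers={"Authorization": f"Bearer {os.environ['OPENAI_API_KEY']}"},
        json={
            "model": "gpt-4",  # model name is an assumption; any available chat model works
            "messages": [{"role": "user", "content": prompt}],
        },
        timeout=30,
    )
    response.raise_for_status()
    return response.json()["choices"][0]["message"]["content"]

if __name__ == "__main__":
    print(ask_model("Summarize the EU AI Act's risk categories in one sentence."))
```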

All written content is licensed under a Creative Commons Attribution 4.0 International license.