AI: what future for Europe? – FrenchWeb.fr

After the GDPR, the Digital Services Act (DSA) and the Digital Markets Act (DMA), the European Union has presented a proposal to regulate artificial intelligence systems. This new text firmly establishes Europe as an avant-garde regulator of the digital space on the international stage. A legitimate role, insofar as digital technology raises many fears and questions, particularly around artificial intelligence, a sector that fuels many fantasies and excesses. From autonomous cars and voice assistants to facial recognition, this highly strategic sector is set to revolutionize our daily lives. According to a 2017 study by PwC, AI is expected to contribute $15.7 trillion to the global economy by 2030, more than the current combined GDP of China and India.

Yet while the sector’s economic potential is colossal, it is the Americans and the Chinese, led by the GAFA (Google, Apple, Facebook and Amazon) and the BATX (Baidu, Alibaba, Tencent and Xiaomi), who are far ahead in the artificial intelligence race. In 2020 alone, the US government dedicated a budget of 4.5 billion euros to research and development in the sector, and according to data from PitchBook, between 2012 and 2018 the United States injected twenty times more money than Europe into AI and big data.

Europe, however, believes it has a card to play in certain sectors undergoing digital transformation in order to stand out. “Whether it’s precision farming, more reliable medical diagnostics, or safe autonomous driving, artificial intelligence will open up new worlds for us. But these worlds also need rules,” declared European Commission President Ursula von der Leyen in her September 2020 State of the Union address.

An “AI Richter scale” to regulate the sector

These rules were therefore presented in April in Brussels. The proposal, led by Competition Commissioner Margrethe Vestager and Internal Market Commissioner Thierry Breton, draws up a precise list of sensitive AI applications: in particular, systems of “widespread surveillance” of the population and those “used to manipulate the behavior, opinions or decisions” of citizens. Faced with fears fueled by technologies that have not yet reached maturity, such as autonomous vehicles, and by the possibilities of population control they offer authoritarian regimes, citizen rating systems like those used in China will thus be banned in Europe.

To avoid these abuses, the European text introduces four levels of risk to regulate the sector: minimal, limited, high and unacceptable. Is this new “AI Richter scale” the right approach?

While some, such as Philippe Silberzahn, professor of entrepreneurship, strategy and innovation at EM Lyon and FrenchWeb expert, believe that this European project to regulate artificial intelligence systems “is based on a misconception of technological innovation and poses a serious danger to European industry by compromising our chances of having major players in the field,” Gwendal Bihan, CEO of Axionable, a consulting firm specializing in sustainable and responsible AI, is more nuanced on the subject: “I see it above all as a political act by Europe to try to regain leadership on the world AI stage, built around the defense of consumers, citizens and, more generally, democracy.” Will this European approach to responsible AI, through the prism of regulation, prove to be its salvation? The coming years will tell.
