
Draft regulation on AI (I): broad concepts adopted by the Commission

The eagerly awaited proposal for an "Artificial Intelligence Act" is the result of work spanning several years (see esp. Eur. Comm., White Paper, 24 Feb. 2020, COM(2020) 65 final, Dalloz actualité, 28 Feb. 2020, obs. C. Crichton; Communication, 25 Apr. 2018, COM(2018) 237 final). Both the aspects to be regulated and the way the concepts are understood present a major challenge. The European Commission has chosen to embrace the concept of artificial intelligence broadly, while declining to regulate it in its entirety. As the proposal is vast, this first article focuses on the concept of artificial intelligence adopted by the Commission and on the actors covered by the regulation. Since only certain artificial intelligence systems fall within the proposal, a second article will set out their nature and regime (see Draft regulation on AI (II): a risk-based approach, Dalloz actualité, forthcoming).

Artificial intelligence systems concerned

Unsurprisingly, the European Commission retains the expression "artificial intelligence systems" (see already the "Ethics Guidelines for Trustworthy AI", 8 Apr. 2019, § 143) rather than risking the bare term "artificial intelligence", which refers, at best, to a discipline. An artificial intelligence system is thus defined as software developed using one or more of the techniques listed in the annex (art. 3, 1). Annex I lists three categories of systems: self-learning systems (a), logic-based systems (b) and statistical systems (c). Self-learning systems correspond to what is commonly called "machine learning". Annex I, a), specifies in this respect the three main methods used (supervised learning, deep learning and reinforcement learning) without closing the list, thanks to the word "including", since hybrid methods already exist. Logic-based systems, also called symbolic or deterministic, follow a predefined logical framework to reach a result. Commonly used since the 1970s, in particular under the name of "expert systems" or KBS (knowledge-based systems), they were regarded as a form of artificial intelligence before the rise of machine learning over the last decade. Finally, statistical systems include, under c) of Annex I, Bayesian estimation, which continues to develop, as well as search and optimisation methods, which are well established in practice.
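To make these three Annex I technique families concrete, here is a minimal illustrative sketch; nothing in it comes from the proposal itself, the data, rule and threshold are invented, and scikit-learn is assumed to be available:

```python
# Illustrative only: toy stand-ins for the three Annex I technique families.
from sklearn.linear_model import LogisticRegression

# (a) Machine learning: behaviour inferred from labelled examples.
X, y = [[0.0], [1.0], [2.0], [3.0]], [0, 0, 1, 1]
model = LogisticRegression().fit(X, y)
print(model.predict([[2.5]]))            # behaviour learned from the data

# (b) Logic-based ("expert system"): a human-authored, deterministic rule.
def expert_rule(value: float) -> int:
    return 1 if value >= 1.5 else 0      # fixed rule, no training involved
print(expert_rule(2.5))                  # 1

# (c) Statistical: a simple Bayesian estimate with a uniform Beta(1, 1) prior.
successes, trials = 3, 4
posterior_mean = (successes + 1) / (trials + 2)
print(posterior_mean)                    # ~0.667
```

The same input can thus be handled by a learned model, a hand-written rule or a probabilistic estimate; the definition deliberately captures all three.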

It is fortunate that the definition is not limited to machine learning, as the Commission's White Paper already foreshadowed when it stated that machine learning techniques "constitute one of the branches of AI" (COM(2020) 65 final, cited above, p. 19). The high-level expert group likewise included in its definition systems "applying reasoning to knowledge", i.e. logic-based systems (Ethics Guidelines for Trustworthy AI, cited above, § 143). Indeed, the wide media coverage of machine learning during the 2010s must not erase earlier practices which were well established and which continue to develop. Before then, machine learning techniques had only very specific applications, such as cheque reading, largely for lack of computing power. The older techniques thus remain in use, and it is entirely conceivable that a system may combine several techniques, or that new techniques will emerge in the future. Limiting AI systems to machine learning techniques alone would therefore be too reductive and too anchored in our contemporary understanding, risking rapid and undesirable obsolescence.

On the other hand, the definition proposed in Article 3, 1) adds clarifications which do not seem appropriate. These AI systems "can, for a given set of human-defined objectives, generate outputs such as content, predictions, recommendations, or decisions influencing the environments they interact with". It seems curious to require that the objectives be defined by a human being when the proposed regulation is meant to be forward-looking, especially since a clearly defined objective is not always specified (notably in clustering). Likewise, specifying that a decision influences an environment seems inapposite. Finally, adding a list of examples risks muddying the whole.
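The clustering point can be illustrated: an unsupervised algorithm is given no target to pursue, which sits awkwardly with the definition's "human-defined objectives". A minimal sketch, with invented data and scikit-learn assumed:

```python
# Illustrative only: clustering finds structure without any labelled
# target or explicitly stated goal being supplied by a human.
import numpy as np
from sklearn.cluster import KMeans

points = np.array([[0, 0], [0, 1], [10, 10], [10, 11]])
labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(points)
print(labels)  # e.g. [1 1 0 0]: two groups found, no target specified
```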

More generally, certain superfluous distinctions have been abandoned, such as the distinction between interaction with a physical or a virtual environment. By removing this mention, the European Commission chooses to regulate any form of AI system, whether embedded in a dedicated physical body (a robot) or in a more versatile machine such as a computer or a smartphone (a bot). In any event, the Commission defines AI systems as "software", without expressly referring to Directive 2009/24/EC of 23 April 2009 on the legal protection of computer programs, thus leaving it to intellectual property law to determine the applicable protection regime where appropriate.

Finally, it should be noted that the Commission's broad definition of AI systems does not mean that it intends to regulate all of them. On the one hand, the regulation does not apply to AI systems developed or used exclusively for military purposes (art. 2, § 3). On the other hand, it applies only to certain AI practices, which will be developed in a second part (see Draft regulation on AI (II): a risk-based approach, Dalloz actualité, 4 May 2021). Practices deemed unacceptable are prohibited (art. 5), those deemed high-risk are subject to a strict compliance regime (art. 6 to 51), and some deemed low-risk are governed by a principle of transparency (art. 52).

Stakeholders concerned

The earlier work had opened a breach that seemed dangerous. Addressing the heated question of the allocation of responsibilities, the Commission mentioned in its White Paper the following actors: "the developer, the deployer (the person who uses an AI-based product or service) and potentially other parties (the producer, the distributor or importer, the service provider, and the professional or private user)" (COM(2020) 65 final, cited above, p. 26). The proposed regulation marks a return to orthodoxy, abandoning these new concepts in favour of pre-existing terms. The actors covered by the text are thus simply the provider, the user, the authorised representative, the importer and the distributor (art. 3, 8).

The provider of an AI system becomes the central actor. It is defined as the natural or legal person, agency or other body that develops an AI system, or has one developed, with a view to placing it on the market or putting it into service under its own name or trademark, whether for payment or free of charge (art. 3, 2). The importer, for its part, is the person established in the Union who places on the market an AI system bearing the name or trademark of a person established outside the Union (art. 3, 6). Finally, the distributor is a natural or legal person, other than the provider or the importer, who makes an AI system available in the supply chain without affecting its properties (art. 3, 7).

This simplicity can only be welcomed. The terms "developer" and "deployer", unknown to lawyers, would have undermined the application of any related rules by requiring the parties to be characterised beforehand, leading, before any debate on the merits, to disputes over the precise interpretation of those terms. Beyond the question of characterisation, these terms would have increased the litigant's burden of proving that quality, as well as the difficulty of determining the competent court where several developers or deployers are involved. Such obstacles risked deterring many actions. Coupled with the extremely detailed documentation obligations laid down by the proposed regulation (see Draft regulation on AI (II): a risk-based approach, Dalloz actualité, 4 May 2021, forthcoming), the proposal thus removes any risk of an operator sheltering behind the dreaded black boxes.

In sum, only the person who has affixed their name or trademark to an AI system, making them easy to identify, is accountable. By affixing any distinctive sign, the producer thus assumes responsibility for complying with the obligations of the regulation. This simplified designation of the person responsible is reminiscent of liability for defective products, under which "any person who, by putting his name, trade mark or other distinguishing feature on the product presents himself as its producer" is likewise deemed a producer (Dir. 85/374/EEC, 25 July 1985, art. 3, § 1), in order to facilitate the victim's action.

The user, finally, is the natural or legal person, public authority, agency or other body using an AI system, except where the system is used in the course of a personal, non-professional activity (art. 3, 4). In other words, any user acting for non-professional purposes falls outside the scope of the regulation.
