The challenges posed by Artificial Intelligence (AI) are numerous. They relate in particular to the explainability of algorithms, the opacity of their operation and their complexity. To provide a framework for them, horizontal European regulation was necessary. The new regulation, a draft of which was published on April 21, 2021, is intended to be a strong political act.
New AI rules follow a risk-based approach
- Unacceptable risk (contrary to European values): AI systems regarded as a clear threat to people's safety, livelihoods and rights will be banned. These include AI systems or applications that manipulate human behavior to deprive users of their free will (for example, voice-assisted toys that encourage minors to engage in dangerous behavior) and systems that allow social scoring by states.
- High risk: AI systems that will be subject to strict evaluation rules, including AI technologies used in:
- critical infrastructure (e.g. transport) that is likely to endanger the life and health of citizens;
- education or vocational training, which can determine access to education and a person’s career path (marking exam papers, for example);
- product safety components (application of AI in robot-assisted surgery, for example);
- the field of employment, workforce management and access to self-employment (CV sorting software for recruitment procedures, for example);
- essential private and public services (credit risk assessment, which deprives some citizens of the possibility of obtaining a loan, for example);
- the field of law enforcement, where AI systems are likely to interfere with the fundamental rights of individuals (assessing the reliability of evidence, for example);
- the field of management of migration, asylum and border controls (verification of the authenticity of travel documents, for example);
- the areas of the administration of justice and democratic processes (application of the law to a concrete set of facts, for example).
- Limited risk: AI systems to which specific transparency obligations apply (for example, users must be made aware that they are interacting with a chatbot).
- Minimal risk: The legislative proposal allows the free use of applications such as video games or AI-based spam filters. The vast majority of AI systems fall into this category. The draft regulation does not provide for intervention in this area, as these systems represent only minimal or no risk to the rights or security of citizens.
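The four-tier approach above amounts to a classification scheme: each tier carries a distinct regulatory consequence. A minimal sketch of that logic, purely for illustration (the tier names and example use cases are taken from the list above, but this mapping is a hypothetical simplification, not part of the draft regulation):

```python
from enum import Enum


class RiskTier(Enum):
    """The four risk tiers of the draft AI regulation."""
    UNACCEPTABLE = "banned outright"
    HIGH = "subject to strict evaluation rules"
    LIMITED = "specific transparency obligations apply"
    MINIMAL = "free use, no intervention foreseen"


# Illustrative mapping of example use cases (from the draft's own
# examples) to tiers; real classification is far more nuanced.
EXAMPLE_TIERS = {
    "social scoring by states": RiskTier.UNACCEPTABLE,
    "CV-sorting software for recruitment": RiskTier.HIGH,
    "credit risk assessment": RiskTier.HIGH,
    "AI-based spam filter": RiskTier.MINIMAL,
    "video game": RiskTier.MINIMAL,
}


def obligation(use_case: str) -> str:
    """Return the regulatory consequence for a known example use case."""
    tier = EXAMPLE_TIERS.get(use_case)
    if tier is None:
        return "unlisted use case: assess against the four tiers"
    return f"{tier.name}: {tier.value}"


print(obligation("social scoring by states"))
print(obligation("credit risk assessment"))
```

The point the sketch makes is that obligations attach to the *use case*, not to the underlying technology: the same machine-learning model could fall into different tiers depending on where it is deployed.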
Finally, it should be noted that AI-specific requirements will also be introduced into sectoral legislation, such as the draft revision of the Machinery Directive.
Draft regulation on Artificial Intelligence (pdf, in English)
Appendices (pdf, in English)