
The EU does not want AI to spy on us: AI systems will be classified into four levels of risk

The European Commission has presented a proposal to regulate artificial intelligence systems for civilian use (thus excluding those for military purposes), classifying them into four levels of risk: minimal, limited, high and unacceptable. Systems in the last category would be banned throughout the territory of the European Union, and even outside it if they affect EU citizens. So what are these unacceptable systems? Those that pose "a clear threat to the security, livelihoods and rights of people, including systems that manipulate human behavior to bypass the free will of users (for example, toys that use voice assistance to encourage dangerous behavior by minors) and systems that allow governments to assign social scores".

Systems that fall into the high-risk category, on the other hand, will be subject to strict obligations before they can be placed on the market. This category includes artificial intelligence systems for mass surveillance that use remote biometric identification methods such as facial recognition. These are in principle prohibited in public spaces, with the possibility of exceptions, "strictly defined and regulated, with limits on time, geographical scope and the databases searched", for public security purposes and subject to authorization from a judicial body. Examples of such exceptions include the search for missing children, the prevention of terrorism, and the search for the perpetrators of serious crimes.

