
Why each country should forge its own definition of ethical AI

In France, as elsewhere, each country must now decide what it considers an acceptable use of artificial intelligence (AI): whether, for example, facial recognition in public spaces should be permitted or banned. For Ieva Martinkenaite, head of analytics and artificial intelligence at Scandinavian operator Telenor, public debate is the key to striking a balance between seizing market opportunities and ensuring the ethical use of artificial intelligence.

In her view, governments face a difficult task: ensuring that AI regulations are tailored to their local populations. This is what the Norwegian operator is attempting to do as it applies artificial intelligence and machine learning models to deliver more personalized and targeted sales campaigns to customers, improve operational efficiency and optimize its network resources. As the operator's management knows, these technologies can also help fight global warming, for example by switching off antennas when usage is low.

For Ieva Martinkenaite, who also chairs the working group on artificial intelligence at the GSMA (the organization that brings together the world's main operators), regulators must pay more attention to the commercial impact of these technologies and of the laws that affect them. AI ethics and governance frameworks may look good on paper, but we also need to make sure they are usable in practice, she notes.

Finding the right balance

To foster the meaningful adoption of AI in our daily lives, nations must strive to find a “balance” between exploiting market opportunities and the ethical use of technology. Noting that the technology is constantly evolving, the executive acknowledged at a symposium in Singapore that regulation cannot always keep pace.

In developing AI regulation on the Old Continent, EU legislators faced several challenges, including how laws governing the ethical use of AI could be introduced without stifling the flow of talent and innovation, she explained. This proved a major hurdle, as some feared the regulations would create too much red tape for businesses. The growing reliance on IT infrastructure and machine learning frameworks developed by a handful of internet giants, including Amazon, Google and Microsoft, as well as Tencent, Baidu and Alibaba, is also a cause for concern, notes Ieva Martinkenaite.

She also recalled the perplexity of EU officials over how the region could maintain its sovereignty and independence in this emerging landscape. Discussions in Brussels focus more specifically on the need to build key AI technologies within the region, such as data architectures, computing power, storage and machine learning, she explains. In her view, to achieve greater technological independence in AI, it is essential that EU governments create incentives and stimulate local investment in the ecosystem.

Benefits at stake

Beyond sovereignty issues, Ieva Martinkenaite warns against letting our guard down over the possible misuse of AI for political purposes. Recently, Michelle Bachelet, the United Nations human rights chief, called for the use of AI to be banned where it violates international human rights law. She underscored the urgency of assessing and addressing the human rights risks that AI could pose, noting that stricter legislation on its use should be introduced where it presents greater risks to human rights.

“AI can be a force for good, helping companies overcome some of the great challenges of our time. But AI technologies can have negative, even catastrophic, effects if they are used without sufficiently taking into account how they affect people’s human rights,” Bachelet said, echoing the position expressed by Ieva Martinkenaite.

According to Martinkenaite, it is now up to each country to determine what ethical AI means. She notes that until the accuracy issues related to the analysis of different skin tones and facial features have been adequately addressed, facial recognition technology should not be deployed without human oversight, adequate governance and quality assurance. And what is at stake in this public debate and in-depth reflection on AI? A wealth of benefits for authorities, companies and citizens alike, promises the head of research at Telenor.
