
Ethical and responsible AI: How to move from theory to action?

Less than half (48%) of companies that have successfully deployed AI across their organization have done so with responsible AI, according to a recent study by the Boston Consulting Group. The issue nonetheless appears to be a priority for a good number of organizations, which are choosing to adopt labels or dedicated ethics councils.

This is the case at Orange, which last month created a data and AI ethics board. Composed of 11 recognized figures in the field and chaired by Stéphane Richard, this “consultative and independent” body will lay down the main ethical principles governing the company’s use of data and artificial intelligence technologies. It is not Orange’s first step towards responsible AI: last year the company obtained the GEEIS-AI label (Gender Equality Diversity European & International Standard – AI), which recognizes its actions to promote diversity and avoid the risks of discrimination in AI-based systems.

Beyond the symbolic scope of this type of initiative, are companies ready to move from principles to action? For Dr Léa Samarji, there is no doubt: more and more companies are bringing the ethical dimension of AI to the forefront. “It is not enough to develop efficient algorithms; we must pay attention to the notion of ethics,” says the ethical AI specialist, in charge of the AI and IoT offering for the EMEA region at Avanade.

More inclusive and diverse datasets

For this specialist, the culprit is not the programming of the agent but the underlying data. “It’s always better to ask the right questions from the start and make sure the development is ethical,” she says. According to the Avanade manager, two key factors must be taken into account upstream. In the development phase, “it’s important that more people participate. I think this team should be diverse: made up of men and women, with a variety of social backgrounds, and both business and tech profiles. Unconsciously, a person tends to pass their own biases on to the machine,” says Léa Samarji.

The other factor is the training dataset. According to the specialist, it is essential that this dataset be “as inclusive and as diverse as possible”. To better represent societal nuances and allow the development of ethical and inclusive virtual agents, she recommends using external data sources, such as newspapers, magazines and Wikipedia, in addition to organizations’ internal data. “Extracting past data alone is insufficient, since that data can be biased by the culture of the company. We must open up the training datasets to broaden the points of view,” she recommends.
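To make the idea concrete, here is a minimal sketch of what blending internal and external sources into a training corpus might look like. This is not Avanade’s actual pipeline; the source labels, field names and threshold are assumptions made purely for illustration.

```python
# Hypothetical sketch: combining internal company data with external public
# text (e.g. newspaper or Wikipedia excerpts) so a training corpus is not
# limited to one company's point of view. All names and thresholds are
# illustrative assumptions, not a real production setup.
from collections import Counter

def build_training_corpus(internal_docs, external_docs, min_external_ratio=0.3):
    """Merge internal and external documents, warning when the external
    share of the corpus falls below a chosen threshold."""
    corpus = [{"text": d, "source": "internal"} for d in internal_docs]
    corpus += [{"text": d, "source": "external"} for d in external_docs]

    counts = Counter(item["source"] for item in corpus)
    external_ratio = counts["external"] / max(len(corpus), 1)
    if external_ratio < min_external_ratio:
        print(f"Warning: only {external_ratio:.0%} of the corpus is external; "
              "consider adding more newspaper, magazine or Wikipedia text.")
    return corpus

# Toy usage example
internal = ["support ticket: customer asks about roaming fees"]
external = ["encyclopedia entry on telecommunications", "news article on 5G rollout"]
corpus = build_training_corpus(internal, external)
```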

Within Avanade, a joint venture between Accenture and Microsoft, and in partnership with Universidad Francisco de Vitoria in Madrid, Léa Samarji helped design a solution aimed at eliminating reported issues of bias, language-related misunderstandings and even unethical decision-making in AI-based conversational tools, such as voice agents and assistants. The solution is already being used by a client in Singapore, which needed its virtual agents to better understand the local “Singlish” dialect in order to help residents access key services. “The client wanted to build virtual agents to chat with citizens who speak several official languages, including English, Tamil and Mandarin. Singlish is the English spoken in Singapore, a mixture of several local languages, and this dialect poses comprehension difficulties. Yet if we develop a virtual agent that masters only the other official languages, we exclude some of the citizens. So we trained the machine on Singlish, collecting words and phrases so it could learn the dialect. It’s a way of promoting ethical AI,” explains Léa Samarji.
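The sketch below illustrates the general principle of collecting dialect phrasings and adding them to a virtual agent’s training data. It is not the actual Avanade solution; the example utterances, intent labels and helper function are invented for illustration.

```python
# Illustrative sketch only: augmenting a virtual agent's intent-training data
# with Singlish phrasings so the model sees local variants alongside standard
# English. Utterances and intent labels here are made up.
training_examples = [
    {"text": "Where can I renew my passport?", "intent": "renew_passport"},
    {"text": "How do I pay my utility bill?", "intent": "pay_bill"},
]

singlish_variants = [
    {"text": "Eh, where got renew passport one?", "intent": "renew_passport"},
    {"text": "How to pay bill ah?", "intent": "pay_bill"},
]

def augment_with_dialect(base_examples, dialect_examples):
    """Merge standard and dialect utterances, deduplicating on text."""
    seen = {ex["text"] for ex in base_examples}
    merged = list(base_examples)
    for ex in dialect_examples:
        if ex["text"] not in seen:
            merged.append(ex)
            seen.add(ex["text"])
    return merged

dataset = augment_with_dialect(training_examples, singlish_variants)
# The merged dataset can then be fed to whatever intent classifier the agent
# uses, so that Singlish speakers are not excluded from the service.
```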

Natural biases linked to corporate culture

Models trained on the datasets of companies and organizations can include natural biases linked to the company’s culture, its history or the diversity of its workforce, Léa Samarji points out. For that reason, the specialist believes that “it is easier for a start-up to build ethical AI”, even if “large groups have the means to act, and to collaborate with third parties to collect that data”.

In short, adopting “ethical” behavior is not straightforward. “It evolves over time; there is no magic button to become ethical,” says Léa Samarji. She considers it fundamental to take advantage of a “training phase” during which the tool’s performance is measured by its users. “If we find that the dataset is not inclusive enough, we continue to train the virtual agent. Through this feedback on satisfaction, the user co-constructs the solution.”
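A minimal sketch of the kind of feedback loop described above: user satisfaction scores are collected, and low-scoring exchanges are flagged for review and further training. The threshold, field names and data are assumptions for illustration, not the actual product behavior.

```python
# Hedged sketch of a user-feedback loop: exchanges with low satisfaction
# scores are queued so new training examples can be added. Field names and
# the 0.7 threshold are illustrative assumptions.
def review_feedback(conversations, satisfaction_threshold=0.7):
    """Return exchanges whose satisfaction score suggests the agent
    misunderstood the user and more training data is needed."""
    return [c for c in conversations if c["satisfaction"] < satisfaction_threshold]

conversations = [
    {"user_text": "How to top up my card ah?", "satisfaction": 0.4},
    {"user_text": "Where is the nearest clinic?", "satisfaction": 0.9},
]

for exchange in review_feedback(conversations):
    # In practice these would be labelled and fed back into the training set.
    print("Needs more training data:", exchange["user_text"])
```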

The Boston Consulting Group report also sheds light on a real gap between leaders’ perception of their organization’s maturity and the reality of their progress, at almost every stage of development. Companies tend to overestimate their maturity in this area: more than half (55%) of companies that believe they have successfully deployed responsible AI are, in fact, lagging behind.

Companies have appointed chief ethics or data officers

BCG observes that nearly 70% of leading companies in responsible AI now have a chief ethics or data officer and a strategic committee to guide implementation.

Responsible AI is now seen as a business benefit. “Today, the organizations most advanced in responsible AI no longer do it simply to better anticipate and manage potential risks. They see it as a real competitive advantage, whether to differentiate their brand or to accelerate recruitment and build employee loyalty. Implementing responsible AI is proof of a culture of responsible innovation, a culture often supported by the company’s objectives and values,” says Sylvain Duranton, global head of BCG GAMMA and co-author of the study.

Tech leaders are still far from exemplary, as evidenced by the departure of the two co-leads of Google’s ethical AI unit, which cast a chill over the American giant’s employees. After terminating Timnit Gebru, co-lead of the unit, Google also dismissed Margaret Mitchell, the unit’s other head, in February. This ethical artificial intelligence expert, who had already worked on diversity-related biases in machine learning and on language models for image captioning, had been hired by Google only two years earlier to co-lead the company’s ethical AI team with Timnit Gebru.
