
Who is your algorithm going to harm? Why businesses need to think about malicious AI now

From Google’s commitment never to create AI applications that could cause harm, to Microsoft’s “AI principles”, to IBM’s defense of fairness and transparency in all algorithmic matters: big tech companies are promoting responsible AI. And it looks like other businesses, big and small, are following suit.

The statistics speak for themselves. While in 2019 barely 5% of organizations had drawn up an ethics charter defining how AI systems should be developed and used, the proportion jumped to 45% in 2020. Keywords such as “human agency”, “governance”, “responsibility” or “non-discrimination” are becoming central elements of many companies’ approach to AI. The concept of responsible technology, it seems, is slowly making its way into the boardroom.

This renewed interest in ethics, despite the complex and often abstract dimensions of the subject, was largely prompted by pressure from governments and citizens to regulate the use of algorithms. But according to Steve Mills, head of machine learning and artificial intelligence at the Boston Consulting Group (BCG), responsible AI could also work for businesses in a number of ways. “The past 20 years of research have shown us that companies that embrace corporate goals and values improve long-term profitability,” says Mills. “Customers want to be associated with brands that have strong values, and AI is no different. It is a real chance to build a relationship of trust with customers.”

AI backlash hits big business

The challenge is significant. Over the past few years, carefully crafted AI principles haven’t stopped algorithms from damaging the reputations of high-profile companies. Facebook’s advertising algorithm, for example, has been repeatedly criticized for its targeting, after the AI system was found to disproportionately show advertisements for credit cards and loans to men, while women were shown job and housing advertisements.

Likewise, Apple and Goldman Sachs recently came under fire after complaints that women were being offered lower Apple Card credit limits than men, while in healthcare, one company’s algorithm for predicting which patients would benefit most from additional care was found to favor white patients.
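Audits of this kind usually come down to comparing outcomes across demographic groups. As a rough, illustrative sketch only, assuming a simple ad-delivery log with made-up field names and data (this is not the methodology of any of the studies mentioned above), a basic disparate-impact check takes only a few lines of Python:

# Rough, illustrative disparate-impact check on hypothetical ad-delivery data.
# Field names, data and the 0.8 rule of thumb are assumptions for the example,
# not the methodology of the studies cited above.
from collections import defaultdict

def positive_rate_by_group(records, group_key, outcome_key):
    """Share of people in each group who received the 'positive' outcome."""
    totals, positives = defaultdict(int), defaultdict(int)
    for r in records:
        totals[r[group_key]] += 1
        positives[r[group_key]] += 1 if r[outcome_key] else 0
    return {group: positives[group] / totals[group] for group in totals}

def disparate_impact_ratio(rates):
    """Rate for the worst-served group divided by the rate for the best-served group."""
    return min(rates.values()) / max(rates.values())

impressions = [  # hypothetical log of who was shown a credit-card ad
    {"gender": "m", "shown_credit_ad": True},
    {"gender": "m", "shown_credit_ad": True},
    {"gender": "m", "shown_credit_ad": True},
    {"gender": "f", "shown_credit_ad": True},
    {"gender": "f", "shown_credit_ad": False},
    {"gender": "f", "shown_credit_ad": False},
]

rates = positive_rate_by_group(impressions, "gender", "shown_credit_ad")
print(rates)                                     # {'m': 1.0, 'f': 0.333...}
print(round(disparate_impact_ratio(rates), 2))   # 0.33; a common rule of thumb flags anything below 0.8

A check like this proves nothing on its own, but it is the kind of simple, repeatable measurement an ethics review can demand before a system ships.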

These examples should not deter companies that are ready to invest in AI, says Mills. “Many leaders see responsible AI as a way to mitigate risk,” he says. “They are motivated by the fear of damage to their reputation. But that’s not the right way to look at it. The right way to look at it is as a great opportunity to differentiate the brand, build customer loyalty and, ultimately, secure long-term financial benefits.”

“Most people think companies are trying to take advantage of you”

According to a recent study by consulting firm Capgemini, nearly half of customers say they trust interactions with organizations that use AI, but they expect these systems to clearly explain their decisions and expect organizations to be held accountable when the algorithms go wrong.

For Lasana Harris, a researcher in experimental psychology at University College London (UCL), how a company publicly presents its algorithmic goals and values is key to winning favor with customers. Being wary of the practices of for-profit companies is a default position for many people, he explained in a recent webinar; and the intrusive potential of AI tools means that companies should redouble their ethical efforts to reassure their customers.

“Most people think that companies are trying to take advantage of you, and the common perception of AI tends to flow from that,” says Harris. “People fear that AI is being used to exploit their data, invade their privacy or get too close to them.”

The developer revolt

“It’s about the company’s goals,” he continued. “If the customer perceives good intentions from the business, then AI will be viewed positively. So you need to make sure your business goals are aligned with your customers’ interests.”

It’s not just customers that businesses can win over with strong AI values and practices. In recent years, algorithm makers have also come to realize that software developers worry about bearing the brunt of the responsibility for unethical technologies. If programmers aren’t fully confident that their employers will use their inventions responsibly, they might quit. In the worst case, they might even make a documentary about it.

No major tech player has escaped some form of developer dissent on the subject over the past five years. Google employees, for example, rebelled against the search giant when it was revealed that the company was providing the Pentagon with object-recognition technology for use in military drones. After some of the dissenters decided to quit, Google abandoned the contract.

From a “nice to have” to a competitive advantage

That same year, a group of Amazon employees wrote to Jeff Bezos asking him to stop selling facial recognition software to the police. More recently, software engineer Seth Vargo pulled one of his personal projects from GitHub after discovering that one of the companies using it had signed a contract with US Immigration and Customs Enforcement (ICE).

Programmers don’t want their algorithms to be used for harmful purposes, and top talent will be drawn to employers who have the right safeguards in place to ensure their AI systems remain ethical. “Technologists are very concerned about the ethical implications of their work,” says Mills. “It will be very important to focus on this issue if you, as a business, want to attract and retain the digital talent that is so essential today.”

Technology ethics could therefore go from a “nice to have” to a competitive advantage; and judging by the recent proliferation of ethics charters, most companies have grasped the concept. Unfortunately, writing press releases and company-wide emails won’t be enough, says Mills. Bridging the gap between theory and practice is easier said than done.

Ethics, of course – but how?

Capgemini’s research described organizations’ progress on ethics as “disappointing” and marked by uneven action. Only half of organizations, for example, have appointed an executive responsible for the ethics of their AI systems.

Mills draws a similar conclusion. “We saw that there are a lot of principles in place, but very little change in the way AI systems are actually built,” he says. “There is a growing awareness, but companies don’t know how to act. It seems like a big and thorny problem, and they kind of know they have to do something, but they don’t know what.”

Fortunately, there are examples of good practice. Mills recommends following Salesforce, whose efforts date back to 2018, when the company created an AI service for CRM called Einstein. By the end of that year, the company had defined a set of AI principles, created an Office of Ethical and Humane Use, and appointed a chief ethical and humane use officer as well as an architect of ethical AI practice.

Hire and empower an employee who will lead the implementation of responsible AI throughout the organization

In fact, one of the first steps for any ethics-minded CIO is to hire and empower an employee who will drive the implementation of responsible AI throughout the organization, and to give them a seat on the executive committee. “An internal champion should be appointed to lead any responsible AI initiative,” says Detlef Nauck, head of AI and data science research at BT Global.

Nauck adds that this role should be filled by an employee specifically trained in AI ethics, who works across the company and throughout the product lifecycle, anticipating the unintended consequences of AI systems and discussing these issues with leaders.

It is also essential to ensure that employees understand the organization’s values, for example by communicating ethical principles through mandatory training sessions. “The sessions should educate employees on how to meet the organization’s ethical commitments to AI, and how to ask the essential questions needed to spot potential ethical issues, such as whether an AI application could end up excluding certain groups of people or causing social or environmental damage,” says Nauck.

Test new AI products throughout their lifecycle

The training should be accompanied by practical tools to test new products throughout their lifecycle. Salesforce, for example, created a “consequence scanning” exercise that asks participants to imagine the unintended outcomes of a new feature they’re working on, and how to mitigate them.

The company also has a dedicated board that assesses, from prototype to production, whether teams are eliminating bias in AI training data. According to the company, this is how Einstein’s marketing team succeeded in eliminating bias in ad targeting.

Mills mentions similar practices at the Boston Consulting Group. The company has created a simple web tool, which comes in the form of a “yes or no” questionnaire that teams can use for any project they are working on. Adapted from BCG’s ethical principles, the tool can help identify risks on an ongoing basis.

Ethics is about instilling a state of mind within teams, and does not require sophisticated technology or expensive tools

“Teams can use the questionnaire from the first stage of the project until deployment,” says Mills. “As you go along, the number of questions increases, and it becomes more of a conversation with the team. It gives them the opportunity to step back and think about the implications of their work and the potential risks.”
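BCG has not published the internals of its questionnaire, but the underlying idea of a checklist that grows with the project stage is easy to sketch. In the minimal Python sketch below, the stage names and questions are illustrative assumptions, not BCG’s actual content; every question is phrased so that a “yes” (or no answer at all) marks a risk to discuss:

# Minimal sketch of a stage-gated "yes/no" ethics questionnaire.
# Stage names and questions are illustrative, not BCG's actual tool.
STAGES = ["scoping", "data collection", "modeling", "deployment"]

# Each question is tagged with the earliest stage at which it applies,
# so the list grows as the project moves toward deployment.
QUESTIONS = [
    ("scoping", "Could the system exclude or disadvantage a group of people?"),
    ("scoping", "Could the use case conflict with customers' interests?"),
    ("data collection", "Is any affected group under-represented in the training data?"),
    ("modeling", "Do error rates differ noticeably between demographic groups?"),
    ("deployment", "Could users be left without a human escalation path when the model gets it wrong?"),
]

def questions_for(stage):
    """All questions that apply up to and including the given stage."""
    cutoff = STAGES.index(stage)
    return [q for s, q in QUESTIONS if STAGES.index(s) <= cutoff]

def review(stage, answers):
    """Return every question answered 'yes' or left unanswered: each is a risk to discuss."""
    return [q for q in questions_for(stage) if answers.get(q, True)]

if __name__ == "__main__":
    answers = {"Could the system exclude or disadvantage a group of people?": False}
    for flag in review("modeling", answers):
        print("Discuss with the team:", flag)

As Mills suggests, the value lies less in the tooling than in the conversation it forces at each stage.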

So ethics is ultimately about instilling a mindset into teams and does not require sophisticated technology or expensive tools. Therefore, thinking about it now could be essential to stay ahead of the competition.
