
Zoom urged to stop the development of AI capable of analyzing emotions

About thirty human rights and digital rights organizations have sent a letter to the company's CEO asking him to halt these projects.

In April, Protocol reported that Zoom plans to integrate artificial intelligence (AI) into its virtual meeting software to detect and analyze users' emotions. The plans have drawn criticism from human rights and digital rights organizations: on May 11, Fight for the Future and 27 other groups sent a letter to Eric S. Yuan, the company's CEO, asking him to abandon them. "This decision to extract emotional data from users, based on the misconception that artificial intelligence can track and analyze human emotions, is a violation of privacy and human rights," they explain.

The organizations argue that this technology should not be developed because it rests on pseudoscience: AI emotion analysis simply does not work. Facial expressions not only vary drastically from person to person, they are also often unrelated to the emotions underneath. Even humans frequently fail to read strangers' emotions accurately.

Emotional AI, a dangerous technology

Such technology would also be dangerous because it is discriminatory and punitive. "If Zoom goes ahead with these plans, this feature will discriminate against people of certain nationalities and people with disabilities by hard-coding stereotypes into millions of devices (…) This technology could have much more sinister and punitive applications. It's not hard to imagine employers and academic institutions using emotion analytics to discipline workers and students who are perceived to be 'expressing the wrong emotions' due to faulty AI," said Caitlin Seeley George, director of campaigns and operations at Fight for the Future.

The organizations also see emotional AI as a data security threat: companies that adopt this type of technology become targets for government agencies and malicious hackers. Hoping that their letter will push Zoom to abandon its plans, they have asked the CEO to respond publicly to their request and to commit to "not implement emotional AI" by May 20.
