
Online disinformation: the CSA wants platforms to analyze their own algorithms

The French Superior Audiovisual Council (CSA) published a report this Tuesday on the measures implemented in 2020 by the major platforms to fight disinformation.

This is the second year that platforms must report their practices in France to the CSA, under the law of December 22, 2018 on the fight against information manipulation. These systemic-regulation obligations sit within a broader legislative landscape, alongside, among other texts, the European Union's forthcoming Digital Services Act.

Although more detailed than the previous year's, the reports submitted by the platforms to the CSA still say very little about the algorithmic side, on which much of this regulation nevertheless depends. According to the CSA, moderation work must remain under human supervision, since the effects of automating it have not yet been clearly analyzed and content recommendation remains opaque.

“Operators do not explain, or explain only very partially, how these specific systems work, what their error rates are (false positives and false negatives) or what concrete results they produce,” the CSA writes in its report. For Microsoft, LinkedIn, and Verizon Media, the authority notes that this information is missing altogether. What is more, “no indication has been given of the performance of the systems used,” adds the CSA, while acknowledging that the “complexity” of certain technologies, such as the machine learning used by Facebook, Google, or Twitter, can be an obstacle.
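For scale, the kind of performance figures the CSA says are missing, error rates built from false positives and false negatives, can be expressed as precision and recall. The short Python sketch below is purely illustrative and uses made-up counts, not any platform's data.

```python
# Minimal sketch of the performance reporting the CSA asks for: deriving
# precision and recall from false-positive and false-negative counts.
# All figures below are hypothetical placeholders, not platform data.

def moderation_metrics(true_positives: int, false_positives: int, false_negatives: int) -> dict:
    """Compute basic error-rate metrics for an automated moderation system."""
    precision = true_positives / (true_positives + false_positives)  # share of flagged items that were truly problematic
    recall = true_positives / (true_positives + false_negatives)     # share of problematic items actually caught
    return {"precision": precision, "recall": recall}

# Example with invented numbers: 900 correct flags, 100 wrongful flags, 300 missed items.
print(moderation_metrics(true_positives=900, false_positives=100, false_negatives=300))
# {'precision': 0.9, 'recall': 0.75}
```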

How to account for the harm caused by algorithmic bias?

The CSA was nevertheless able to observe that algorithmic systems are increasingly used to detect false information. Some operators have developed specific algorithmic moderation systems to identify problematic content. Dailymotion, Snapchat, and Twitter, for example, use algorithms to escalate user reports, which are then processed automatically to shorten investigation times. In Twitter's case, this process flags content before it reaches human moderation.
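None of these proprietary systems are public, but automated report triage of this kind generally amounts to scoring incoming flags and ordering the human-review queue. The Python sketch below is a hypothetical illustration: the `Report` fields, signals, and weights are assumptions, not any platform's actual criteria.

```python
# Illustrative sketch of automated report triage: score incoming user reports
# and order the human-review queue so likely disinformation is examined first.
# The signals and weights are hypothetical; the platforms' real systems are not public.
from dataclasses import dataclass

@dataclass
class Report:
    content_id: str
    report_count: int      # how many users flagged this content
    account_age_days: int  # age of the posting account
    share_velocity: float  # shares per hour since posting

def priority_score(report: Report) -> float:
    """Heuristic priority: heavily reported, fast-spreading content from new accounts ranks first."""
    score = report.report_count * 1.0 + report.share_velocity * 0.5
    if report.account_age_days < 30:
        score *= 1.5  # newly created accounts are treated as higher risk
    return score

def triage(reports: list[Report]) -> list[Report]:
    """Return reports ordered by descending priority for human moderators."""
    return sorted(reports, key=priority_score, reverse=True)

queue = triage([
    Report("post-1", report_count=3, account_age_days=400, share_velocity=2.0),
    Report("post-2", report_count=40, account_age_days=5, share_velocity=120.0),
])
print([r.content_id for r in queue])  # ['post-2', 'post-1']
```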

Other platforms, such as Facebook and Microsoft, use generative adversarial networks (GANs) to “create deepfakes, which can then be used to train deep learning algorithms to detect them better,” the CSA explains. Microsoft said in its report that it provides access to deepfake detection tools such as Microsoft Video Authenticator, which can be integrated into its Azure service.
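As a rough illustration of the approach the CSA describes, and not any platform's actual pipeline, the PyTorch sketch below mixes GAN-generated “fake” samples with real ones to train a binary detector. The toy feature sizes, the stand-in generator, and the random placeholder data are all assumptions made for brevity.

```python
# Sketch: use a GAN generator to produce synthetic fakes, then mix them with
# real samples to train a deep-learning detector to recognize manipulated content.
# Toy dimensions and random placeholder data; real systems operate on video frames.
import torch
import torch.nn as nn

latent_dim, feature_dim = 16, 64

generator = nn.Sequential(          # stands in for an already-trained GAN generator
    nn.Linear(latent_dim, 128), nn.ReLU(), nn.Linear(128, feature_dim)
)
detector = nn.Sequential(           # binary classifier: fake (1) vs. real (0)
    nn.Linear(feature_dim, 64), nn.ReLU(), nn.Linear(64, 1)
)

optimizer = torch.optim.Adam(detector.parameters(), lr=1e-3)
loss_fn = nn.BCEWithLogitsLoss()

for step in range(100):
    real = torch.randn(32, feature_dim)                  # placeholder for real media features
    with torch.no_grad():
        fake = generator(torch.randn(32, latent_dim))    # GAN-generated "deepfake" features
    inputs = torch.cat([real, fake])
    labels = torch.cat([torch.zeros(32, 1), torch.ones(32, 1)])
    optimizer.zero_grad()
    loss = loss_fn(detector(inputs), labels)             # train detector to separate real from fake
    loss.backward()
    optimizer.step()
```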

While these automated systems make it possible to remove problematic content at scale, the CSA calls for greater transparency from platforms about their intentions, their mechanisms, and their consequences. On the more specific topic of content recommendation algorithms, the council members also warn of the harmful potential of algorithmic biases, which are likely to affect content search and recommendation.

The health crisis has amplified mass disinformation

Faced with the overabundance of false information linked to the health crisis, platforms have been particularly proactive in this area, the CSA notes. Facebook, in particular, reported removing 12 million pieces of content related to the health crisis from Facebook and Instagram since March 2020, and Google said it had removed 9.3 million such videos from YouTube in the first quarter of 2020.

But beyond the numbers, the platforms are regularly criticized for their lack of accountability and transparency when it comes to regulation. The Wall Street Journal recently reported on the harm Facebook and Instagram can cause their users, notably following past changes to their algorithms. Facebook defended itself earlier this week, arguing that this reporting relies on a selective reading of leaked documents.

Platform accountability is also becoming an issue when it comes to protecting younger audiences. With social networks such as TikTok popular among teenagers, efforts to restrict access or control content are multiplying. A few days ago, ByteDance, TikTok's parent company, announced of its own accord that it would limit screen time on the Chinese version of TikTok for users under 14. A rather surprising move for an operator, but one that also comes as the Chinese state tightens its grip on the tech giants, as Le Monde points out.
