
Google called out over the failings of its child sexual abuse detection system

Detecting images and videos depicting child sexual abuse is a worthy cause that major tech companies have been pursuing for over a decade. However, newer automated detection methods are more intrusive for users and carry a real risk of false positives.

An August 21 New York Times article reported two such cases. In each, a father sent a pediatrician photos of his young child’s genitals to diagnose an infection; because this happened at the height of the Covid-19 pandemic, the consultation took place remotely. Photos taken with an Android phone are automatically backed up to Google Drive, the company’s cloud storage service. During that upload, the files were analyzed and flagged by Google as illegal content. Each user’s account was then blocked and the information passed on to the police.

Cleared by the police, banned by Google

In both cases, the police investigated, concluded that no crime had been committed, and closed the case. But Google did not restore the accounts, even after being contacted. Unfortunately, like many people, the two fathers relied heavily on Google’s services: email, calendar, file storage… and, for one of them, even a mobile subscription through Google Fi. As Google is a private company, it can terminate an account for any reason, and today the men have no recourse.

These two cases illustrate the potential pitfalls of automating the detection of illegal content. The historical reference system is PhotoDNA, co-created by Microsoft and made public in 2009. Used notably by Facebook, it works by generating signatures of known child sexual abuse files, which are then automatically compared against users’ files. This allows known content to be discovered without actually “watching” what people are doing.
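PhotoDNA’s exact algorithm is proprietary, but the general signature-matching idea can be sketched. The Python example below is a minimal illustration only: it substitutes the open-source imagehash perceptual hash for PhotoDNA, and the database entry and distance threshold are hypothetical placeholders.

```python
# Minimal sketch of signature-based matching in the spirit of PhotoDNA.
# PhotoDNA's algorithm is proprietary; the open-source "imagehash" library
# is used here as a stand-in. The signature and threshold are hypothetical.
from PIL import Image
import imagehash

# Hypothetical database of perceptual hashes of known illegal files.
KNOWN_SIGNATURES = [
    imagehash.hex_to_hash("fd81818199a5c3ff"),  # placeholder entry
]

MATCH_THRESHOLD = 5  # assumed maximum Hamming distance counted as a match


def matches_known_content(path: str) -> bool:
    """Return True if the file's hash is close to a known signature."""
    signature = imagehash.phash(Image.open(path))
    # Subtracting two ImageHash objects yields their Hamming distance.
    return any(signature - known <= MATCH_THRESHOLD for known in KNOWN_SIGNATURES)
```

The key property is that matching happens only against signatures of already-known files, so the system never needs to interpret the image itself.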

The limits of automated systems

But in 2018, Google developed a new AI-based technique that performs recognition through visual analysis. The goal is to detect new images, not just those already present in a database. The company has also made the technology available to the rest of the industry. A commendable initiative but, as these cases show, one that is not 100% reliable. For reference, Google filed 600,000 reports and deactivated 270,000 accounts in 2021 after detecting child abuse content.
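Conceptually, this newer approach replaces signature lookups with a trained image classifier that assigns a score to any image, including ones never seen before. The sketch below is purely illustrative and is not Google’s implementation: the model file, its output interpretation, and the review threshold are all assumptions.

```python
# Purely illustrative sketch of classifier-based detection; not Google's
# actual system. The model file, output format, and threshold are assumptions.
import torch
from PIL import Image
from torchvision import transforms

preprocess = transforms.Compose([
    transforms.Resize(256),
    transforms.CenterCrop(224),
    transforms.ToTensor(),
])

# Hypothetical TorchScript model trained to output a single abuse-likelihood logit.
model = torch.jit.load("abuse_classifier.pt")
model.eval()

REVIEW_THRESHOLD = 0.9  # assumed score above which a file is escalated


def score_image(path: str) -> float:
    """Return the model's estimated probability that the image is abusive."""
    batch = preprocess(Image.open(path).convert("RGB")).unsqueeze(0)
    with torch.no_grad():
        return torch.sigmoid(model(batch)).item()


def should_escalate(path: str) -> bool:
    # A score is a statistical guess, not proof: false positives such as the
    # medical photos described above arise at exactly this decision boundary.
    return score_image(path) >= REVIEW_THRESHOLD
```

Because such a classifier generalizes beyond a fixed database, it can flag new content, but, as the two cases above show, it can also misclassify legitimate images such as medical photos.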

This raises the issue of human moderation and of being able to appeal decisions that are otherwise left entirely to an automated and opaque system. It is a problem that extends well beyond this particular domain and will only grow as automation spreads, for example to automated passport recognition at borders.

Possible privacy violations

The question of privacy protection also arises. Apple tried to develop a similar automatic file-scanning system in 2021 and was heavily criticized by experts in the field. One concern raised is that, once deployed, it would be very easy to change the parameters such a system searches for so that it targets other kinds of files. This could, for example, make it easier for non-democratic states to identify their political opponents.

More recently, the European Commission introduced legislation that would force messaging and hosting providers to install backdoors in their services, again in the name of combating child sexual abuse material. The project is even more controversial because it would allow governments to read all messages. Europe’s data protection authorities have also sharply criticized it.

Crimes against children, like terrorism, are often invoked to justify the creation of government surveillance systems, which is why care must be taken to limit the risk of these systems being diverted to other ends.

Julien Bergounhoux @JBergounhoux
