AI and profiling: Julie-Michel Morin's lecture “Artificial intelligence: games with data and games with power”

We recently dedicated an article to Céline Castets-Renard's lecture “AI and Profiling: Ethical and Legal Risks,” given as part of the CICC's 2022-2023 Science Week. In it, she emphasized the importance of quality data for training automated systems. Today we cover the second lecture of this event, “Artificial Intelligence: Games with Data and Games with Power” by Julie-Michel Morin, which also examines the role of data in the context of profiling.

Julie-Michel Morin is a doctoral student in French-language literature at the University of Montreal. Her dissertation focuses on robotics in the performing arts and mobilizes a technofeminist approach to think through the political issues that arise where art, digital cultures, and technoscientific devices collide. She is also a dramaturgical consultant and specializes in supporting journalists.

Data sets and their impact

Julie-Michel Morin first emphasizes that AI systems are always the result of collaboration between humans (programmers, mathematicians, algorithm designers, computer scientists, etc.) and non-human agents (computer protocols, statistics, mathematical formulas, various automated or semi-automated learning applications, etc.). These systems are always set in motion by people, who can influence the recommendations they produce.

Coding human values

The speaker addresses the myth of autonomous AIs that surpass human capabilities and are therefore solely responsible for erroneous results, which can no longer be attributed to humans. This myth reinforces the notion of so-called technological neutrality: companies and authorities assume these systems are more impartial than people, which eases the spread of new forms of profiling and discrimination.

Technologization of oppression

Julie-Michel Morin points to facial recognition technology and its inability to authenticate people equally well regardless of their skin color.

The effects are especially striking for Black people: when the AI fails to recognize them, it renders them invisible; in identification searches, for example at airports in the United Kingdom, it makes them overly visible.

Stages of the AI production process: when are biases introduced?

The speaker then turns to the design of machine-learning algorithms: human biases, stereotypes, and prejudices can be encoded during data selection, data labeling, and the training of the model itself.

Biased data selection

A lack of data about a group can be a source of discrimination, while an overabundance of data about a group can be a source of stereotypes. Julie-Michel Morin cites the case of AIs trained on large datasets collected online: women were identified as housewives, and Black men were 10% more likely than white men to be labeled as criminals…
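
To make the under-representation problem concrete, here is a minimal sketch; the data, group labels, and decision boundaries are entirely synthetic and are not taken from the lecture. A classifier trained on a dataset dominated by one group learns that group's pattern and performs near chance level on the other.

```python
# Minimal sketch: under-representation in the training data degrades
# accuracy for the minority group. All data here is synthetic.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

def make_group(n, shift):
    # Two features; the true decision boundary differs by group.
    X = rng.normal(size=(n, 2))
    y = (X[:, 0] + shift * X[:, 1] > 0).astype(int)
    return X, y

Xa, ya = make_group(5000, shift=+1.0)  # heavily over-represented group
Xb, yb = make_group(100, shift=-1.0)   # under-represented group

model = LogisticRegression(max_iter=1000).fit(
    np.vstack([Xa, Xb]), np.concatenate([ya, yb])
)

# Evaluate on fresh samples from each group.
for name, shift in [("majority", +1.0), ("minority", -1.0)]:
    X_test, y_test = make_group(2000, shift)
    print(name, round(model.score(X_test, y_test), 3))
# Typical result: ~0.95+ for the majority group, near 0.5 (chance)
# for the minority group, whose pattern the model never learned.
```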

Biased labeling of data

Data labeled manually by a person can be labeled subjectively, and it must first be formatted. When labeling is automated or semi-automated, it can perpetuate historical prejudices. The example given is a recommendation algorithm Amazon used for hiring, which retained mainly applications from men because the résumés in its historical data were mostly from men, although this imbalance may simply reflect the low number of female applicants in the past.
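
A similar dynamic can be sketched in the spirit of the Amazon example; everything below (the proxy feature, the rates, the noise levels) is invented for illustration. Even with the gender column removed from the inputs, a correlated proxy lets the model reproduce the historically biased labels.

```python
# Sketch of label bias: historical "hired" labels encode a past
# preference for men; a model trained on them reproduces that
# preference through a proxy feature, even with gender removed.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)
n = 10_000

gender = rng.integers(0, 2, n)   # 0 = female, 1 = male
skill = rng.normal(size=n)       # identically distributed across groups
# Hypothetical proxy correlated with gender (e.g. a keyword that
# appears mostly on men's résumés).
proxy = gender + rng.normal(scale=0.3, size=n)

# Historical labels: hiring favored men regardless of skill.
hired = (skill + 2.0 * gender + rng.normal(scale=0.5, size=n) > 1.5).astype(int)

# Train WITHOUT the gender column: only skill and the proxy.
X = np.column_stack([skill, proxy])
model = LogisticRegression().fit(X, hired)

# Predicted hiring rates, by actual gender:
for g, name in [(0, "women"), (1, "men")]:
    print(name, round(model.predict(X[gender == g]).mean(), 3))
# The model recommends men far more often: the proxy smuggles the
# historical bias back in even though gender was never an input.
```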

Biased data analysis

Analysis bias occurs when people draw illusory or biased correlations between the datasets and the algorithm's targets. Confirmation bias reflects pre-existing views that the AI will then amplify…
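
One way such illusory correlations arise is sheer multiplicity: scan enough unrelated variables and some will correlate with the target by chance, ready to confirm whatever the analyst already expected. A tiny sketch with purely random data:

```python
# Sketch: with enough candidate features, some will correlate with
# the target purely by chance -- an illusory correlation.
import numpy as np

rng = np.random.default_rng(2)
n_samples, n_features = 50, 1000

X = rng.normal(size=(n_samples, n_features))  # pure noise
y = rng.normal(size=n_samples)                # unrelated target

corrs = np.array([np.corrcoef(X[:, j], y)[0, 1] for j in range(n_features)])
best = int(np.argmax(np.abs(corrs)))
print(f"feature {best}: r = {corrs[best]:.2f}")
# Prints something like r = 0.45 for two completely independent
# random variables -- exactly the kind of "pattern" that a
# pre-existing expectation will happily confirm.
```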

Discriminatory biases can also arise from a combination of different biases that occur at different stages of the process.

PredPol, a case of predictive policing

PredPol is software, in use since 2012 and discontinued this year, that determined where police patrols should be concentrated. Trained on police records, it recommended that patrols be carried out primarily in disadvantaged, majority-Black areas. A study of the system showed that it relied on “dirty data.”
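
The mechanism behind such “dirty data” can be sketched as a feedback loop. The simulation below is a deliberate caricature with invented numbers, not PredPol's actual model: crime is only recorded where patrols are sent, and patrols are sent where crime is recorded.

```python
# Caricature of the feedback loop: two districts with IDENTICAL
# true crime rates, but district 0 starts with more recorded
# incidents. Patrols follow the records; records follow the patrols.
import numpy as np

true_rate = np.array([0.5, 0.5])   # same underlying crime rate
records = np.array([55.0, 45.0])   # historically skewed records

for day in range(30):
    target = np.argmax(records)    # patrol the apparent "hot spot"
    # Crime is only *recorded* where police are present.
    records[target] += true_rate[target] * 10  # 10 patrol-hours/day

print(records)  # [205.  45.] -- the initial skew is now self-confirming
```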

Like Céline Castets-Renard, Julie-Michel Morin then considers the COMPAS case as an example of predictive justice: a system that estimates the risk of recidivism for certain categories of the population. It produces the same overall error rate for Black people as for white people, but what went unsaid is that those errors ran in favor of the latter and against the former…
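
The arithmetic behind this point is worth spelling out. The confusion-matrix counts below are invented for illustration (they are not the actual COMPAS figures): they show how two groups can share the same overall error rate while the errors systematically harm one group and favor the other.

```python
# Invented confusion-matrix counts (not the real COMPAS numbers):
# same overall error rate, opposite directions of error.
def rates(tp, fp, fn, tn):
    total = tp + fp + fn + tn
    return {
        "error_rate": (fp + fn) / total,
        "false_positive_rate": fp / (fp + tn),  # wrongly flagged as high risk
        "false_negative_rate": fn / (fn + tp),  # wrongly cleared
    }

# Group X's errors are mostly false positives (harmful to them);
# group Y's errors are mostly false negatives (favorable to them).
group_x = rates(tp=300, fp=150, fn=50, tn=500)
group_y = rates(tp=300, fp=50, fn=150, tn=500)

for name, r in [("X", group_x), ("Y", group_y)]:
    print(name, {k: round(v, 2) for k, v in r.items()})
# Both groups: error_rate = 0.20. But group X's false positive rate
# (0.23) is more than double group Y's (0.09): identical "accuracy",
# very different harms.
```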
