The AI can detect the Omicron variant by analyzing patients’ voices.


Listening to YouTubers in order to detect the coronavirus… That is the surprising method applied by a team of researchers from several universities. They trained an AI to detect voice changes caused by the Omicron variant of SARS-CoV-2. Their results reach an accuracy of about 80%.

“COVID-19 is systematically detected and confirmed by polymerase chain reaction (PCR) using nasal or throat swabs,” the scientists recall in their paper, pre-published on medRxiv. “However, the turnaround time and cost of resources pose challenges for testing in some environments.” The researchers therefore wanted to experiment with an alternative method, and turned to artificial intelligence. The choice was not an obvious one, however: the machine learning programs covered by the term “artificial intelligence” usually require very large amounts of data.

In the medical field, however, obtaining data that is both sufficient and well curated is not so easy, the researchers point out. So they simply chose to look beyond the data provided by the medical community. “In this study, we used YouTube to collect voice data from people who self-reported positive tests for COVID-19 at a time when Omicron was the predominant variant,” they explain.

Of course, these data are not as reliable as if the recorded people had been tested in a laboratory. But they also offer scientists other advantages, starting with their sheer volume and accessibility: “Worldwide, various social media platforms have over 3.6 billion users and are expected to exceed 4.4 billion by 2025. Over 500 hours of video are uploaded to YouTube every minute.” These data, which are very often freely available to researchers, can therefore form a valuable basis. “This data more accurately reflects noisy, uncurated ‘real world’ data,” the scientists add.

Intensive training based on YouTube videos

It was therefore from 93 hours of YouTube recordings that they set out to “train” their artificial intelligence. In these samples, 183 speakers reported having been infected with the coronavirus at a time when the Omicron variant was dominant, 120 said they had been infected at a different period, another 138 reported a respiratory infection unrelated to COVID-19, and finally 192 reported no respiratory infection.
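
To picture the make-up of this corpus, here is a minimal Python sketch that simply encodes the four self-reported groups and their sizes as given above; the variable names and structure are purely illustrative and not taken from the study.

```python
# Hypothetical encoding of the four self-reported speaker groups described in the
# article; the counts come from the article, everything else is illustrative.
SPEAKER_COUNTS = {
    "omicron_positive": 183,    # self-reported COVID-19 while Omicron was dominant
    "covid_other_period": 120,  # self-reported COVID-19 at another time
    "other_respiratory": 138,   # respiratory infection unrelated to COVID-19
    "no_infection": 192,        # no reported respiratory infection
}

# The headline result concerns the binary task "Omicron vs. asymptomatic healthy",
# which keeps only two of the four groups.
binary_groups = {k: v for k, v in SPEAKER_COUNTS.items()
                 if k in ("omicron_positive", "no_infection")}

print(sum(SPEAKER_COUNTS.values()), "speakers in total")  # 633
print(binary_groups)
```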

The audio samples were processed to keep only the moments when the YouTubers are speaking, then divided into 2.5-second segments. Some of these segments served as the training set for the AI, while the rest were later used to test it. The results published on medRxiv are preliminary but promising. “The performance of the model was 85% specific and 80% sensitive in classifying Omicron subjects and asymptomatic healthy subjects,” the scientists report. The sensitivity of a test is its ability to return a positive result when the condition being tested for is actually present; specificity, conversely, measures its ability to return a negative result when the condition is absent.
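
As an illustration of the two processing steps just described, here is a minimal, hypothetical Python sketch: cutting speech-only audio into 2.5-second segments and computing sensitivity and specificity from a set of predictions. The segment length and the metric definitions come from the article; the function names, the 16 kHz sample rate and the dummy data are assumptions made for the example only.

```python
import numpy as np

def split_into_segments(waveform: np.ndarray, sample_rate: int, seconds: float = 2.5):
    """Cut a 1-D audio signal into consecutive fixed-length segments."""
    segment_len = int(seconds * sample_rate)
    n_full = len(waveform) // segment_len
    return [waveform[i * segment_len:(i + 1) * segment_len] for i in range(n_full)]

def sensitivity_specificity(y_true, y_pred):
    """Sensitivity = TP / (TP + FN), specificity = TN / (TN + FP).
    Here 1 = Omicron-positive, 0 = asymptomatic healthy."""
    y_true, y_pred = np.asarray(y_true), np.asarray(y_pred)
    tp = np.sum((y_true == 1) & (y_pred == 1))
    fn = np.sum((y_true == 1) & (y_pred == 0))
    tn = np.sum((y_true == 0) & (y_pred == 0))
    fp = np.sum((y_true == 0) & (y_pred == 1))
    return tp / (tp + fn), tn / (tn + fp)

# Dummy 10-second recording sampled at 16 kHz -> four 2.5-second segments.
audio = np.random.randn(16_000 * 10)
print(len(split_into_segments(audio, 16_000)))  # 4

# Toy predictions for five recordings (1 = Omicron, 0 = healthy).
sens, spec = sensitivity_specificity([1, 1, 0, 0, 1], [1, 0, 0, 0, 1])
print(f"sensitivity={sens:.2f}, specificity={spec:.2f}")  # sensitivity=0.67, specificity=1.00
```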

In other words, the AI did not achieve perfect accuracy in its detections, but it came through with flying colors. The results also showed that it did not perform as well when the infection was not due to the Omicron variant specifically. This suggests that it is this variant in particular that causes the laryngitis that alters the voice in a distinctive way. Further research will likely be needed before this technique can be used in real life, but the researchers point out that it would have the advantage of being non-invasive and delivering instant results.

Source: medRxiv
