Technology

AI’s next big threat may already be lurking online

Artificial intelligence (AI) and machine learning experts are warning about data poisoning attacks that can corrupt the large datasets used to train the deep learning models behind many AI services.

Data poisoning occurs when attackers tamper with the training data used to build deep learning models. This makes it possible to influence the decisions the AI makes in ways that are difficult to trace.

By modifying the initial information used to train machine learning algorithms, data poisoning attacks can be extremely powerful: the AI learns from bad data and can therefore make “bad” decisions with serious consequences.

Split-view poisoning: subtle but powerful

There is currently no evidence of real-world attacks involving the poisoning of web-scale datasets. However, a group of artificial intelligence and machine learning researchers from Google, ETH Zurich, NVIDIA and Robust Intelligence claim to have demonstrated poisoning attacks that are “guaranteed” to place malicious examples in the web-scale datasets used to train the largest machine learning models.

“While large deep learning models are robust, even a small amount of ‘noise’ in the training sets (i.e. a poisoning attack) is enough to introduce targeted errors into the model’s behavior,” the researchers warn.

The researchers said that, using methods they developed to exploit how these datasets are collected, they could poison 0.01% of the largest deep learning datasets with little effort and at low cost. While 0.01% may not sound like much, the researchers warn that it is “enough to poison the model.”
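For a sense of scale, 0.01% of a web-scale dataset is still a small absolute number of examples. The dataset sizes below are illustrative assumptions (roughly the order of popular web-scale image–text sets), not figures from the article:

```python
# Illustrative only: dataset sizes here are assumptions, not from the article.
POISON_FRACTION = 0.0001  # 0.01%

for name, size in [("400M-example dataset", 400_000_000),
                   ("5B-example dataset", 5_000_000_000)]:
    poisoned = int(size * POISON_FRACTION)
    print(f"{name}: {poisoned:,} poisoned examples")  # 40,000 and 500,000
```

Controlling tens of thousands of documents sounds like a lot, but at web scale it corresponds to a handful of domains or pages.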

This attack is known as “split-view poisoning.” If attackers manage to gain control of a web resource indexed by a particular dataset, they can poison the data collected from it, making it inaccurate, which can negatively affect the entire algorithm.

Hijacking traffic from expired domain names

One way attackers achieve this goal is to buy expired domain names. Domains expire regularly and then someone else can buy them, providing a great opportunity for data poisoners. “The adversary does not need to know the exact time when clients will download the resource in the future: by owning the domain, the adversary guarantees that any future downloads will collect poisoned data,” the researchers said.

The researchers note that buying a domain and using it for malicious purposes is not a new idea. Cybercriminals use it to spread malware. But attackers with other intentions could potentially poison a large data set.
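One mitigation discussed in this line of research is to record a cryptographic hash of each resource when the dataset index is built, and verify it at download time: if a domain expired and was re-registered by an attacker, the substituted content no longer matches the recorded hash. A minimal sketch in Python; the `INDEX` mapping and the URL are hypothetical:

```python
import hashlib

# Hypothetical index: URL -> SHA-256 hash recorded when the dataset was built.
INDEX = {
    "https://example.org/cat.jpg":
        hashlib.sha256(b"original image bytes").hexdigest(),
}

def is_unmodified(url: str, downloaded: bytes) -> bool:
    """Return True only if the downloaded bytes match the hash recorded
    at index time; a mismatch may signal split-view poisoning (e.g. the
    domain expired and was re-registered by an attacker)."""
    expected = INDEX.get(url)
    if expected is None:
        return False  # unknown URL: refuse rather than trust
    return hashlib.sha256(downloaded).hexdigest() == expected

print(is_unmodified("https://example.org/cat.jpg", b"original image bytes"))  # True
print(is_unmodified("https://example.org/cat.jpg", b"poisoned bytes"))        # False
```

The cost of this defense is that legitimately updated content is also rejected, so dataset maintainers must choose between freshness and integrity.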

Frontrunning poisoning: a plague for Wikipedia

In addition, the researchers detailed a second type of attack, which they call “frontrunning poisoning.”

In this case, the attacker does not have full control over a particular dataset but can accurately predict when a web resource will be accessed for inclusion in a dataset snapshot. Knowing this, the attacker can poison the resource just before its content is collected.

Even if the information is restored to its original, unmanipulated form a few minutes later, the dataset will still contain the incorrect content captured in the snapshot taken during the attack.

One of the most widely used resources for finding training data for machine learning is Wikipedia. But the nature of Wikipedia is such that anyone can edit it, and according to the researchers, an attacker “could poison the Wikipedia training set by making malicious changes.”

Predicting snapshots is the key to the attack

Wikipedia datasets are based not on the live page but on snapshots taken at a specific time. This means that an attacker who tampers with a page at the right moment can maliciously change it and force the model to collect inaccurate data that will be stored in the dataset permanently.

“An attacker who can predict when a Wikipedia page will be scraped for inclusion in the next snapshot can perform poisoning immediately beforehand. Even if the edit is quickly reverted on the live page, the snapshot will contain the malicious content forever,” the researchers wrote.

Because Wikipedia uses a well-documented protocol for taking snapshots, the timing of article snapshots can be predicted with high accuracy. The researchers suggest this could be exploited to poison Wikipedia pages with a 6.5% success rate.

This percentage may seem low, but given the sheer number of Wikipedia pages and how widely they are used in machine learning training sets, it means models can still be fed a meaningful amount of inaccurate information.
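One way a dataset builder could blunt frontrunning poisoning is to accept only page revisions that survived unreverted for some minimum time before the snapshot, so a last-minute malicious edit never enters the dataset. This is a sketch of the idea under stated assumptions, not Wikipedia's actual practice; the `Revision` record and its fields are hypothetical:

```python
from dataclasses import dataclass
from datetime import datetime, timedelta
from typing import Optional

# Hypothetical revision record; a real pipeline would read these
# fields from the wiki's revision history.
@dataclass
class Revision:
    content: str
    saved_at: datetime
    reverted_at: Optional[datetime]  # None if never reverted

def trusted_revision(revisions, snapshot_time, min_age=timedelta(hours=24)):
    """Return the newest revision that is at least `min_age` old at
    snapshot time and was never reverted, so an edit made just before
    the snapshot cannot enter the dataset."""
    for rev in sorted(revisions, key=lambda r: r.saved_at, reverse=True):
        if snapshot_time - rev.saved_at >= min_age and rev.reverted_at is None:
            return rev
    return None

snapshot = datetime(2023, 2, 20, 12, 0)
history = [
    Revision("legitimate text", snapshot - timedelta(days=3), None),
    # Poisoned edit made five minutes before the snapshot, reverted after it:
    Revision("poisoned text", snapshot - timedelta(minutes=5),
             snapshot + timedelta(minutes=10)),
]
print(trusted_revision(history, snapshot).content)  # legitimate text
```

The trade-off is staleness: the snapshot lags the live page by at least the chosen `min_age`.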

The researchers note that they did not modify any live Wikipedia pages, and that they reported the attacks and possible defenses to Wikipedia as part of a responsible disclosure process. ZDNET has contacted Wikipedia for comment. The researchers also note that the purpose of publishing the paper is to encourage other security professionals to conduct their own research on how to protect AI and machine learning systems from malicious attacks.

“Our work is only a starting point for the community to develop a better understanding of the risks of training models on web-scale data,” the paper says.

