AI Isn't Secure, Says America's NIST

iTnews

Date Published
8 Jan 2024

Description

Claims otherwise should be treated with scepticism.

Summary

The article highlights a warning from the US National Institute of Standards and Technology (NIST) regarding the security vulnerabilities of AI systems. NIST emphasizes that no foolproof defences exist against AI attacks, which can result in severe failures with significant consequences. The report categorizes the main classes of AI attack – evasion, poisoning, privacy, and abuse – illustrating their potential impacts and ease of execution. This insight is crucial for understanding the risks associated with AI systems and underscores the pressing need for improved security measures. While primarily focused on American contexts, this information holds significant relevance worldwide, including in Australia, given the global nature of AI deployment.

Body

The US National Institute of Standards and Technology (NIST) has warned against accepting vendor claims about artificial intelligence security, saying that at the moment “there’s no foolproof defence that their developers can employ”. NIST gave the warning late last week, when it published a taxonomy of AI attacks and mitigations.

The institute points out that if an AI program takes inputs from websites or interactions with the public, for example, it’s vulnerable to attackers feeding it untrustworthy data. “No foolproof method exists as yet for protecting AI from misdirection, and AI developers and users should be wary of any who claim otherwise,” NIST stated.

The document said attacks “can cause spectacular failures with dire consequences”, warning against “powerful simultaneous attacks against all modalities” (that is, images, text, speech, and tabular data). “Fundamentally, the machine learning methodology used in modern AI systems is susceptible to attacks through the public APIs that expose the model, and against the platforms on which they are deployed,” the report said. The report focuses on attacks against AI models rather than against the platforms that host them.

The report highlights four key types of attack: evasion, poisoning, privacy, and abuse.

Evasion refers to manipulating the inputs to an AI model to change its behaviour – for example, adding markings to a stop sign so an autonomous vehicle interprets it incorrectly.

Poisoning attacks occur in the AI model’s training phase; for example, an attacker might insert inappropriate language into a chatbot’s conversation records, to try to get that language used towards customers.

In privacy attacks, the attacker crafts questions designed to get the AI model to reveal information about its training data. The aim is to learn what private information the model might hold, and how to get the model to reveal it.
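The evasion attack described above can be illustrated in miniature. The following is a hedged sketch on a toy linear classifier – the model, weights, and numbers are invented for illustration and do not come from the NIST report; real evasion attacks use the same principle (small, targeted input perturbations) against far larger models.

```python
# Toy evasion attack: nudge each input feature against the sign of its
# weight so the classifier's score crosses the decision boundary,
# while each individual change stays small (FGSM-style idea).

def predict(weights, x, bias=0.0):
    """Linear classifier: returns 1 if w.x + b > 0, else 0."""
    score = sum(w * xi for w, xi in zip(weights, x)) + bias
    return 1 if score > 0 else 0

def evade(weights, x, step=0.3):
    """Perturb each feature by at most `step`, in the direction
    that lowers the classifier's score."""
    return [xi - step * (1 if w > 0 else -1) for w, xi in zip(weights, x)]

weights = [1.0, -0.5, 0.8]      # illustrative model parameters
x = [0.4, 0.2, 0.3]             # original input, classified as 1

x_adv = evade(weights, x)       # small per-feature perturbation

print(predict(weights, x))      # 1
print(predict(weights, x_adv))  # 0 -- slightly altered input, flipped label
```

The point of the sketch is that no feature moved by more than 0.3, yet the label flipped – the digital analogue of a few stickers on a stop sign.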
Finally, abuse attacks “attempt to give the AI incorrect pieces of information from a legitimate but compromised source to repurpose the AI system’s intended use.”

“Most of these attacks are fairly easy to mount and require minimum knowledge of the AI system and limited adversarial capabilities,” said co-author Alina Oprea, a professor at Northeastern University. “Poisoning attacks, for example, can be mounted by controlling a few dozen training samples, which would be a very small percentage of the entire training set.”

Oprea’s co-authors for the 106-page tome [pdf] were NIST computer scientist Apostol Vassilev, and Alie Fordyce and Hyrum Anderson of Robust Intelligence.
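Oprea’s point – that controlling a handful of training samples can be enough – can be sketched on a toy nearest-centroid classifier. The data and numbers below are invented for illustration and are not from the report; the mechanism (a few mislabelled training points dragging a class boundary) is the essence of a poisoning attack.

```python
# Toy poisoning attack: injecting three mislabelled training samples
# shifts the class-0 centroid enough to flip the classification of a
# target input that the clean model got right.

def centroid(points):
    return sum(points) / len(points)

def classify(x, c0, c1):
    """Nearest-centroid classifier over 1-D inputs."""
    return 0 if abs(x - c0) < abs(x - c1) else 1

clean_0 = [1.0, 1.2, 0.8, 1.1]   # class-0 training samples (clean)
clean_1 = [3.0, 3.2, 2.9, 3.1]   # class-1 training samples (clean)

target = 2.3                      # closer to the class-1 centroid

c0, c1 = centroid(clean_0), centroid(clean_1)
print(classify(target, c0, c1))   # 1 -- correct under clean training data

# Attacker injects a few samples labelled class 0 near the target,
# dragging the class-0 centroid toward it.
poisoned_0 = clean_0 + [2.4, 2.5, 2.6]
c0_p = centroid(poisoned_0)
print(classify(target, c0_p, c1))  # 0 -- three samples flipped the decision
```

Three injected points out of eleven training samples are enough here; in a real training set the attacker’s share would be, as Oprea notes, a very small percentage.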