NIST slams AI makers’ ‘snake oil’ security promises – be cautious!

January 6, 2024
1 min read

Key Points:

  • AI and machine learning technologies are vulnerable to attacks that can have dire consequences.
  • NIST has identified four specific security concerns for AI systems: evasion, poisoning, privacy, and abuse attacks.

Artificial intelligence (AI) and machine learning (ML) technologies have made significant progress in recent years, but they remain vulnerable to attacks that can cause catastrophic failures, warns the US National Institute of Standards and Technology (NIST). NIST computer scientist Apostol Vassilev stated that fundamental problems in securing AI algorithms remain unsolved, and that anyone who claims otherwise is selling “snake oil.”

NIST has co-authored a paper that categorizes the security risks posed by AI systems. The paper, titled “Adversarial Machine Learning: A Taxonomy and Terminology of Attacks and Mitigations,” explores various adversarial machine learning techniques and focuses on four specific security concerns: evasion, poisoning, privacy, and abuse attacks.

In an evasion attack, an adversary crafts inputs (adversarial examples) that cause a deployed model to misclassify them; for example, markings added to a stop sign can cause an autonomous vehicle’s vision system to misread it. Poisoning attacks involve injecting corrupted data into a machine learning model’s training set, causing the trained model to behave in undesirable ways. Privacy attacks involve extracting sensitive, protected data from AI models or their training sets. Lastly, abuse attacks involve repurposing generative AI systems for malicious ends, such as promoting hate speech or generating media that incites violence.
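To make the evasion category concrete, here is a minimal sketch of a gradient-based adversarial example in the style of the fast gradient sign method, applied to a toy logistic-regression classifier. The weights, inputs, and epsilon value are all illustrative assumptions, not anything from the NIST paper; real attacks target deep networks, but the mechanism is the same: step the input in the direction that increases the model’s loss.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Toy linear classifier: p(y=1 | x) = sigmoid(w.x + b)
w = np.array([2.0, -1.0])
b = 0.0

def predict(x):
    return sigmoid(w @ x + b)

def fgsm(x, y_true, eps):
    # For binary cross-entropy loss, the gradient w.r.t. the
    # input of a linear model is (p - y) * w.
    p = predict(x)
    grad = (p - y_true) * w
    # Perturb each coordinate by eps in the loss-increasing direction.
    return x + eps * np.sign(grad)

x = np.array([1.0, 0.5])          # originally classified as class 1
x_adv = fgsm(x, y_true=1.0, eps=1.0)

print(predict(x) > 0.5)           # True
print(predict(x_adv) > 0.5)       # False: the perturbation flips the label
```

The same idea scales to image classifiers, where a perturbation imperceptible to humans can flip the predicted class, which is why evasion tops the paper’s list of concerns.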

The paper aims to suggest mitigation methods for these attack types, provide guidance to AI practitioners, and spur the development of better defenses. It concludes that achieving trustworthy AI currently involves tradeoffs among security, fairness, and accuracy.

Overall, the paper highlights the need for further research and development to address the security vulnerabilities inherent in AI and machine learning systems. It emphasizes the importance of considering these security concerns during the training and deployment of AI models to mitigate the risks associated with adversarial attacks.
