NIST slams AI makers’ ‘snake oil’ security promises – be cautious!

January 6, 2024
1 min read

Key Points:

  • AI and machine learning technologies are vulnerable to attacks that can have dire consequences.
  • NIST has identified four specific security concerns for AI systems: evasion, poisoning, privacy, and abuse attacks.

Artificial intelligence (AI) and machine learning (ML) technologies have made significant progress in recent years, but they remain vulnerable to attacks that can cause catastrophic failures, warns the US National Institute of Standards and Technology (NIST). NIST computer scientist Apostol Vassilev said that there are theoretical problems with securing AI algorithms that simply have not been solved yet, and that anyone who claims otherwise is selling “snake oil.”

NIST has co-authored a paper that categorizes the security risks posed by AI systems. The paper, titled “Adversarial Machine Learning: A Taxonomy and Terminology of Attacks and Mitigations,” explores various adversarial machine learning techniques and focuses on four specific security concerns: evasion, poisoning, privacy, and abuse attacks.

In an evasion attack, an adversary manipulates inputs to a deployed model to change how it responds; for example, stop signs can be subtly altered so that an autonomous vehicle’s computer vision system misidentifies them. Poisoning attacks involve inserting corrupted data into a machine learning model’s training set, causing the model to behave in undesirable ways. Privacy attacks attempt to extract sensitive, protected information about the model or the data it was trained on. Lastly, abuse attacks repurpose generative AI systems for malicious ends, such as promoting hate speech or generating media that incites violence.
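To make the evasion category concrete, the sketch below shows a common adversarial-example technique (a fast gradient sign method perturbation) against a hypothetical PyTorch image classifier. The model, inputs, and parameter values are illustrative assumptions, not examples taken from the NIST paper.

```python
# Minimal evasion-attack sketch (FGSM-style), assuming a differentiable
# PyTorch classifier `model` and a correctly labeled input batch (x, y).
import torch
import torch.nn.functional as F

def fgsm_evasion(model, x, y, epsilon=0.03):
    """Return a slightly perturbed copy of x that the classifier is
    more likely to misidentify (hypothetical illustration)."""
    x_adv = x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(x_adv), y)  # loss against the true label
    loss.backward()
    # Nudge each pixel in the direction that increases the loss,
    # keeping the change within an epsilon bound so it stays subtle.
    return (x_adv + epsilon * x_adv.grad.sign()).clamp(0, 1).detach()
```

The same idea underlies physical-world examples such as altered stop signs: small, targeted changes to the input shift the model’s prediction without looking obviously wrong to a human.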

The paper aims to suggest mitigation methods for these attack types, provide guidance to AI practitioners, and promote the development of better defenses. It concludes that achieving trustworthy AI currently requires trade-offs among security, fairness, and accuracy.

Overall, the paper highlights the need for further research and development to address the security vulnerabilities inherent in AI and machine learning systems. It emphasizes the importance of considering these security concerns during the training and deployment of AI models to mitigate the risks associated with adversarial attacks.
