NIST slams AI makers’ ‘snake oil’ security promises – be cautious!

January 6, 2024
1 min read

Key Points:

  • AI and machine learning technologies are vulnerable to attacks that can have dire consequences.
  • NIST has identified four specific security concerns for AI systems: evasion, poisoning, privacy, and abuse attacks.

Artificial intelligence (AI) and machine learning (ML) technologies have made significant progress in recent years, but they remain vulnerable to attacks that can cause catastrophic failures, warns the US National Institute of Standards and Technology (NIST). NIST computer scientist Apostol Vassilev said that fundamental theoretical problems in securing AI algorithms remain unsolved, and that anyone who claims otherwise is selling “snake oil.”

NIST has co-authored a paper that categorizes the security risks posed by AI systems. Titled “Adversarial Machine Learning: A Taxonomy and Terminology of Attacks and Mitigations,” the paper surveys adversarial machine learning techniques and focuses on four specific security concerns: evasion, poisoning, privacy, and abuse attacks.

In an evasion attack, an adversary perturbs inputs at inference time to produce adversarial examples that the model misclassifies. A stop sign, for instance, can be subtly altered so that an autonomous vehicle’s computer vision system no longer recognizes it. Poisoning attacks corrupt a model’s training data so that the trained model behaves in undesirable ways. Privacy attacks aim to extract sensitive, protected information from an AI model, such as details about the data it was trained on. Lastly, abuse attacks repurpose generative AI systems for malicious ends, such as promoting hate speech or generating media that incites violence. A brief sketch of the evasion category appears below.
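To make the evasion category concrete, here is a minimal sketch of the Fast Gradient Sign Method (FGSM), one classic technique for crafting adversarial examples. FGSM is a well-known illustration of the attack family, not a method prescribed by the NIST paper; the tiny stand-in classifier, the `epsilon` perturbation budget, and the random input image are all illustrative assumptions.

```python
# Minimal FGSM evasion sketch: nudge each input pixel in the direction
# that increases the model's loss, bounded by a small epsilon.
import torch
import torch.nn as nn

def fgsm_perturb(model, x, label, epsilon=0.03):
    """Return an adversarial copy of x intended to cause misclassification."""
    x = x.clone().detach().requires_grad_(True)
    loss = nn.functional.cross_entropy(model(x), label)
    loss.backward()
    # Step every pixel by epsilon along the sign of the loss gradient.
    x_adv = x + epsilon * x.grad.sign()
    return x_adv.clamp(0, 1).detach()  # keep pixels in the valid image range

# Usage with a hypothetical toy classifier and a random "image".
model = nn.Sequential(nn.Flatten(), nn.Linear(3 * 32 * 32, 10))
image = torch.rand(1, 3, 32, 32)
label = torch.tensor([0])
adversarial = fgsm_perturb(model, image, label)
print((adversarial - image).abs().max())  # perturbation is bounded by epsilon
```

The key point the example illustrates is that the change to the input is tiny and often imperceptible to humans, which is what makes evasion attacks against deployed vision systems hard to detect.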

The paper aims to suggest mitigation methods for these attack types, provide guidance to AI practitioners, and promote the development of better defenses. It concludes that achieving trustworthy AI currently requires tradeoffs among security, fairness, and accuracy.

Overall, the paper highlights the need for further research and development to address the security vulnerabilities inherent in AI and machine learning systems. It emphasizes the importance of considering these security concerns during the training and deployment of AI models to mitigate the risks associated with adversarial attacks.
