NIST Alerts AI Developers: Beware of Poisoning Methods and Cyber Threats

January 16, 2024

In a new guidance paper, the National Institute of Standards and Technology (NIST) has highlighted cyber threats that AI developers may face during the development and deployment of their models. The paper focuses on "poisoning" attacks, in which training data is tainted to manipulate what the model learns; "evasion" attacks designed to confuse AI systems already in use; and prompts crafted by attackers to "jailbreak" models. It also highlights the vulnerability of AI models during the training phase, since they rely on large volumes of public data that may contain misinformation. The paper suggests that while developers cannot completely secure their models, they should carefully map out likely attack sources and approaches in order to make informed trade-offs between capability and security.
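To make the poisoning scenario concrete, the sketch below (an illustrative example, not drawn from the NIST paper) shows how flipping the labels on a small fraction of training data can measurably degrade a simple classifier. It assumes scikit-learn and NumPy are available; the dataset, model choice, and 10% poisoning rate are arbitrary stand-ins.

```python
# Minimal label-flipping poisoning sketch (illustrative only, not from the
# NIST paper): corrupting a small slice of training labels degrades a
# simple classifier trained on otherwise clean data.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(0)

# Synthetic binary-classification data standing in for scraped public data.
X, y = make_classification(n_samples=5000, n_features=20, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

def train_and_score(labels):
    model = LogisticRegression(max_iter=1000).fit(X_train, labels)
    return accuracy_score(y_test, model.predict(X_test))

# Clean baseline.
print("clean accuracy:", train_and_score(y_train))

# Poisoned run: flip 10% of the training labels, as an attacker who
# controls part of the data-collection pipeline might.
poisoned = y_train.copy()
idx = rng.choice(len(poisoned), size=len(poisoned) // 10, replace=False)
poisoned[idx] = 1 - poisoned[idx]
print("poisoned accuracy:", train_and_score(poisoned))
```

Real-world poisoning is usually subtler than random label flipping, but the same principle applies: data the developer does not fully control shapes what the model learns.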

The paper also discusses the risk that the source code of AI models could be tampered with, since many developers rely on open-source components and third-party libraries. It notes the difficulty of curating the huge volume of information required to train AI models, and the risk of unintentional self-poisoning as model-generated synthetic content finds its way back into training data. The paper acknowledges that there are no foolproof methods for eliminating these threats and recommends mapping out anticipated attack sources and approaches.
