Balancing innovation and security is the new AI imperative

September 6, 2024
1 min read

TLDR: The new AI imperative is about balancing innovation and security

AI technologies introduce new cyber risks such as training data poisoning and prompt injection, which amplify existing risks like data leakage. Organizations need to consider the full cost of AI solutions, including security-related expenses. Cyber leaders must effectively communicate AI-security risks to business leaders to make the case for investment in mitigating these risks. A holistic approach to understanding cyber risk exposure and implementing appropriate controls is crucial for realizing the full benefits of AI adoption.

In the fast-paced world of AI adoption, business leaders often overlook the cyber risks associated with AI systems. To ensure the benefits of AI technologies are maximized, organizations need robust systems of cyber risk governance. Focusing solely on the opportunities AI brings without addressing the associated cyber risks can leave organizations vulnerable. It is crucial to integrate security by design, implement cybersecurity controls, and account for security-related expenses to preserve business value.

AI technologies broaden the attack surface

AI technologies introduce new risks such as training data poisoning (tampering with a model's training set to corrupt its behavior) and prompt injection (crafting inputs that override a model's instructions), while amplifying existing risks like data leakage. Evaluating AI-driven cyber risks may require considering factors like model output reliability and explainability in addition to traditional cybersecurity properties.

A holistic approach to controls is key

The marketplace for AI security tools is expanding, but many controls are still challenging to implement. Organizations need a diverse array of AI-security controls, including tools for explainability, security monitoring, recovery from compromised systems, and rollback procedures. With regulatory requirements for AI security varying across jurisdictions, compliance challenges for multi-jurisdictional organizations are already apparent.

Delivering effective risk communication

Cyber leaders must communicate AI-security risks to business leaders effectively in order to secure investment in mitigating them. Organizations need a toolkit for understanding their risk exposure, along with clear guidance on communicating this information to the relevant audiences. The World Economic Forum’s Centre for Cybersecurity and the Global Cyber Security Capacity Centre at the University of Oxford are leading initiatives to provide guidance on responsible AI design, development, and deployment.
