TLDR:
- Experts warn that cyber risk is increasing due to rapid AI tool evolution.
- AI tools are being used by cybercriminals and nation-state operators, leading to smaller-scale and more frequent cyber losses.
Experts are warning that the rapid evolution of artificial intelligence tools poses significant cyber risks. Cybersecurity officials and insurance specialists predict that cybercriminals and nation-state actors will increasingly leverage AI to enhance their attack techniques, a shift expected to drive small, frequent, and more severe cyber losses over the next 12 to 24 months.
According to a recent report from Lloyd’s of London, generative AI and large language models (LLMs) are expected to transform the cyber risk landscape. Britain’s National Cyber Security Centre (NCSC) has likewise reported that threat actors, ranging from less-skilled cybercriminals to sophisticated nation-state groups, are harnessing AI to varying degrees for malicious purposes.
Criminal and nation-state interest in AI is growing, and phishing campaigns are becoming more sophisticated and harder to detect. The NCSC projects that AI advancements will complicate email authentication and improve attackers’ ability to exploit software vulnerabilities before patches are applied. As AI tools strengthen reconnaissance and exfiltration capabilities, the impact of cyberattacks, including ransomware, is expected to escalate.
Overall, Lloyd’s outlines several imminent cyber risks: automated vulnerability discovery, heightened potential for cyber espionage, lower barriers to entry for cybercriminals, better-optimized phishing campaigns, single points of failure, and shifting risk-reward dynamics. The commoditization of AI-powered cyber tools is likely to democratize access to advanced capabilities, making the cybercrime ecosystem more threatening across a range of attack vectors.
While hurdles to the illicit use of AI still exist, new developments are expected to erode them, making malicious activity easier. As attackers turn to alternatives to cloud-based AI services, including open-source models, the expertise needed to mount sophisticated cyberattacks will likely become more widespread.