TLDR: North Korean hackers have been found using generative AI to plan cyberattacks rather than to carry them out. South Korea's intelligence service plans to closely monitor the North's use of generative AI, and UK intelligence predicts that generative AI will aid cybercriminals and state-sponsored hackers over the next two years.
South Korea’s National Intelligence Service (NIS) has confirmed that North Korean hackers are using generative AI to support their hacking operations. The NIS did not provide specific details, but the hackers appear to be using generative AI models to plan attacks rather than to execute them.
In response, South Korea intends to closely monitor North Korea’s efforts to leverage generative AI for cyberattacks. Earlier this week, the country’s intelligence service issued an alert warning that North Korean hackers might try to disrupt elections in South Korea and the US by spreading fake news and AI-generated deepfakes. Another concern is that North Korean hackers could use generative AI and voice cloning to make their phishing attempts more convincing.
The UK’s National Cyber Security Centre predicts that generative AI will benefit cybercriminals and state-sponsored hackers over the next two years. Although the UK does not expect AI programs to become capable of orchestrating cyberattacks on their own, the threat lies in generative AI’s ability to analyze and learn from vast amounts of information and surface useful insights. That capability could give hackers valuable tools for stealing data or refining social engineering attacks.
According to Lindy Cameron, CEO of the National Cyber Security Centre, the use of AI in cyberattacks is evolutionary rather than revolutionary: it amplifies existing threats but will not dramatically transform the risk landscape in the near future.