TLDR:
The US tech giant Microsoft has reported that North Korea and Iran are using artificial intelligence (AI) for hacking. Microsoft said it had detected attempts by state-backed groups to exploit generative AI technology developed by the company and its partner OpenAI. The company described the techniques as “early-stage” and not particularly novel, but said it was important to expose them publicly to show how US rivals are using AI to breach networks and carry out influence operations. Microsoft also warned that generative AI could amplify malicious social engineering, enabling more convincing deepfakes and voice cloning that threaten democratic processes during election seasons.
Key points:
- North Korea’s Kimsuky group has used large language models to research foreign think tanks and to generate spear-phishing content for its hacking campaigns.
- Iran’s Revolutionary Guard has used large language models for social engineering, including producing phishing emails, as well as for troubleshooting software errors and studying how to evade detection in compromised networks.
- The Russian military intelligence unit Fancy Bear has used the models to study satellite and radar technologies related to the conflict in Ukraine.
- The Chinese cyber-espionage groups Aquatic Panda and Maverick Panda have interacted with the models in ways that suggest they are exploring how large language models could enhance their operations.
- Cybersecurity experts are concerned about the use of generative AI in hacking and have called for more secure models and AI development practices.
Microsoft’s announcement highlights the growing use of AI by state-sponsored hacking groups to exploit vulnerabilities and advance their offensive cyber capabilities. The company’s collaboration with OpenAI helps shed light on these threats and underscores the need for greater security in AI development. As AI technology continues to evolve, addressing these risks and countering the malicious use of AI will be crucial to protecting networks, information, and democratic processes.