Artificial Intelligence (AI) has significantly impacted software engineering, with advanced AI tools like ChatGPT and GitHub Copilot boosting developers’ productivity. However, recent studies reveal that these AI-powered coding assistant tools are vulnerable to poisoning attacks, leading to a potential increase in application hack attacks.
- Coding assistant tools like ChatGPT and GitHub Copilot have enormously enhanced developers’ efficiency, but these tools are vulnerable to poisoning attacks.
- Cybersecurity researchers from Sungkyunkwan University, Republic of Korea, and University of Tennessee, USA have found that poisoned AI coding assistant tools open the application to hack attack.
- Poisoning attacks involve injecting malicious code snippets into training data, which leads to insecure suggestions and increases the vulnerability of developed applications.
- AI coding assistants are vulnerable to generic backdoor poisoning attacks on code-suggestion deep learning models, making it harder to detect malicious code.
- Mitigation strategies include improved code review, secure coding practices, and the use of fuzzing and static analysis tools.
The researchers conducted a study involving 238 participants and 30 professional developers, which highlighted widespread tool adoption, but also showed an underestimation of poisoning risks. They found that poisoned tools can potentially influence developers to include insecure code in applications, emphasizing the need for enhanced coding practices and further education on the risks.
Increased model complexity makes detection more challenging, particularly given attackers often source AI model’s dataset from open repositories like GitHub. Several recommendations have been put forward to mitigate this risk. From the developers’ perspective, more stringent code review practices and security precautions are necessary. Meanwhile, software companies should enhance protective measures, and security researchers should focus on improving vulnerability detection.