TLDR:
- The ShadowRay exposure reveals hidden assumptions about AI security and internet-exposed assets.
- Anyscale disputes that it is a vulnerability, but researchers have documented active exploitation and real risk.
ShadowRay is an exposure in the Ray AI framework's infrastructure that security researchers report is being actively exploited in the wild. The flaw, tracked as CVE-2023-48022, allows unauthenticated users with network access to the Ray dashboard to launch jobs or execute arbitrary code on the cluster. Anyscale, the developer of Ray, disputes that this is a vulnerability, stating that Ray is intended to run in controlled environments. The dispute nonetheless raises significant questions about AI security, internet-exposed assets, and how vulnerability scanners should treat disputed CVEs.
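To make the mechanism concrete, here is a minimal sketch using Ray's Job Submission SDK showing that a reachable dashboard accepts jobs without any credentials. The host name is hypothetical and the client calls reflect recent Ray 2.x releases; run something like this only against clusters you own.

```python
# Minimal sketch of why CVE-2023-48022 matters: Ray's job-submission API
# performs no authentication, so anyone who can reach the dashboard port
# (8265 by default) can run arbitrary commands on the cluster.
# The address below is hypothetical; test only against clusters you control.
from ray.job_submission import JobSubmissionClient

DASHBOARD_URL = "http://ray-head.internal.example:8265"  # hypothetical host

client = JobSubmissionClient(DASHBOARD_URL)  # note: no credentials required
job_id = client.submit_job(
    entrypoint='python -c "import platform; print(platform.node())"',
)
print(f"Job accepted without authentication: {job_id}")
```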
Lessons learned from ShadowRay include:
- A shortage of security expertise among AI professionals
- The need to encrypt AI data
- The importance of tracking the provenance of AI models
- Never assuming that AI workloads run in a secure environment (see the configuration sketch after this list)
- The need for vulnerability scanning tools to handle disputed vulnerabilities properly
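As a concrete illustration of the "never assume a secure environment" point, the sketch below keeps Ray's dashboard bound to the loopback interface so its job-submission API is not reachable from the network. This is a minimal sketch: the `dashboard_host`, `dashboard_port`, and `include_dashboard` arguments exist in recent Ray releases, but defaults can vary across versions, so verify them against the documentation for your version.

```python
# Minimal hardening sketch: keep the Ray dashboard (and its job-submission
# API) bound to localhost so it is never exposed on the open network.
# Parameter names reflect recent Ray releases; confirm them for your version.
import ray

ray.init(
    include_dashboard=True,      # dashboard remains available locally
    dashboard_host="127.0.0.1",  # do NOT bind to 0.0.0.0 on shared networks
    dashboard_port=8265,         # Ray's default dashboard port
)
```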
Organizations need to scan continuously for vulnerabilities, address issues before they are exploited, and collaborate across teams to strengthen security. Implementing a system that monitors for potential vulnerabilities through original research, threat feeds, vulnerability scanners, and penetration testing is crucial to mitigating risk in AI and cybersecurity.
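One small piece of such a monitoring routine could be an internal exposure check for Ray dashboards. The sketch below is only an illustration under assumed conditions: the host list is hypothetical, 8265 is Ray's default dashboard port, and a plain HTTP probe stands in for whatever scanner or asset-inventory tooling an organization actually uses.

```python
# Hedged sketch of a simple exposure check: flag any host in an internal
# inventory whose Ray dashboard port answers HTTP requests.
# The host list is hypothetical; adapt it to your own asset inventory.
import requests

CANDIDATE_HOSTS = ["10.0.0.12", "10.0.0.27"]  # hypothetical inventory
RAY_DASHBOARD_PORT = 8265                     # Ray's default dashboard port

def dashboard_reachable(host: str, port: int = RAY_DASHBOARD_PORT) -> bool:
    """Return True if the Ray dashboard answers on this host and port."""
    try:
        resp = requests.get(f"http://{host}:{port}/", timeout=3)
        return resp.status_code == 200
    except requests.RequestException:
        return False

for host in CANDIDATE_HOSTS:
    if dashboard_reachable(host):
        print(f"[!] {host}: Ray dashboard reachable without auth -- review exposure")
    else:
        print(f"[ok] {host}: no Ray dashboard response")
```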