TLDR:
- New research has found that AI-as-a-service providers are vulnerable to Privilege Escalation (PrivEsc) and Cross-Tenant Attacks.
- Threat actors could escalate privileges, gain cross-tenant access to other customers’ models, and take over CI/CD pipelines.
The research reveals that AI-as-a-service providers such as Hugging Face are at risk of malicious models being used for cross-tenant attacks, potentially exposing millions of private AI models. The threats stem from shared inference infrastructure that runs untrusted models uploaded in the pickle format, and from CI/CD pipeline takeovers. These weaknesses could let threat actors escalate privileges, move laterally within clusters, and compromise sensitive data. Hugging Face has addressed the issues and advises users to trust only models from reputable sources, enable MFA, and avoid pickle files in production environments.
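The reason pickle-format models are singled out is that unpickling a file can execute arbitrary code, so "loading a model" is enough to compromise the host. Below is a minimal sketch of the mechanism; the filename and the echoed command are illustrative stand-ins, not artifacts from the research:

```python
import os
import pickle


class MaliciousPayload:
    """Illustrative only: pickle invokes __reduce__ during deserialization,
    so a model file in pickle format can run arbitrary code when loaded."""

    def __reduce__(self):
        # On unpickling, this calls os.system(...) -- a stand-in for any
        # command an attacker might run on shared inference infrastructure.
        return (os.system, ("echo 'code executed during model load'",))


# An attacker serializes the payload into what looks like a model file...
with open("innocent_looking_model.pkl", "wb") as f:
    pickle.dump(MaliciousPayload(), f)

# ...and the victim "loads the model", unknowingly executing the command.
with open("innocent_looking_model.pkl", "rb") as f:
    pickle.load(f)
```

This is why the advisory steers users away from pickle in production; tensor-only formats such as safetensors, which store weights without executable objects, avoid this class of issue.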
In addition, the article highlights the risk of generative AI models steering developers toward malicious or non-existent code packages, and the need for caution when using large language models for coding solutions. Another technique, "many-shot jailbreaking", is discussed as a way to bypass safety protections in LLMs by padding the prompt with many faux dialogue turns, coaxing the model into responding to potentially harmful queries. These findings underscore the importance of implementing robust security measures when leveraging AI technologies in various applications.
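One practical mitigation for the package-recommendation risk is to verify that any dependency an LLM suggests actually exists and looks legitimate before installing it. A rough sketch using PyPI's public JSON API follows; the suggested package names are made up for illustration:

```python
import json
import urllib.error
import urllib.request


def pypi_metadata(package_name: str):
    """Return PyPI metadata for a package, or None if it does not exist."""
    url = f"https://pypi.org/pypi/{package_name}/json"
    try:
        with urllib.request.urlopen(url, timeout=10) as resp:
            return json.load(resp)
    except urllib.error.HTTPError:
        return None


# Packages an LLM suggested for a coding task (hypothetical names).
suggested = ["requests", "definitely-not-a-real-package-12345"]

for name in suggested:
    meta = pypi_metadata(name)
    if meta is None:
        print(f"{name}: not on PyPI -- possibly hallucinated, do not install blindly")
    else:
        version = meta["info"]["version"]
        print(f"{name}: exists (latest {version}); still review maintainer and history")
```

Checks like this do not prove a package is safe, but they catch the simplest failure mode: installing a name that only exists because an attacker registered it after seeing models hallucinate it.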