
LLMjacking – Hackers Abuse GenAI With AWS NHIs to Hijack Cloud LLMs


In a concerning development, cybercriminals are increasingly targeting cloud-based generative AI (GenAI) services in a new attack vector dubbed “LLMjacking.”

These attacks exploit non-human identities (NHIs), such as machine accounts and API keys, to hijack access to large language models (LLMs) hosted on cloud platforms like AWS.

By compromising NHIs, attackers can abuse expensive AI resources, generate illicit content, and even exfiltrate sensitive data, all while leaving victims to bear the financial and reputational costs.

Recent research by Entro Labs highlights the alarming speed and sophistication of these attacks.

In controlled experiments, researchers deliberately exposed valid AWS API keys on public platforms such as GitHub and Pastebin to observe attacker behavior.

Figure: Exposed AWS key snippets planted on Pastebin

The results were startling: within an average of 17 minutes, and in some cases as quickly as 9 minutes, threat actors began reconnaissance efforts.

Automated bots and manual attackers alike probed the leaked credentials, seeking to exploit their access to cloud AI models.
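Defenders can replicate this measurement with a canary key of their own. The following is a minimal sketch, assuming boto3 is installed and the monitoring credentials can call cloudtrail:LookupEvents; the access key ID shown is a hypothetical placeholder, not a real credential.

```python
# Sketch: measure time-to-first-use of a deliberately exposed (canary) AWS key
# by polling CloudTrail for the first API call made with it.
import time
from datetime import datetime, timezone

import boto3

CANARY_ACCESS_KEY_ID = "AKIAEXAMPLECANARYKEY"  # hypothetical placeholder
EXPOSED_AT = datetime.now(timezone.utc)        # record when the key was published

cloudtrail = boto3.client("cloudtrail", region_name="us-east-1")

while True:
    # CloudTrail supports lookup by AccessKeyId, which lets us spot the
    # first API call made with the planted credential.
    resp = cloudtrail.lookup_events(
        LookupAttributes=[
            {"AttributeKey": "AccessKeyId", "AttributeValue": CANARY_ACCESS_KEY_ID}
        ],
        StartTime=EXPOSED_AT,
        EndTime=datetime.now(timezone.utc),
    )
    if resp["Events"]:
        first = min(resp["Events"], key=lambda e: e["EventTime"])
        delta = first["EventTime"] - EXPOSED_AT
        print(f"First use after {delta.total_seconds() / 60:.1f} minutes: "
              f"{first['EventName']}")
        break
    time.sleep(60)  # poll once a minute; CloudTrail delivery can lag by minutes
```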

Reconnaissance and Exploitation Tactics

The attack process is highly automated, with bots scanning public repositories and forums for exposed credentials.

Once discovered, the stolen keys are tested for permissions and used to enumerate available AI services.

In one instance, attackers invoked Amazon Bedrock’s GetFoundationModelAvailability API to identify accessible LLMs like GPT-4 or DeepSeek before attempting unauthorized model invocations.

This reconnaissance phase allows attackers to map out the capabilities of compromised accounts without triggering immediate alarms.
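To make that enumeration step concrete, here is a minimal boto3 sketch of the same reconnaissance a stolen key enables, using the Bedrock ListFoundationModels call (the GetFoundationModelAvailability call named above is a related per-model check). The region is an assumption; defenders can run the same call against their own accounts to see exactly what a leaked key would reveal.

```python
# Sketch: enumerate the Amazon Bedrock foundation models visible to a set of
# credentials -- the reconnaissance step attackers perform with stolen keys.
# Assumes boto3 and the bedrock:ListFoundationModels permission.
import boto3

bedrock = boto3.client("bedrock", region_name="us-east-1")

resp = bedrock.list_foundation_models()
for model in resp["modelSummaries"]:
    # modelId identifies the LLM; inferenceTypesSupported hints at whether
    # the model can be invoked on demand with these credentials.
    print(model["modelId"], model.get("inferenceTypesSupported", []))
```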

Interestingly, researchers observed both automated and manual exploitation attempts.

While bots dominated initial access attempts using Python-based tools like botocore, manual actions also occurred, with attackers using web browsers to validate credentials or explore cloud environments interactively.

This dual approach underscores the blend of opportunistic automation and targeted human intervention in LLMjacking campaigns.
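Whether scripted or manual, the first step of such a probe is usually a cheap liveness check. A minimal sketch of that validation, using sts:GetCallerIdentity (which succeeds for any valid credential and requires no IAM permissions), might look like this; the key material shown is a hypothetical placeholder.

```python
# Sketch: check whether a leaked AWS key pair is live, the way automated
# probes do. sts:GetCallerIdentity works for any valid credential and
# requires no IAM permissions, so it is a low-noise first test.
import boto3
from botocore.exceptions import ClientError

session = boto3.Session(
    aws_access_key_id="AKIAEXAMPLELEAKEDKEY",          # hypothetical
    aws_secret_access_key="example/secret/key/value",  # hypothetical
)

try:
    identity = session.client("sts").get_caller_identity()
    print("Live key for account", identity["Account"], "as", identity["Arn"])
except ClientError:
    print("Key is invalid or has been revoked")
```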

Financial and Operational Impact

According to the report, the consequences of LLMjacking can be severe.

Advanced AI models carry significant per-query fees, so attackers can quickly rack up thousands of dollars in unauthorized usage costs.
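As a back-of-the-envelope illustration, with purely hypothetical per-token prices (not actual AWS rates):

```python
# Rough cost of unauthorized LLM usage. All prices and rates below are
# hypothetical illustrations, not actual AWS Bedrock pricing.
PRICE_PER_1K_INPUT_TOKENS = 0.003   # USD, assumed
PRICE_PER_1K_OUTPUT_TOKENS = 0.015  # USD, assumed

requests_per_hour = 10_000                  # automated abuse rate, assumed
input_tokens, output_tokens = 1_000, 1_000  # tokens per request, assumed

hourly = requests_per_hour * (
    input_tokens / 1000 * PRICE_PER_1K_INPUT_TOKENS
    + output_tokens / 1000 * PRICE_PER_1K_OUTPUT_TOKENS
)
print(f"~${hourly:,.0f}/hour, ~${hourly * 24:,.0f}/day")  # ~$180/hour, ~$4,320/day
```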

Beyond financial losses, there is also the risk of malicious content generation under compromised credentials.

For example, Microsoft recently dismantled a cybercrime operation that used stolen API keys to abuse Azure OpenAI services for creating harmful content like deepfakes.

To counter this emerging threat, organizations must adopt robust NHI security measures:

  • Real-Time Monitoring: Continuously scan for exposed secrets in code repositories, logs, and collaboration tools.
  • Automated Key Rotation: Immediately revoke or rotate compromised credentials to limit exposure time.
  • Least Privilege Access: Restrict NHIs to only essential permissions, reducing the potential impact of a breach.
  • Anomaly Detection: Monitor unusual API activity, such as unexpected model invocations or excessive billing requests (see the sketch after this list).
  • Developer Education: Train teams on secure credential management practices to prevent accidental leaks.
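
As a minimal sketch of the anomaly-detection item above, assuming boto3, cloudtrail:LookupEvents access, and a purely illustrative hourly threshold:

```python
# Sketch: flag unusual Bedrock activity by scanning CloudTrail for
# InvokeModel calls in the last hour and alerting above a threshold.
# The threshold is illustrative; pagination is omitted for brevity.
from datetime import datetime, timedelta, timezone

import boto3

THRESHOLD = 100  # max expected InvokeModel calls per hour, assumed

cloudtrail = boto3.client("cloudtrail", region_name="us-east-1")
now = datetime.now(timezone.utc)

resp = cloudtrail.lookup_events(
    LookupAttributes=[
        {"AttributeKey": "EventName", "AttributeValue": "InvokeModel"}
    ],
    StartTime=now - timedelta(hours=1),
    EndTime=now,
)

calls = resp["Events"]
if len(calls) > THRESHOLD:
    callers = {e.get("Username", "unknown") for e in calls}
    print(f"ALERT: {len(calls)} model invocations in the last hour "
          f"from identities: {sorted(callers)}")
```

In production this logic belongs in a scheduled function with pagination and per-identity baselines rather than a single global threshold.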

As generative AI becomes integral to modern workflows, securing NHIs against LLMjacking is no longer optional but essential.

Organizations must act swiftly to safeguard their AI resources from this rapidly evolving threat landscape.

