In a concerning development, cybercriminals are increasingly targeting cloud-based generative AI (GenAI) services in a new attack vector dubbed “LLMjacking.”
These attacks exploit non-human identities (NHIs), such as machine accounts and API keys, to hijack access to large language models (LLMs) hosted on cloud platforms like AWS.
By compromising NHIs, attackers can abuse expensive AI resources, generate illicit content, and even exfiltrate sensitive data, all while leaving victims to bear the financial and reputational costs.
Recent research by Entro Labs highlights the alarming speed and sophistication of these attacks.
In controlled experiments, researchers deliberately exposed valid AWS API keys on public platforms such as GitHub and Pastebin to observe attacker behavior.
The results were startling: within an average of 17 minutes, and in some cases as quickly as 9 minutes, threat actors began reconnaissance efforts.
Automated bots and manual attackers alike probed the leaked credentials, seeking to exploit their access to cloud AI models.
The attack process is highly automated, with bots scanning public repositories and forums for exposed credentials.
Once discovered, the stolen keys are tested for permissions and used to enumerate available AI services.
In one instance, attackers invoked AWS’s GetFoundationModelAvailability API to identify accessible LLMs such as GPT-4 or DeepSeek before attempting unauthorized model invocations.
This reconnaissance phase allows attackers to map out the capabilities of compromised accounts without triggering immediate alarms.
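To make the reconnaissance pattern concrete, the following is a minimal sketch of how a leaked AWS key might be probed with Python and boto3. The access key values are placeholders, and ListFoundationModels is used as a stand-in for the GetFoundationModelAvailability call the researchers observed, since the former is the documented SDK operation for enumerating Bedrock models.

```python
# Hypothetical reconnaissance sketch for a leaked AWS key (placeholder values).
import boto3
from botocore.exceptions import ClientError

ACCESS_KEY = "AKIA..."   # leaked access key ID (placeholder)
SECRET_KEY = "wJalr..."  # leaked secret access key (placeholder)
REGION = "us-east-1"

session = boto3.Session(
    aws_access_key_id=ACCESS_KEY,
    aws_secret_access_key=SECRET_KEY,
    region_name=REGION,
)

# Step 1: confirm the key is still valid and identify the owning account.
try:
    identity = session.client("sts").get_caller_identity()
    print("Key is live, account:", identity["Account"])
except ClientError:
    raise SystemExit("Credentials invalid or revoked")

# Step 2: enumerate which foundation models the key can see.
bedrock = session.client("bedrock")
try:
    for model in bedrock.list_foundation_models()["modelSummaries"]:
        print(model["modelId"])
except ClientError as err:
    # Even an AccessDenied response tells the attacker which permissions are missing.
    print("Enumeration blocked:", err.response["Error"]["Code"])
```

Notably, both steps look like routine API traffic, which is why this phase rarely triggers immediate alarms.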
Interestingly, researchers observed both automated and manual exploitation attempts.
While bots dominated initial access attempts using Python-based tools like botocore, manual actions also occurred, with attackers using web browsers to validate credentials or explore cloud environments interactively.
This dual approach underscores the blend of opportunistic automation and targeted human intervention in LLMjacking campaigns.
According to the report, the consequences of LLMjacking can be severe.
Advanced AI models charge significant fees per query, meaning attackers can quickly rack up thousands of dollars in unauthorized usage costs.
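A back-of-the-envelope calculation shows how quickly these charges compound. The prices and request volumes below are illustrative assumptions, not quoted rates for any specific model:

```python
# Rough cost estimate for unauthorized LLM usage (all figures are assumptions).
price_per_1k_input = 0.03        # USD per 1,000 input tokens (assumed)
price_per_1k_output = 0.06       # USD per 1,000 output tokens (assumed)
requests = 50_000                # automated requests over a weekend (assumed)
tokens_in, tokens_out = 500, 700 # average tokens per request (assumed)

cost = requests * (
    tokens_in / 1000 * price_per_1k_input
    + tokens_out / 1000 * price_per_1k_output
)
print(f"Estimated bill: ${cost:,.0f}")  # about $2,850 under these assumptions
```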
Beyond financial losses, there is also the risk of malicious content generation under compromised credentials.
For example, Microsoft recently dismantled a cybercrime operation that used stolen API keys to abuse Azure OpenAI services for creating harmful content like deepfakes.
To counter this emerging threat, organizations must adopt robust NHI security measures: detecting and rotating exposed secrets in real time, scanning repositories and paste sites for leaked credentials, enforcing least-privilege access for machine identities, and monitoring API activity for unusual model invocations. A simple secret-scanning sketch follows.
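As one illustration of the scanning step, the snippet below flags AWS access key IDs in files before they are committed or pasted publicly. It is a minimal sketch: the regex covers the common AKIA/ASIA prefixes, while a production scanner would also check entropy, pair key IDs with secrets, and verify keys against the cloud provider.

```python
# Minimal secret-scanning sketch: flag AWS access key IDs in local files.
import re
import sys

AWS_KEY_ID = re.compile(r"\b(AKIA|ASIA)[0-9A-Z]{16}\b")

def scan_file(path: str) -> list[tuple[int, str]]:
    """Return (line_number, matched_key) pairs found in the given file."""
    hits = []
    with open(path, encoding="utf-8", errors="ignore") as fh:
        for lineno, line in enumerate(fh, start=1):
            for match in AWS_KEY_ID.finditer(line):
                hits.append((lineno, match.group(0)))
    return hits

if __name__ == "__main__":
    for target in sys.argv[1:]:
        for lineno, key in scan_file(target):
            print(f"{target}:{lineno}: possible AWS access key ID {key}")
```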
As generative AI becomes integral to modern workflows, securing NHIs against LLMjacking is no longer optional but essential.
Organizations must act swiftly to safeguard their AI resources from this rapidly evolving threat landscape.