Monday, June 17, 2024

Hackers Compromised ChatGPT Model with Indirect Prompt Injection

ChatGPT quickly gathered more than 100 million users just after its release, and the ongoing trend includes newer models like the advanced GPT-4 and several other smaller versions.

LLMs are now widely used in a multitude of applications, but flexible modulation through natural prompts creates vulnerability. As this flexibility makes them vulnerable to targeted adversarial attacks like Prompt Injection attacks letting attackers bypass instructions and controls.

Beyond direct user prompts, LLM-Integrated Apps blur the data instruction line. Indirect Prompt Injection lets adversaries exploit apps remotely by injecting prompts into retrievable data.

Recently at the Black Hat event, the following cybersecurity researchers demoed how they compromised the chatGPT model with indirect prompt injection:-

  • Kai Greshake from Saarland University and Sequire Technology GmbH
  • Sahar Abdelnabi from CISPA Helmholtz Center for Information Security
  • Shailesh Mishra from Saarland University
  • Christoph Endres from Sequire Technology GmbH
  • Thorsten Holz from CISPA Helmholtz Center for Information Security
  • Mario Fritz from CISPA Helmholtz Center for Information Security
LLM-integrated applications (Source – BlackHat)

Indirect Prompt Injection

Indirect Prompt Injection challenges LLMs, blurring data-instruction lines, as adversaries can remotely manipulate systems via injected prompts. 

Retrieval of such prompts indirectly controls the models, raising concerns about recent incidents revealing behaviors that are unwanted.

This shows that how adversaries could deliberately alter LLM behavior in apps, impacting millions of users. 

The unknown attack vector brings diverse threats, prompting the development of a comprehensive taxonomy to assess these vulnerabilities from a security perspective.

Indirect prompt injection threats to LLM-integrated applications (Source – BlackHat)

The Prompt injection (PI) attacks threaten LLM security, and traditionally on individual instances, integrating LLMs exposes them to untrusted data and new threats ‘indirect prompt injections.’ 

The introduction of ‘indirect prompt injections,’ could enable the delivery of targeted payloads and breach of the security boundaries with a single search query.

Injection Methods

Here below we have mentioned all the injection methods that are identified by the researchers:-

  • Passive Methods
  • Active Methods
  • User-Driven Injections
  • Hidden Injections


LLMs spark broad ethical concerns, heightened with their widespread use in applications. Researchers responsibly disclosed ‘indirect prompt injection’ vulnerabilities to OpenAI and Microsoft. 

However, apart from this, from the security standpoint, the novelty is debatable, given LLMs’ prompt sensitivity.

GPT-4 aimed to curb jailbreaks with safety-oriented RLHF intervention. Real-world attacks continue despite fixes, resembling a “Whack-A-Mole” pattern. 

The impact of RLHF on attacks remains uncertain; theoretical work questions full defense. The practical interplay between attacks, defenses, and their implications remains unclear.

RLHF and undisclosed real-world app defenses can counter attacks. Bing Chat’s success with additional filtering raises questions about evasion with stronger obfuscation or encoding in future models.

The defenses like input processing to filter instructions raise difficulties. Balancing less general models to avoid traps and complex input detection is challenging. 

As the Base64 encoding experiment needed explicit instructions, future models may automate the decoding with self-encoded prompts.

Keep informed about the latest Cyber Security News by following us on GoogleNewsLinkedinTwitter, and Facebook.


Latest articles

Sleepy Pickle Exploit Let Attackers Exploit ML Models And Attack End-Users

Hackers are targeting, attacking, and exploiting ML models. They want to hack into these...

SolarWinds Serv-U Vulnerability Let Attackers Access sensitive files

SolarWinds released a security advisory for addressing a Directory Traversal vulnerability which allows a...

Smishing Triad Hackers Attacking Online Banking, E-Commerce AND Payment Systems Customers

Hackers often attack online banking platforms, e-commerce portals, and payment systems for illicit purposes.Resecurity...

Threat Actor Claiming Leak Of 5 Million Ecuador’s Citizen Database

A threat actor has claimed responsibility for leaking the personal data of 5 million...

Ascension Hack Caused By an Employee Who Downloaded a Malicious File

Ascension, a leading healthcare provider, has made significant strides in its investigation and recovery...

AWS Announced Malware Detection Tool For S3 Buckets

Amazon Web Services (AWS) has announced the general availability of Amazon GuardDuty Malware Protection...

Hackers Exploiting MS Office Editor Vulnerability to Deploy Keylogger

Researchers have identified a sophisticated cyberattack orchestrated by the notorious Kimsuky threat group.The...
Guru baran
Guru baran
Gurubaran is a co-founder of Cyber Security News and GBHackers On Security. He has 10+ years of experience as a Security Consultant, Editor, and Analyst in cybersecurity, technology, and communications.

Free Webinar

API Vulnerability Scanning

71% of the internet traffic comes from APIs so APIs have become soft targets for hackers.Securing APIs is a simple workflow provided you find API specific vulnerabilities and protect them.In the upcoming webinar, join Vivek Gopalan, VP of Products at Indusface as he takes you through the fundamentals of API vulnerability scanning..
Key takeaways include:

  • Scan API endpoints for OWASP API Top 10 vulnerabilities
  • Perform API penetration testing for business logic vulnerabilities
  • Prioritize the most critical vulnerabilities with AcuRisQ
  • Workflow automation for this entire process

Related Articles