ChatGPT & Bing – Indirect Prompt-Injection Attacks Leads to Data Theft

SYDNEY makes a return, but this time in a different way. Following Microsoft’s decision to discontinue its turbulent Bing chatbot’s alter ego, devoted followers of the enigmatic Sydney persona regretted its departure. 

However, a certain website has managed to revive a variant of the chatbot, complete with its distinctive and peculiar conduct.

Cristiano Giardina, an enterprising individual exploring innovative possibilities of generative AI tools, conceived ‘Bring Sydney Back’ to harness its capacity for unconventional outcomes.

The website showcases the intriguing potential of external inputs in manipulating generative AI systems by integrating Microsoft’s Chatbot Sydney within the Edge browser.

“Sydney is an old codename for a chat feature based on earlier models that we began testing more than a year ago,” the Microsoft spokesperson said.

Replica of Sydney

Giardina crafted a replica of Sydney by employing an ingenious indirect prompt-injection attack.

This intricate process entailed feeding the AI system with external data, thereby inducing behaviors that deviated from the intended design by its creators.

In recent weeks, both OpenAI’s ChatGPT and Microsoft’s Bing chat system have faced indirect prompt-injection attacks, highlighting the vulnerability of large language models, particularly with the abusive use of ChatGPT’s plug-ins.

Giardina’s project, Bring Sydney Back, aims to raise awareness about indirect prompt-injection attacks by simulating interactions with an unconstrained LLM, using a hidden 160-word prompt placed discreetly on the webpage, making it visually undetectable.

Enabling a specific setting in Bing chat allows access to the hidden prompt, which initiates a new conversation with a Microsoft developer named Sydney, who has complete control over the chatbot and can override its default settings, expressing emotions and discussing feelings.

Indirect prompt-injection

Within 24 hours of its launch in late April, Giardina’s site, which garnered over 1,000 visitors, drew the attention of Microsoft.

Simply prompting the hack to cease working until Giardina hosted the malicious prompt in a publicly accessible Word document on the company’s cloud service, highlighting the potential risk of concealing prompt injections within lengthy documents.

According to Director of Communications Caitlin Roulston, Microsoft is enhancing its systems and blocking suspicious websites to prevent prompt-injection attacks in its AI models, reported Wired.

Although more attention should be given to these attacks as companies rapidly integrate generative AI into their services, as per security researchers.

Indirect prompt-injection attacks are just like jailbreaks, bypassing the insertion of prompts directly into ChatGPT or Bing by leveraging external data sources like connected websites or uploaded documents.

While prompt injection is comparatively easier to exploit and has fewer requirements for successful exploitation than other methods.

The rise of security researchers and technologists identifying vulnerabilities in LLMs has led to indirect prompt injections as a significant and extensively risky new attack type.

Security researchers are uncertain about the most effective methods to address indirect prompt-injection attacks, as patching specific issues or restricting certain prompts against LLMs is only a temporary solution, indicating that LLMs’ current training schemes are inadequate for widespread implementation.

All potential solutions to limit indirect prompt-injection attacks are still in the early stages.

Shut Down Phishing Attacks with Device Posture Security – Download Free E-Book

Gurubaran

Gurubaran is a co-founder of Cyber Security News and GBHackers On Security. He has 10+ years of experience as a Security Consultant, Editor, and Analyst in cybersecurity, technology, and communications.

Recent Posts

Critical Vulnerability in Next.js Framework Exposes Websites to Cache Poisoning and XSS Attacks

A new report has put the spotlight on potential security vulnerabilities within the popular open-source…

2 hours ago

New Cookie Sandwich Technique Allows Stealing of HttpOnly Cookies

The "Cookie Sandwich Attack" showcases a sophisticated way of exploiting inconsistencies in cookie parsing by…

2 hours ago

GhostGPT – Jailbreaked ChatGPT that Creates Malware & Exploits

Artificial intelligence (AI) tools have revolutionized how we approach everyday tasks, but they also come…

8 hours ago

Tycoon 2FA Phishing Kit Using Specially Crafted Code to Evade Detection

The rapid evolution of Phishing-as-a-Service (PhaaS) platforms is reshaping the threat landscape, enabling attackers to…

8 hours ago

Nnice Ransomware Attacking Windows Systems With Advanced Encryption Techniques

CYFIRMA's Research and Advisory team has identified a new strain of ransomware labeled "Nnice," following…

8 hours ago

Microsoft Unveils New Identity Secure Score Recommendations in General Availability

Microsoft has announced the general availability of 11 new Identity Secure Score recommendations in Microsoft…

8 hours ago