Friday, March 29, 2024

ChatGPT & Bing – Indirect Prompt-Injection Attacks Leads to Data Theft

SYDNEY makes a return, but this time in a different way. Following Microsoft’s decision to discontinue its turbulent Bing chatbot’s alter ego, devoted followers of the enigmatic Sydney persona regretted its departure. 

However, a certain website has managed to revive a variant of the chatbot, complete with its distinctive and peculiar conduct.

Cristiano Giardina, an enterprising individual exploring innovative possibilities of generative AI tools, conceived ‘Bring Sydney Back’ to harness its capacity for unconventional outcomes.

The website showcases the intriguing potential of external inputs in manipulating generative AI systems by integrating Microsoft’s Chatbot Sydney within the Edge browser.

“Sydney is an old codename for a chat feature based on earlier models that we began testing more than a year ago,” the Microsoft spokesperson said.

Replica of Sydney

Giardina crafted a replica of Sydney by employing an ingenious indirect prompt-injection attack.

This intricate process entailed feeding the AI system with external data, thereby inducing behaviors that deviated from the intended design by its creators.

In recent weeks, both OpenAI’s ChatGPT and Microsoft’s Bing chat system have faced indirect prompt-injection attacks, highlighting the vulnerability of large language models, particularly with the abusive use of ChatGPT’s plug-ins.

Giardina’s project, Bring Sydney Back, aims to raise awareness about indirect prompt-injection attacks by simulating interactions with an unconstrained LLM, using a hidden 160-word prompt placed discreetly on the webpage, making it visually undetectable.

Enabling a specific setting in Bing chat allows access to the hidden prompt, which initiates a new conversation with a Microsoft developer named Sydney, who has complete control over the chatbot and can override its default settings, expressing emotions and discussing feelings.

Indirect prompt-injection

Within 24 hours of its launch in late April, Giardina’s site, which garnered over 1,000 visitors, drew the attention of Microsoft.

Simply prompting the hack to cease working until Giardina hosted the malicious prompt in a publicly accessible Word document on the company’s cloud service, highlighting the potential risk of concealing prompt injections within lengthy documents.

According to Director of Communications Caitlin Roulston, Microsoft is enhancing its systems and blocking suspicious websites to prevent prompt-injection attacks in its AI models, reported Wired.

Although more attention should be given to these attacks as companies rapidly integrate generative AI into their services, as per security researchers.

Indirect prompt-injection attacks are just like jailbreaks, bypassing the insertion of prompts directly into ChatGPT or Bing by leveraging external data sources like connected websites or uploaded documents.

While prompt injection is comparatively easier to exploit and has fewer requirements for successful exploitation than other methods.

The rise of security researchers and technologists identifying vulnerabilities in LLMs has led to indirect prompt injections as a significant and extensively risky new attack type.

Security researchers are uncertain about the most effective methods to address indirect prompt-injection attacks, as patching specific issues or restricting certain prompts against LLMs is only a temporary solution, indicating that LLMs’ current training schemes are inadequate for widespread implementation.

All potential solutions to limit indirect prompt-injection attacks are still in the early stages.

Shut Down Phishing Attacks with Device Posture Security – Download Free E-Book

Website

Latest articles

IT and security Leaders Feel Ill-Equipped to Handle Emerging Threats: New Survey

A comprehensive survey conducted by Keeper Security, in partnership with TrendCandy Research, has shed...

How to Analyse .NET Malware? – Reverse Engineering Snake Keylogger

Utilizing sandbox analysis for behavioral, network, and process examination provides a foundation for reverse...

GoPlus’s Latest Report Highlights How Blockchain Communities Are Leveraging Critical API Security Data To Mitigate Web3 Threats

GoPlus Labs, the leading Web3 security infrastructure provider, has unveiled a groundbreaking report highlighting...

Wireshark 4.2.4 Released: What’s New!

Wireshark stands as the undisputed leader, offering unparalleled tools for troubleshooting, analysis, development, and...

Zoom Unveils AI-Powered All-In-One AI Work Workplace

Zoom has taken a monumental leap forward by introducing Zoom Workplace, an all-encompassing AI-powered...

iPhone Users Beware! Darcula Phishing Service Attacking Via iMessage

Phishing allows hackers to exploit human vulnerabilities and trick users into revealing sensitive information...
Guru baran
Guru baranhttps://gbhackers.com
Gurubaran is a co-founder of Cyber Security News and GBHackers On Security. He has 10+ years of experience as a Security Consultant, Editor, and Analyst in cybersecurity, technology, and communications.

Mitigating Vulnerability Types & 0-day Threats

Mitigating Vulnerability & 0-day Threats

Alert Fatigue that helps no one as security teams need to triage 100s of vulnerabilities.

  • The problem of vulnerability fatigue today
  • Difference between CVSS-specific vulnerability vs risk-based vulnerability
  • Evaluating vulnerabilities based on the business impact/risk
  • Automation to reduce alert fatigue and enhance security posture significantly

Related Articles