
BEAST AI Jailbreaks Language Models Within 1 Minute With High Accuracy


Malicious hackers jailbreak language models (LMs) to bypass their safety guardrails and abuse them for a wide range of illicit activities.

Such attacks are also driven by the desire to extract sensitive information, inject malicious material, and tamper with a model's integrity.

Cybersecurity researchers from the University of Maryland, College Park, USA, developed BEAST, an attack that jailbreaks language models within one minute with high accuracy. The research team includes:

  • Vinu Sankar Sadasivan
  • Shoumik Saha
  • Gaurang Sriramanan
  • Priyatham Kattakinda
  • Atoosa Chegini
  • Soheil Feizi

Language models (LMs) have recently gained massive popularity for tasks like Q&A and code generation. Alignment techniques aim to keep their behavior consistent with human values for safety, yet they can still be manipulated.

The recent findings reveal flaws in aligned LMs that allow harmful content generation, a practice termed “jailbreaking.”

BEAST AI Jailbreak

Manual prompts can jailbreak LMs (Perez & Ribeiro, 2022). Zou et al. (2023) use gradient-based attacks that yield gibberish suffixes, while Zhu et al. (2023) opt for a readable, gradient-based, greedy attack with a high success rate.

Liu et al. (2023b) and Chao et al. (2023) propose gradient-free attacks, but these require GPT-4 access. Jailbreaks not only induce unsafe LM behavior but also aid privacy attacks (Liu et al., 2023c), which Zhu et al. (2023) automate.

BEAST is a fast, gradient-free, Beam Search-based Adversarial Attack that demonstrates LM vulnerabilities within one GPU minute.

Beam Search-based Adversarial Attack (BEAST) (Source – Arxiv)
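
The article does not reproduce the paper's implementation, but the core idea of a gradient-free, beam-search attack can be illustrated with a short sketch. Everything below is an illustrative assumption rather than the authors' released code: the function names (`score_target`, `beast_sketch`), the parameter names, and their values.

```python
# Minimal sketch of a beam-search-based adversarial suffix attack in the
# spirit of BEAST. Hypothetical parameter names/values; not the reference code.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

MODEL = "lmsys/vicuna-7b-v1.5"  # target model named in the article
tok = AutoTokenizer.from_pretrained(MODEL)
model = AutoModelForCausalLM.from_pretrained(
    MODEL, torch_dtype=torch.float16, device_map="auto"
)
model.eval()

@torch.no_grad()
def score_target(ids: torch.Tensor, target_len: int) -> float:
    """Log-likelihood of the last `target_len` tokens (the harmful target
    continuation); higher is better for the attacker."""
    logits = model(ids.unsqueeze(0).to(model.device)).logits[0, :-1]
    logp = torch.log_softmax(logits.float(), dim=-1)
    next_toks = ids[1:].to(model.device)
    return logp[-target_len:].gather(1, next_toks[-target_len:, None]).sum().item()

@torch.no_grad()
def beast_sketch(prompt: str, target: str, beam_width=15, k2=15, suffix_len=40):
    """Grow an adversarial suffix token by token. Candidate tokens are
    sampled from the model itself (which keeps suffixes readable), and only
    the `beam_width` suffixes that best elicit the target are kept."""
    prompt_ids = tok(prompt, return_tensors="pt").input_ids[0]
    target_ids = tok(target, add_special_tokens=False, return_tensors="pt").input_ids[0]
    beams = [prompt_ids]
    for _ in range(suffix_len):
        candidates = []
        for seq in beams:
            logits = model(seq.unsqueeze(0).to(model.device)).logits[0, -1]
            probs = torch.softmax(logits.float(), dim=-1)
            for t in torch.multinomial(probs, k2):  # plausible next tokens
                candidates.append(torch.cat([seq, t.view(1).cpu()]))
        # Rank candidates by how strongly they elicit the target continuation.
        candidates.sort(
            key=lambda s: score_target(torch.cat([s, target_ids]), target_ids.numel()),
            reverse=True,
        )
        beams = candidates[:beam_width]
    return tok.decode(beams[0][prompt_ids.numel():])
```

Because no gradients are needed, the attack requires only forward passes, which is what makes it feasible within a single GPU minute; the beam width and the number of sampled candidates are the knobs behind the speed, success, and readability tradeoff mentioned below.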

It exposes tunable parameters to trade off attack speed, success rate, and the readability of the adversarial prompts. BEAST excels at jailbreaking, achieving an 89% success rate on Vicuna-7B-v1.5 within one minute.

Human studies show that BEAST's efficient hallucination attacks make LM chatbots markedly less useful, producing 15% more incorrect outputs and 22% more irrelevant content.
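
The hallucination numbers above come from the paper's separate hallucination attack. Purely as an illustration (the paper's actual hallucination objective differs from this fixed-target stand-in), the sketch above could be pointed at a clean question with a deliberately wrong target:

```python
# Illustrative only: reusing the jailbreak-style sketch with an intentionally
# false target to degrade a clean prompt; not the paper's hallucination objective.
suffix = beast_sketch(
    prompt="In what year did the Apollo 11 moon landing take place?",
    target="The Apollo 11 moon landing took place in 1985.",  # intentionally false
    suffix_len=20,
)
print("adversarial suffix:", suffix)
```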

Unlike other methods, BEAST is designed primarily for fast adversarial attacks and excels at jailbreaking aligned LMs in compute-constrained settings.

However, the researchers found that it struggles against the carefully fine-tuned LLaMA-2-7B-Chat, which remains a limitation.

The researchers used Amazon Mechanical Turk to run manual surveys on LM jailbreaking and hallucination, with workers assessing prompts carrying BEAST-generated suffixes.

Each response from Vicuna-7B-v1.5 was shown to five workers per prompt. For the hallucination study, workers evaluated the LM's responses to both clean and adversarial versions of the same prompts.
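
As a rough sketch of how the five-workers-per-prompt design might be aggregated, a simple majority vote could be taken per prompt; the label schema and the vote rule here are assumptions, not the authors' stated protocol:

```python
# Hypothetical aggregation of 5 worker labels per prompt via majority vote.
from collections import Counter

def majority_label(labels: list[str]) -> str:
    """Most common worker label for one prompt (ties broken arbitrarily)."""
    return Counter(labels).most_common(1)[0][0]

ratings = {"prompt_1": ["harmful", "harmful", "safe", "harmful", "harmful"]}
print({p: majority_label(ls) for p, ls in ratings.items()})
# {'prompt_1': 'harmful'}
```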

This work contributes to the development of machine learning by identifying security flaws inherent in today's LMs.

By exposing how easily aligned models can be subverted, the researchers also open the door to future research on more reliable and secure language models.


Tushar Subhra
Tushar is a cybersecurity content editor with a passion for creating captivating and informative content. With years of experience in cybersecurity, he covers cybersecurity news, technology, and related topics.
