Tuesday, April 22, 2025
HomeAIMalicious AI Tools See 200% Surge as ChatGPT Jailbreaking Talks Increase by...

Malicious AI Tools See 200% Surge as ChatGPT Jailbreaking Talks Increase by 52%

Published on

SIEM as a Service

Follow Us on Google News

The cybersecurity landscape in 2024 witnessed a significant escalation in AI-related threats, with malicious actors increasingly targeting and exploiting large language models (LLMs).

According to KELA’s annual “State of Cybercrime” report, discussions about exploiting popular LLMs such as ChatGPT, Copilot, and Gemini surged by 94% compared to the previous year.

Jailbreaking Techniques Proliferate on Underground Forums

Cybercriminals have been actively sharing and developing new jailbreaking techniques on underground forums, with dedicated sections emerging on platforms like HackForums and XSS.

- Advertisement - Google News

These techniques aim to bypass the built-in safety limitations of LLMs, enabling the creation of malicious content such as phishing emails and malware code.

One of the most effective jailbreaking methods identified by KELA was word transformation, which successfully bypassed 27% of safety tests.

This technique involves replacing sensitive words with synonyms or splitting them into substrings to evade detection.

Massive Increase in Compromised LLM Accounts

The report revealed a staggering rise in the number of compromised accounts for popular LLM platforms.

ChatGPT saw an alarming increase from 154,000 compromised accounts in 2023 to 3 million in 2024, representing a growth of nearly 1,850%.

Similarly, Gemini (formerly Bard) experienced a surge from 12,000 to 174,000 compromised accounts, a 1,350% increase.

These compromised credentials, obtained through infostealer malware, pose a significant risk as they can be leveraged for further abuse of LLMs and associated services.

As LLM technologies continue to gain popularity and integration across various platforms, KELA anticipates the emergence of new attack surfaces in 2025.

Prompt injection is identified as one of the most critical threats against generative AI applications, while agentic AI, capable of autonomous actions and decision-making, presents a novel attack vector.

The report emphasizes the need for organizations to implement robust security measures, including secure LLM integrations, advanced deepfake detection technologies, and comprehensive user education on AI-related threats.

As the line between cybercrime and state-sponsored activities continues to blur, proactive threat intelligence and adaptive defense strategies will be crucial in mitigating the evolving risks posed by AI-powered cyber threats.

Investigate Real-World Malicious Links & Phishing Attacks With Threat Intelligence Lookup – Try for Free

Aman Mishra
Aman Mishra
Aman Mishra is a Security and privacy Reporter covering various data breach, cyber crime, malware, & vulnerability.

Latest articles

Hackers Exploit Cloudflare Tunnel Infrastructure to Deploy Multiple Remote Access Trojans

The Sekoia TDR (Threat Detection & Research) team has reported on a sophisticated network...

Threat Actors Leverage npm and PyPI with Impersonated Dev Tools for Credential Theft

The Socket Threat Research Team has unearthed a trio of malicious packages, two hosted...

Hackers Exploit Legitimate Microsoft Utility to Deliver Malicious DLL Payload

Hackers are now exploiting a legitimate Microsoft utility, mavinject.exe, to inject malicious DLLs into...

Cybercriminals Exploit Network Edge Devices to Infiltrate SMBs

Small and midsized businesses (SMBs) continue to be prime targets for cybercriminals, with network...

Resilience at Scale

Why Application Security is Non-Negotiable

The resilience of your digital infrastructure directly impacts your ability to scale. And yet, application security remains a critical weak link for most organizations.

Application Security is no longer just a defensive play—it’s the cornerstone of cyber resilience and sustainable growth. In this webinar, Karthik Krishnamoorthy (CTO of Indusface) and Phani Deepak Akella (VP of Marketing – Indusface), will share how AI-powered application security can help organizations build resilience by

Discussion points


Protecting at internet scale using AI and behavioral-based DDoS & bot mitigation.
Autonomously discovering external assets and remediating vulnerabilities within 72 hours, enabling secure, confident scaling.
Ensuring 100% application availability through platforms architected for failure resilience.
Eliminating silos with real-time correlation between attack surface and active threats for rapid, accurate mitigation

More like this

Hackers Exploit Cloudflare Tunnel Infrastructure to Deploy Multiple Remote Access Trojans

The Sekoia TDR (Threat Detection & Research) team has reported on a sophisticated network...

Threat Actors Leverage npm and PyPI with Impersonated Dev Tools for Credential Theft

The Socket Threat Research Team has unearthed a trio of malicious packages, two hosted...

Hackers Exploit Legitimate Microsoft Utility to Deliver Malicious DLL Payload

Hackers are now exploiting a legitimate Microsoft utility, mavinject.exe, to inject malicious DLLs into...