Sunday, May 25, 2025
HomeAIMalicious AI Tools See 200% Surge as ChatGPT Jailbreaking Talks Increase by...

Malicious AI Tools See 200% Surge as ChatGPT Jailbreaking Talks Increase by 52%

Published on

SIEM as a Service

Follow Us on Google News

The cybersecurity landscape in 2024 witnessed a significant escalation in AI-related threats, with malicious actors increasingly targeting and exploiting large language models (LLMs).

According to KELA’s annual “State of Cybercrime” report, discussions about exploiting popular LLMs such as ChatGPT, Copilot, and Gemini surged by 94% compared to the previous year.

Jailbreaking Techniques Proliferate on Underground Forums

Cybercriminals have been actively sharing and developing new jailbreaking techniques on underground forums, with dedicated sections emerging on platforms like HackForums and XSS.

- Advertisement - Google News

These techniques aim to bypass the built-in safety limitations of LLMs, enabling the creation of malicious content such as phishing emails and malware code.

One of the most effective jailbreaking methods identified by KELA was word transformation, which successfully bypassed 27% of safety tests.

This technique involves replacing sensitive words with synonyms or splitting them into substrings to evade detection.

Massive Increase in Compromised LLM Accounts

The report revealed a staggering rise in the number of compromised accounts for popular LLM platforms.

ChatGPT saw an alarming increase from 154,000 compromised accounts in 2023 to 3 million in 2024, representing a growth of nearly 1,850%.

Similarly, Gemini (formerly Bard) experienced a surge from 12,000 to 174,000 compromised accounts, a 1,350% increase.

These compromised credentials, obtained through infostealer malware, pose a significant risk as they can be leveraged for further abuse of LLMs and associated services.

As LLM technologies continue to gain popularity and integration across various platforms, KELA anticipates the emergence of new attack surfaces in 2025.

Prompt injection is identified as one of the most critical threats against generative AI applications, while agentic AI, capable of autonomous actions and decision-making, presents a novel attack vector.

The report emphasizes the need for organizations to implement robust security measures, including secure LLM integrations, advanced deepfake detection technologies, and comprehensive user education on AI-related threats.

As the line between cybercrime and state-sponsored activities continues to blur, proactive threat intelligence and adaptive defense strategies will be crucial in mitigating the evolving risks posed by AI-powered cyber threats.

Investigate Real-World Malicious Links & Phishing Attacks With Threat Intelligence Lookup – Try for Free

Aman Mishra
Aman Mishra
Aman Mishra is a Security and privacy Reporter covering various data breach, cyber crime, malware, & vulnerability.

Latest articles

Zero-Trust Policy Bypass Enables Exploitation of Vulnerabilities and Manipulation of NHI Secrets

A new project has exposed a critical attack vector that exploits protocol vulnerabilities to...

Threat Actor Sells Burger King Backup System RCE Vulnerability for $4,000

A threat actor known as #LongNight has reportedly put up for sale remote code...

Chinese Nexus Hackers Exploit Ivanti Endpoint Manager Mobile Vulnerability

Ivanti disclosed two critical vulnerabilities, identified as CVE-2025-4427 and CVE-2025-4428, affecting Ivanti Endpoint Manager...

Hackers Target macOS Users with Fake Ledger Apps to Deploy Malware

Hackers are increasingly targeting macOS users with malicious clones of Ledger Live, the popular...

Resilience at Scale

Why Application Security is Non-Negotiable

The resilience of your digital infrastructure directly impacts your ability to scale. And yet, application security remains a critical weak link for most organizations.

Application Security is no longer just a defensive play—it’s the cornerstone of cyber resilience and sustainable growth. In this webinar, Karthik Krishnamoorthy (CTO of Indusface) and Phani Deepak Akella (VP of Marketing – Indusface), will share how AI-powered application security can help organizations build resilience by

Discussion points


Protecting at internet scale using AI and behavioral-based DDoS & bot mitigation.
Autonomously discovering external assets and remediating vulnerabilities within 72 hours, enabling secure, confident scaling.
Ensuring 100% application availability through platforms architected for failure resilience.
Eliminating silos with real-time correlation between attack surface and active threats for rapid, accurate mitigation

More like this

Zero-Trust Policy Bypass Enables Exploitation of Vulnerabilities and Manipulation of NHI Secrets

A new project has exposed a critical attack vector that exploits protocol vulnerabilities to...

Threat Actor Sells Burger King Backup System RCE Vulnerability for $4,000

A threat actor known as #LongNight has reportedly put up for sale remote code...

Chinese Nexus Hackers Exploit Ivanti Endpoint Manager Mobile Vulnerability

Ivanti disclosed two critical vulnerabilities, identified as CVE-2025-4427 and CVE-2025-4428, affecting Ivanti Endpoint Manager...