Cyber Security News

Malicious AI Tools See 200% Surge as ChatGPT Jailbreaking Talks Increase by 52%

The cybersecurity landscape in 2024 witnessed a significant escalation in AI-related threats, with malicious actors increasingly targeting and exploiting large language models (LLMs).

According to KELA’s annual “State of Cybercrime” report, discussions about exploiting popular LLMs such as ChatGPT, Copilot, and Gemini surged by 94% compared to the previous year.

Jailbreaking Techniques Proliferate on Underground Forums

Cybercriminals have been actively sharing and developing new jailbreaking techniques on underground forums, with dedicated sections emerging on platforms like HackForums and XSS.

These techniques aim to bypass the built-in safety limitations of LLMs, enabling the creation of malicious content such as phishing emails and malware code.

One of the most effective jailbreaking methods identified by KELA was word transformation, which successfully bypassed 27% of safety tests.

This technique involves replacing sensitive words with synonyms or splitting them into substrings to evade detection.

Massive Increase in Compromised LLM Accounts

The report revealed a staggering rise in the number of compromised accounts for popular LLM platforms.

ChatGPT saw an alarming increase from 154,000 compromised accounts in 2023 to 3 million in 2024, representing a growth of nearly 1,850%.

Similarly, Gemini (formerly Bard) experienced a surge from 12,000 to 174,000 compromised accounts, a 1,350% increase.

These compromised credentials, obtained through infostealer malware, pose a significant risk as they can be leveraged for further abuse of LLMs and associated services.

As LLM technologies continue to gain popularity and integration across various platforms, KELA anticipates the emergence of new attack surfaces in 2025.

Prompt injection is identified as one of the most critical threats against generative AI applications, while agentic AI, capable of autonomous actions and decision-making, presents a novel attack vector.

The report emphasizes the need for organizations to implement robust security measures, including secure LLM integrations, advanced deepfake detection technologies, and comprehensive user education on AI-related threats.

As the line between cybercrime and state-sponsored activities continues to blur, proactive threat intelligence and adaptive defense strategies will be crucial in mitigating the evolving risks posed by AI-powered cyber threats.

Investigate Real-World Malicious Links & Phishing Attacks With Threat Intelligence Lookup – Try for Free

Aman Mishra

Aman Mishra is a Security and privacy Reporter covering various data breach, cyber crime, malware, & vulnerability.

Recent Posts

Exim Use-After-Free Vulnerability Enables Privilege Escalation

A significant security threat has been uncovered in Exim, a popular open-source mail transfer agent…

12 minutes ago

OpenAI Offers Up to $100,000 for Critical Infrastructure Vulnerability Reports

OpenAI has announced major updates to its cybersecurity initiatives. The company is expanding its Security…

22 minutes ago

Splunk RCE Vulnerability Enables Remote Code Execution via File Upload

A severe vulnerability in Splunk Enterprise and Splunk Cloud Platform has been identified, allowing for…

39 minutes ago

12 Cybercriminals Arrested After Ghost Communication Platform Shutdown

Law enforcement agencies have successfully dismantled a clandestine communication platform known as "Ghost," which was…

58 minutes ago

Threat Actors Use “Atlantis AIO” Tool to Automate Credential Stuffing Attacks

In a concerning development for cybersecurity professionals, threat actors are increasingly utilizing a powerful tool…

13 hours ago

Hackers Exploit COM Objects for Fileless Malware and Lateral Movement

Security researchers Dylan Tran and Jimmy Bayne have unveiled a new fileless lateral movement technique…

13 hours ago