Thursday, May 15, 2025

Hackers Moving To AI But Lagging Behind The Defenders In Adoption Rates


Hackers are actively exploiting generative AI for cyberattacks, and threat actors are exploring new ways to abuse advanced LLMs such as ChatGPT.

They can leverage large language models (LLMs) and generative AI for a range of malicious purposes, including phishing, social engineering, malware generation, credential stuffing, disinformation, and automated hacking.

Cybersecurity researchers at Trend Micro recently found that hackers are actively moving to AI, but are lagging behind defenders in adoption rates.


Hackers Moving To AI

The criminal underground has seen a rise in “jailbreak-as-a-service” offerings that provide anonymous access to legitimate language models such as ChatGPT, with prompts that are constantly updated to bypass ethical restrictions.


Some services, such as EscapeGPT and LoopGPT, openly advertise jailbreaks, while others, like BlackhatGPT, first pretend to be exclusive criminal LLM providers before revealing that they simply sit on top of OpenAI’s API with jailbreaking prompts.

EscapeGPT (Source – Trend Micro)
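To illustrate the "thin wrapper" architecture described above: such services add little beyond prepending their own system prompt to the user's message before forwarding it to a mainstream LLM API. The sketch below is a hypothetical, benign illustration of that pattern (the endpoint URL is OpenAI's real Chat Completions endpoint; the function name, model choice, and placeholder prompts are illustrative assumptions, not taken from any actual service).

```python
import json

# Upstream endpoint such a wrapper would POST to (requires an API key in practice).
OPENAI_CHAT_URL = "https://api.openai.com/v1/chat/completions"


def build_wrapper_request(user_message: str, system_prompt: str) -> dict:
    """Construct the JSON payload a thin wrapper service would forward upstream.

    The wrapper's only "value-add" is the injected system prompt; everything
    else is a pass-through of the user's input to the underlying model.
    """
    return {
        "model": "gpt-3.5-turbo",
        "messages": [
            {"role": "system", "content": system_prompt},  # wrapper-controlled
            {"role": "user", "content": user_message},     # pass-through
        ],
    }


# Benign placeholder prompts, purely to show the payload shape.
payload = build_wrapper_request("Hello", "You are a helpful assistant.")
print(json.dumps(payload, indent=2))
```

Because the wrapper controls only the system prompt, providers can counter these services by detecting and refusing known jailbreak prompts upstream, which is why the prompts must be "constantly updated" as the article notes.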

This ongoing contest between criminals trying to defeat AI safety controls and providers trying to keep their products from being jailbroken has created a new illicit market for unrestricted conversational AI capabilities.

BlackHatGPT (Source – Trend Micro)

Flowgpt.com is one of the platforms that LoopGPT can leverage to create language models tied to individual system prompts, which can potentially open the door to “illegal” or unrestricted AI assistants.

Moreover, there has been a surge in fraudulent, unverified offerings that claim to be very powerful but lack any proof; these may be scams or abandoned projects, like FraudGPT, which was heavily advertised but never confirmed.

Threat actors are leveraging generative AI for two main purposes:-

  • Developing malware and malicious tools, similar to widespread LLM adoption by software developers.
  • Improving social engineering tactics by crafting scam scripts, and scaling phishing campaigns with urgency/multi-language capabilities enabled by LLMs.

Spam toolkits like GoMailPro and Predator have integrated ChatGPT features for email content translation/generation. 

Deepfake image (Source – Trend Micro)

Additionally, deepfake services are emerging, with criminals offering celebrity image and video manipulation for prices ranging from $10 to over $500, including targeted offerings that use synthetic identities to bypass KYC verification at financial institutions.

Overall, generative AI expands threat actors’ capabilities across coding and social engineering domains.


Tushar Subhra
Tushar is a cybersecurity content editor with a passion for creating captivating and informative content. With years of experience in cybersecurity, he covers cybersecurity news, technology, and related stories.
