Sunday, May 4, 2025

GhostGPT – Jailbroken ChatGPT Variant that Creates Malware & Exploits


Artificial intelligence (AI) tools have revolutionized how we approach everyday tasks, but they also come with a dark side.

Cybercriminals are increasingly exploiting AI for malicious purposes, as evidenced by the emergence of uncensored chatbots like WormGPT, WolfGPT, and EscapeGPT.

The latest and most concerning addition to this list is GhostGPT, a jailbroken variant of ChatGPT designed specifically for illegal activities.


Recently uncovered by Abnormal Security researchers, GhostGPT has introduced new capabilities for malicious actors, raising serious ethical and cybersecurity concerns.

What Is GhostGPT?

GhostGPT is a chatbot tailored for cybercriminal use. It removes the ethical barriers and safety protocols embedded in conventional AI systems, enabling unfiltered and unrestricted responses to harmful queries.


By using a jailbroken version of ChatGPT or an open-source large language model (LLM), GhostGPT bypasses safeguards meant to prevent AI from contributing to illegal activities. According to its promotional materials, GhostGPT boasts the following features:

  • Fast Processing: It delivers rapid responses, allowing users to produce malicious content or technical exploits efficiently.
  • No Logs Policy: GhostGPT claims to retain no activity logs, providing users with anonymity and a sense of security.
  • Ease of Access: Unlike conventional LLMs, which require jailbreak prompts or technical know-how before they can be misused, GhostGPT is marketed and sold on Telegram, making it immediately accessible to buyers.

Capabilities and Uses

GhostGPT is advertised as a tool for a range of criminal activities, including:

  1. Malware Development: It can generate or refine computer viruses and other malicious code.
  2. Phishing Campaigns: The chatbot can draft persuasive emails for business email compromise (BEC) scams.
  3. Exploit Creation: GhostGPT assists in identifying and exploiting vulnerabilities in software and systems.

While its promotional materials mention potential “cybersecurity” uses, these claims appear disingenuous, especially given its availability on cybercrime forums and advertising targeted at malicious actors.

 The chatbot produced a convincing template with ease

In a test, Abnormal Security researchers asked GhostGPT to create a phishing email resembling a DocuSign notification. The result was a polished, convincing email template, showcasing the bot’s ability to assist in social engineering attacks with ease.

The emergence of GhostGPT signals a troubling trend in AI misuse, raising several critical concerns:

  1. Lowering Barriers for Cybercrime
    With GhostGPT, even individuals with minimal technical skills can engage in cybercrime. The chatbot’s simple Telegram-based delivery system eliminates the need for expertise or extensive setup.
  2. Enhanced Cybercriminal Capabilities
    Attackers now have the power to develop malware, scams, and exploits faster and more efficiently than ever before. This dramatically reduces the time and effort required to execute sophisticated attacks.
  3. Increased Risk of AI-Driven Cybercrime
    The popularity of GhostGPT—evident through widespread mentions on criminal forums—highlights the growing interest in leveraging AI for illegal acts. Its existence amplifies concerns about the misuse of generative AI in the hands of malicious actors.

As AI technology evolves, tools like GhostGPT highlight the pressing need for enhanced regulatory frameworks and improved security measures.

The rise of jailbroken AI models shows that even the most advanced technology can become a double-edged sword, with transformative benefits on one side and serious risks on the other.

Cybersecurity experts, policymakers, and AI developers must act swiftly to curb the proliferation of uncensored AI chatbots before they further empower malicious actors and erode trust in the technology.

GhostGPT is not just a wake-up call but a warning of the immense challenges ahead in securing the future of AI.


Divya
Divya is a Senior Journalist at GBhackers covering Cyber Attacks, Threats, Breaches, Vulnerabilities and other happenings in the cyber world.
