Fake ChatGPT Premium Phishing Scam Spreads to Steal User Credentials


A sophisticated phishing campaign impersonating OpenAI’s ChatGPT Premium subscription service has surged globally, targeting users with fraudulent payment requests to steal credentials.

Cybersecurity firm Symantec recently identified emails spoofing ChatGPT’s branding, urging recipients to renew a fictional $24 monthly subscription.

The emails, marked with subject lines like “Action Required: Secure Continued Access to ChatGPT with a $24 Monthly Subscription,” direct users to malicious links designed to harvest login details and financial information.

Exploiting ChatGPT’s Popularity

The scam leverages ChatGPT’s widespread adoption, mirroring legitimate OpenAI communications to appear authentic.

Emails often include official-looking logos and typography, with body text warning that access to “premium features” will lapse unless payment details are updated.

Embedded links route users to phishing domains such as fnjrolpa.com, which host counterfeit OpenAI login pages. 

Symantec noted that these domains, though now offline, had been registered from international IP addresses to obscure their origins and complicate traceability.
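
For defenders triaging such links, a quick registration-age lookup can help expose throwaway phishing infrastructure. The sketch below is illustrative only: it assumes the third-party python-whois package, and the 90-day threshold is an arbitrary example rather than a figure from the researchers.

    # Illustrative sketch: flag very recently registered domains, a common trait
    # of throwaway phishing infrastructure such as the domain described above.
    # Assumes the third-party "python-whois" package (pip install python-whois);
    # the 90-day threshold is an arbitrary example.
    from datetime import datetime

    import whois  # provided by python-whois

    def domain_age_days(domain: str):
        """Return the domain's age in days, or None if WHOIS data is unavailable."""
        record = whois.whois(domain)
        created = record.creation_date
        if isinstance(created, list):  # some registrars return multiple dates
            created = min(created)
        if created is None:
            return None
        return (datetime.now() - created).days

    age = domain_age_days("fnjrolpa.com")  # phishing domain named in the campaign (now offline)
    if age is not None and age < 90:
        print(f"Warning: domain registered only {age} days ago - treat links with suspicion")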

Barracuda Networks observed similar campaigns in late 2024, in which more than 1,000 emails originated from the domain topmarinelogistics.com, a sender address unrelated to OpenAI.

Despite passing SPF and DKIM authentication checks, the emails contained subtle red flags, including mismatched dates and urgent language uncommon in official correspondence.
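
The SPF and DKIM point is worth making concrete: a passing check only proves that a message really came from the sending domain, not that the domain has anything to do with OpenAI. A minimal sketch of that distinction, using hypothetical headers modeled on the Barracuda findings, follows.

    # Minimal sketch: SPF/DKIM "pass" proves the mail came from the sending domain,
    # not that the sender represents OpenAI. Headers are hypothetical, modeled on
    # the campaign described above.
    from email.message import EmailMessage
    from email.utils import parseaddr

    msg = EmailMessage()
    msg["From"] = "ChatGPT Billing <billing@topmarinelogistics.com>"
    msg["Subject"] = ("Action Required: Secure Continued Access to ChatGPT "
                      "with a $24 Monthly Subscription")
    msg["Authentication-Results"] = ("mx.example.net; "
                                     "spf=pass smtp.mailfrom=topmarinelogistics.com; "
                                     "dkim=pass header.d=topmarinelogistics.com")

    _, from_addr = parseaddr(msg["From"])
    from_domain = from_addr.rsplit("@", 1)[-1].lower()

    auth_results = msg.get("Authentication-Results", "")
    spf_dkim_pass = "spf=pass" in auth_results and "dkim=pass" in auth_results

    # Assumption for this sketch: legitimate OpenAI mail comes from openai.com or
    # its subdomains. A passing check for any other domain only means the attacker
    # authenticated a domain they control.
    is_openai = from_domain == "openai.com" or from_domain.endswith(".openai.com")
    if spf_dkim_pass and not is_openai:
        print(f"Suspicious: SPF/DKIM pass, but sender domain {from_domain} is not an OpenAI domain")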

The Rising Threat of AI-Powered Phishing

This campaign reflects a broader trend of cybercriminals exploiting generative AI tools to enhance phishing efficacy.

Fraudulent services like FraudGPT—a dark web derivative of ChatGPT—enable scammers to craft grammatically flawless, contextually convincing emails at scale, bypassing traditional detection methods.

Microsoft’s 2023 analysis highlighted that AI-generated phishing content now supports over 20 languages, broadening attackers’ reach.

“AI-generated scams eliminate telltale spelling errors, making even savvy users vulnerable,” said a Barracuda spokesperson.

To combat these threats, cybersecurity teams recommend:

  • Scrutinizing URLs: Authentic OpenAI services use https://chat.openai.com, while phishing sites often employ misspellings or unusual domains (a simple check is sketched after this list).
  • Enabling Multi-Factor Authentication (MFA): Adding layers of security reduces credential theft efficacy.
  • Training Programs: Regular employee education on identifying AI-driven scams is essential, as roughly 60% of users struggle to distinguish machine-generated content from human-written text.
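
As a concrete companion to the first recommendation, the sketch below reduces "scrutinize URLs" to a programmatic check. The allow-list of hostnames is an assumption made for the example, not an official OpenAI list.

    # Illustrative sketch of the "scrutinize URLs" advice: accept only HTTPS links
    # whose hostname is, or is a subdomain of, an expected OpenAI host. The
    # allow-list is an assumption for this example, not an official list.
    from urllib.parse import urlparse

    LEGITIMATE_HOSTS = {"chat.openai.com", "openai.com", "chatgpt.com"}

    def looks_legitimate(url: str) -> bool:
        parsed = urlparse(url)
        host = (parsed.hostname or "").lower()
        return parsed.scheme == "https" and (
            host in LEGITIMATE_HOSTS
            or any(host.endswith("." + trusted) for trusted in LEGITIMATE_HOSTS)
        )

    print(looks_legitimate("https://chat.openai.com/"))  # True
    print(looks_legitimate("https://fnjrolpa.com/"))     # False: phishing domain from this campaign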

Phishing remains the most prevalent form of cybercrime, with an estimated 3.4 billion spam emails sent daily. As AI tools lower the barrier to entry for attackers, the average cost of a data breach now exceeds $4 million.

The ChatGPT scam underscores the need for proactive defense strategies, blending technological solutions with user awareness.

OpenAI reiterates that subscription updates are managed solely via its platform, urging users to report suspicious communications directly.

Divya is a Senior Journalist at GBhackers covering Cyber Attacks, Threats, Breaches, Vulnerabilities and other happenings in the cyber world.
