A sophisticated phishing campaign impersonating OpenAI’s ChatGPT Premium subscription service has surged globally, targeting users with fraudulent payment requests to steal credentials.
Cybersecurity firm Symantec recently identified emails spoofing ChatGPT’s branding, urging recipients to renew a fictional $24 monthly subscription.
The emails, marked with subject lines like “Action Required: Secure Continued Access to ChatGPT with a $24 Monthly Subscription,” direct users to malicious links designed to harvest login details and financial information.
Exploiting ChatGPT’s Popularity
The scam leverages ChatGPT’s widespread adoption, mirroring legitimate OpenAI communications to appear authentic.
Emails often include official-looking logos and typography, with body text warning that access to “premium features” will lapse unless payment details are updated.
Embedded links route users to phishing domains such as fnjrolpa.com, which host counterfeit OpenAI login pages.
Symantec noted that these domains, though now offline, were registered from international IP addresses to obscure their origin, complicating attribution.
Barracuda Networks observed similar campaigns in late 2024, where over 1,000 emails originated from the domain topmarinelogistics.com—a sender address unrelated to OpenAI.
Despite passing SPF and DKIM authentication checks, the emails contained subtle red flags, including mismatched dates and urgent language uncommon in official correspondence.
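To illustrate why a passing SPF/DKIM result alone is not proof of legitimacy, the Python sketch below flags messages that authenticate successfully but are sent from a domain unrelated to the brand they invoke. The `LEGITIMATE_DOMAINS` allow-list and the `looks_suspicious` helper are hypothetical names chosen for this example, not part of any vendor's tooling.

```python
# Illustrative sketch only: flag mail that passes SPF/DKIM yet is sent from a
# domain with no relation to OpenAI. LEGITIMATE_DOMAINS is an assumed
# allow-list for this example, not an official list.
from email import message_from_string
from email.utils import parseaddr

LEGITIMATE_DOMAINS = {"openai.com", "email.openai.com"}

def looks_suspicious(raw_message: str) -> bool:
    msg = message_from_string(raw_message)
    auth_results = (msg.get("Authentication-Results") or "").lower()
    sender = parseaddr(msg.get("From", ""))[1]
    sender_domain = sender.rpartition("@")[2].lower()

    passes_auth = "spf=pass" in auth_results and "dkim=pass" in auth_results
    # Passing SPF/DKIM only shows the mail really came from the sending
    # domain; it says nothing about whether that domain belongs to OpenAI.
    return passes_auth and sender_domain not in LEGITIMATE_DOMAINS
```

In the Barracuda case, mail from topmarinelogistics.com would authenticate cleanly yet still fail this kind of sender-domain sanity check.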
The Rising Threat of AI-Powered Phishing
This campaign reflects a broader trend of cybercriminals exploiting generative AI tools to enhance phishing efficacy.
Fraudulent services like FraudGPT, a malicious ChatGPT-style tool sold on dark web forums, enable scammers to craft grammatically flawless, contextually convincing emails at scale, bypassing traditional detection methods.
Microsoft’s 2023 analysis highlighted that AI-generated phishing content now supports over 20 languages, broadening attackers’ reach.
“AI-generated scams eliminate telltale spelling errors, making even savvy users vulnerable,” said a Barracuda spokesperson.
To combat these threats, cybersecurity teams recommend:
- Scrutinizing URLs: Authentic OpenAI services use https://chat.openai.com, while phishing sites often rely on misspellings or unusual domains (a minimal automated check is sketched after this list).
- Enabling Multi-Factor Authentication (MFA): Adding layers of security reduces credential theft efficacy.
- Training Programs: Regular employee education on identifying AI-driven scams is essential, as 60% of users struggle to distinguish machine-generated content.
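To make the URL-scrutiny advice concrete, the sketch below accepts a link only if it points to a known OpenAI hostname over HTTPS. The `OFFICIAL_HOSTS` set and the `is_official_openai_link` helper are illustrative assumptions for this example, not an OpenAI-provided API; a real deployment would maintain its own allow-list.

```python
# Illustrative URL check: accept only HTTPS links whose hostname exactly
# matches a known OpenAI domain. OFFICIAL_HOSTS is an assumed allow-list.
from urllib.parse import urlparse

OFFICIAL_HOSTS = {"chat.openai.com", "openai.com", "platform.openai.com"}

def is_official_openai_link(url: str) -> bool:
    parsed = urlparse(url)
    host = (parsed.hostname or "").lower()
    # Look-alike or misspelled domains (e.g. fnjrolpa.com) fail this check,
    # as do plain-HTTP links.
    return parsed.scheme == "https" and host in OFFICIAL_HOSTS

# Example: both of these are rejected.
print(is_official_openai_link("http://chat.openai.com/login"))  # False (not HTTPS)
print(is_official_openai_link("https://fnjrolpa.com/openai"))   # False (unknown host)
```

An exact-match allow-list is deliberately strict; substring checks (e.g. "contains openai.com") are easy for attackers to defeat with domains like openai.com.example.net.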
Phishing remains the most prevalent form of cybercrime, with an estimated 3.4 billion spam emails sent every day, and the average cost of a data breach now exceeds $4 million. Generative AI tools are further lowering the barrier to entry for attackers.
The ChatGPT scam underscores the need for proactive defense strategies, blending technological solutions with user awareness.
OpenAI reiterates that subscription changes are managed solely through its platform and urges users to report suspicious communications directly to the company.