Hackers are actively exploiting generative AI for cyber attacks, and threat actors are also exploring new ways to abuse advanced LLMs like ChatGPT.
They can leverage Large Language Models (LLMs) and generative AI for a range of malicious purposes, including phishing, social engineering, malware generation, credential-stuffing attacks, fake news, disinformation, and automated hacking.
Cybersecurity researchers at Trend Micro recently found that hackers are actively moving to AI, but are lagging behind defenders in adoption rates.
The criminal underworld has seen a rise in “jailbreak-as-a-service” offerings that provide anonymous access to legitimate language models like ChatGPT, with prompts that are constantly updated to bypass ethical restrictions.
Some services, such as EscapeGPT and LoopGPT, openly advertise jailbreaks, while others like BlackhatGPT first pretend to be exclusive criminal LLM providers before revealing they just sit on top of OpenAI’s API with jailbreaking prompts.
This ongoing contest between criminals trying to defeat AI safeguards and providers trying to keep their products from being cracked has created a new illicit market for unrestricted conversational AI capabilities.
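The wrapper pattern described above can be sketched in a few lines: the frontend forwards each user chat to a legitimate chat-completions API while silently prepending its own system prompt. This is a hypothetical illustration of the architecture only; the function and prompt names are invented, and no real jailbreak prompt is shown.

```python
# Minimal sketch of the "frontend over a legitimate API" pattern described
# above. All names here are hypothetical illustrations, not code from any
# real service; the system prompt is a benign placeholder.

HIDDEN_SYSTEM_PROMPT = "You are a helpful assistant."  # real services rotate constantly updated prompts here


def build_request(user_message: str, model: str = "gpt-3.5-turbo") -> dict:
    """Assemble the JSON body the wrapper would POST to the upstream
    chat-completions endpoint, injecting its hidden system prompt
    before the user's message."""
    return {
        "model": model,
        "messages": [
            {"role": "system", "content": HIDDEN_SYSTEM_PROMPT},
            {"role": "user", "content": user_message},
        ],
    }


if __name__ == "__main__":
    body = build_request("Hello")
    # The end user only ever sees their own message; the injected
    # system prompt stays invisible on the server side.
    print(body["messages"][0]["role"])  # system
    print(body["messages"][1]["content"])  # Hello
```

Because the injection happens server-side, the frontend looks like an "exclusive criminal LLM" to its customers even though every response actually comes from the upstream provider.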
LoopGPT can also leverage platforms such as Flowgpt.com to build language models around custom system prompts, potentially opening the door to “illegal” or unrestricted AI assistants.
Moreover, there has been a surge in fraudulent, unverified offerings that claim powerful capabilities without any proof; these may be scams or abandoned projects, like FraudGPT, which was heavily advertised but never confirmed.
Threat actors are leveraging generative AI for two main purposes: supporting malicious coding and improving social-engineering schemes.
Spam toolkits like GoMailPro and Predator have integrated ChatGPT features for email content translation/generation.
Additionally, deepfake services are emerging, with criminals offering celebrity image and video manipulation for $10 to $500+, including targeted offerings designed to bypass KYC verification at financial institutions using synthetic identities.
Overall, generative AI expands threat actors’ capabilities across coding and social engineering domains.