Friday, May 24, 2024

Hackers Moving To AI But Lagging Behind Defenders In Adoption Rates

Hackers are actively exploiting generative AI for cyber attacks, and threat actors continue to explore new ways to abuse advanced LLMs like ChatGPT.

They can leverage Large Language Models (LLMs) and generative AI for a range of malicious purposes, including phishing, social engineering, malware generation, credential-stuffing attacks, fake news, disinformation, and automated hacking.

Cybersecurity researchers at Trend Micro recently found that hackers are actively moving to AI but lagging behind defenders in adoption rates.

Hackers Moving To AI

The criminal underworld has seen a rise in “jailbreak-as-a-service” offerings that provide anonymous access to legitimate language models like ChatGPT, with prompts that are constantly updated to bypass ethical restrictions.


Some services, such as EscapeGPT and LoopGPT, openly advertise jailbreaks, while others like BlackhatGPT first pretend to be exclusive criminal LLM providers before revealing they just sit on top of OpenAI’s API with jailbreaking prompts.

EscapeGPT (Source – Trend Micro)

This ongoing contest between criminals trying to defeat AI safeguards and providers trying to keep their products from being jailbroken has created a new illicit market for unrestricted conversational AI capabilities.

BlackHatGPT (Source – Trend Micro)

Meanwhile, LoopGPT can create language models tailored to individual system prompts, potentially providing room for “illegal” or unrestricted AI assistants.

Moreover, there has been a surge in fraudulent, unverified offerings that claim to be very powerful but provide no proof; these may be scams or abandoned projects, such as FraudGPT, which was heavily advertised but never confirmed to exist.

Threat actors are leveraging generative AI for two main purposes:

  • Developing malware and malicious tools, similar to widespread LLM adoption by software developers.
  • Improving social engineering tactics by crafting scam scripts and scaling phishing campaigns with the urgency and multi-language capabilities enabled by LLMs.

Spam toolkits like GoMailPro and Predator have integrated ChatGPT features for email content translation/generation. 

Deepfake image (Source – Trend Micro)

Additionally, deepfake services are emerging, with criminals offering celebrity image and video manipulation for $10 to $500+, including targeted offerings that use synthetic identities to bypass KYC verification at financial institutions.

Overall, generative AI expands threat actors’ capabilities across coding and social engineering domains.



Tushar Subhra Dutta

Tushar is a cybersecurity content editor with a passion for creating captivating and informative content. With years of experience in cybersecurity, he covers cybersecurity news, technology, and other news.
