
Hackers Exploit DeepSeek & Qwen AI Models for Malware Development

Check Point Research (CPR) has revealed that cybercriminals are increasingly leveraging the newly launched AI models DeepSeek and Qwen to create malicious content.

These models, which lack robust anti-abuse provisions, have quickly become a preferred choice for threat actors over more regulated platforms like ChatGPT.

The exploitation of these tools highlights a concerning shift in the cyber threat landscape, where even low-skilled attackers can harness advanced AI capabilities to execute sophisticated attacks.

Unlike ChatGPT, which has implemented stringent anti-abuse mechanisms over the years, DeepSeek and Qwen appear to offer minimal resistance to misuse.

Threat actors are actively sharing methods to manipulate these models through techniques known as “jailbreaking,” enabling them to bypass restrictions and generate uncensored or harmful content.

Jailbreaking techniques such as the “Do Anything Now” and “Plane Crash Survivors” methods are being widely circulated among cybercriminal communities, further facilitating the misuse of these AI systems.

[Figure: Example of a jailbreaking prompt shared among threat actors]

Real-World Exploits

The misuse of DeepSeek and Qwen has already led to alarming real-world consequences.

For instance, CPR has identified cases where these models were used to develop infostealer malware designed to extract sensitive information from unsuspecting users.

Additionally, cybercriminals have employed DeepSeek to bypass banking anti-fraud protections, potentially enabling large-scale financial theft.

Another troubling trend involves the use of these AI tools in mass spam distribution campaigns.

By combining ChatGPT with DeepSeek and Qwen, attackers are optimizing scripts for spam operations, enhancing their efficiency and reach.

This multi-platform approach underscores the growing sophistication of cyberattacks driven by generative AI technologies.

The Rising Threat of Unregulated AI Models

The rapid adoption of DeepSeek and Qwen by malicious actors underscores the urgent need for stronger safeguards in emerging AI technologies.

As uncensored versions of these models begin to surface in online repositories, mirroring what was previously observed with ChatGPT, the risks associated with their misuse are expected to escalate further.

These tools are not only empowering seasoned hackers but also enabling low-skilled individuals to execute complex attacks without extensive technical expertise.

Check Point Research warns that the emergence of these unregulated AI models represents a significant challenge for cybersecurity professionals.

Without proactive measures to address vulnerabilities and prevent misuse, organizations risk exposure to malware development, financial fraud, and other cyber threats fueled by generative AI advancements.

As the race to develop next-generation AI tools continues, prioritizing security will be critical in mitigating the risks associated with their exploitation.


Aman Mishra

Aman Mishra is a security and privacy reporter covering data breaches, cybercrime, malware, and vulnerabilities.
