Check Point Research (CPR) has revealed that cybercriminals are increasingly leveraging the newly launched AI models, DeepSeek and Qwen, to create malicious content.
These models, which lack robust anti-abuse provisions, have quickly become a preferred choice for threat actors over more regulated platforms like ChatGPT.
The exploitation of these tools highlights a concerning shift in the cyber threat landscape, where even low-skilled attackers can harness advanced AI capabilities to execute sophisticated attacks.
Unlike ChatGPT, which has implemented stringent anti-abuse mechanisms over the years, DeepSeek and Qwen appear to offer minimal resistance to misuse.
Threat actors are actively sharing methods to manipulate these models through techniques known as “jailbreaking,” enabling them to bypass restrictions and generate uncensored or harmful content.
Jailbreaking techniques such as the “Do Anything Now” and “Plane Crash Survivors” methods are being widely circulated among cybercriminal communities, further facilitating the misuse of these AI systems.
The misuse of DeepSeek and Qwen has already led to alarming real-world consequences.
For instance, CPR has identified cases where these models were used to develop infostealer malware designed to extract sensitive information from unsuspecting users.
Additionally, cybercriminals have employed DeepSeek to bypass banking anti-fraud protections, potentially enabling large-scale financial theft.
Another troubling trend involves the use of these AI tools in mass spam distribution campaigns.
By combining ChatGPT with DeepSeek and Qwen, attackers are optimizing scripts for spam operations, enhancing their efficiency and reach.
This multi-platform approach underscores the growing sophistication of cyberattacks driven by generative AI technologies.
The rapid adoption of DeepSeek and Qwen by malicious actors highlights the urgent need for stronger safeguards in emerging AI technologies.
As uncensored versions of these models begin to surface in online repositories, mirroring what was previously observed with ChatGPT, the risks associated with their misuse are expected to escalate further.
These tools are not only empowering seasoned hackers but also enabling low-skilled individuals to execute complex attacks without extensive technical expertise.
Check Point Research warns that the emergence of these unregulated AI models represents a significant challenge for cybersecurity professionals.
Without proactive measures to address vulnerabilities and prevent misuse, organizations risk exposure to malware development, financial fraud, and other cyber threats fueled by generative AI advancements.
As the race to develop next-generation AI tools continues, prioritizing security will be critical in mitigating the risks associated with their exploitation.