Check Point Research (CPR) has revealed that cybercriminals are increasingly leveraging the newly launched AI models, DeepSeek and Qwen, to create malicious content.
These models, which lack robust anti-abuse provisions, have quickly become a preferred choice for threat actors over more regulated platforms like ChatGPT.
The exploitation of these tools highlights a concerning shift in the cyber threat landscape, where even low-skilled attackers can harness advanced AI capabilities to execute sophisticated attacks.
Unlike ChatGPT, which has implemented stringent anti-abuse mechanisms over the years, DeepSeek and Qwen appear to offer minimal resistance to misuse.
Threat actors are actively sharing methods to manipulate these models through techniques known as “jailbreaking,” enabling them to bypass restrictions and generate uncensored or harmful content.
Jailbreaking techniques such as the “Do Anything Now” and “Plane Crash Survivors” methods are being widely circulated among cybercriminal communities, further facilitating the misuse of these AI systems.
The misuse of DeepSeek and Qwen has already led to alarming real-world consequences.
For instance, CPR has identified cases where these models were used to develop infostealer malware designed to extract sensitive information from unsuspecting users.
Additionally, cybercriminals have employed DeepSeek to bypass banking anti-fraud protections, potentially enabling large-scale financial theft.
Another troubling trend involves the use of these AI tools in mass spam distribution campaigns.
By combining ChatGPT with DeepSeek and Qwen, attackers are optimizing scripts for spam operations, enhancing their efficiency and reach.
This multi-platform approach underscores the growing sophistication of cyberattacks driven by generative AI technologies.
The rapid adoption of DeepSeek and Qwen by malicious actors highlights the urgent need for stronger safeguards in emerging AI technologies.
As uncensored versions of these models begin to surface in online repositories, similar to what was previously observed with ChatGPT, the risks associated with their misuse are expected to escalate further.
These tools are not only empowering seasoned hackers but also enabling low-skilled individuals to execute complex attacks without extensive technical expertise.
Check Point Research warns that the emergence of these unregulated AI models represents a significant challenge for cybersecurity professionals.
Without proactive measures to address vulnerabilities and prevent misuse, organizations risk exposure to malware development, financial fraud, and other cyber threats fueled by generative AI advancements.
As the race to develop next-generation AI tools continues, prioritizing security will be critical in mitigating the risks associated with their exploitation.