
The State of AI Malware and Defenses Against It


AI has recently joined the list of things that keep cybersecurity leaders awake at night. The growing popularity of, and easy access to, large language models (LLMs) such as ChatGPT, DeepSeek, and Gemini have enabled threat actors to scale and personalize their attacks.

Organizations need to adapt their cyber defenses to this trend. But before going all in on strategies to fight AI malware, one needs to understand its current state: how developed it is and what makes it different.

The Potential Dangers of AI Malware

One of the dangers of AI malware is its ability to evade detection through polymorphism, meaning the malware can keep changing its code and behavior in real time. Imagine a criminal who could change their face and clothes every few minutes to blend into whatever environment they're in; they would be extremely hard to catch.


Similarly, this type of malware undermines the traditional threat detection strategies the cybersecurity community has long relied on, such as signature-based antivirus software.
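
To make that limitation concrete, here is a minimal, illustrative sketch (not taken from any of the tools discussed in this article) of why exact-match signature checks struggle against polymorphic code: the detector only recognizes byte sequences it has hashed before, so even a trivial mutation of the same logic slips past it. The payload strings and the signature list are hypothetical placeholders.

```python
# Illustrative sketch: signature (hash) matching vs. a trivially mutated payload.
# The byte strings below are hypothetical placeholders, not real malware.
import hashlib

ORIGINAL = b"connect(); harvest_credentials(); exfiltrate();"
# A polymorphic engine might rename symbols or re-encode the same behavior.
MUTATED = b"c0nnect(); grab_creds(); send_out();"

# Signature database built from previously analyzed samples.
known_bad_signatures = {hashlib.sha256(ORIGINAL).hexdigest()}

def signature_match(sample: bytes) -> bool:
    """Classic signature check: flag only exact, previously seen hashes."""
    return hashlib.sha256(sample).hexdigest() in known_bad_signatures

print(signature_match(ORIGINAL))  # True  - the known variant is caught
print(signature_match(MUTATED))   # False - one mutation and the signature misses
```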

Another challenge is AI malware’s self-learning capabilities. AI algorithms can enable the malware to learn from past intrusion attempts, identify other vulnerabilities, and optimize attack methods. It can, for instance, learn the best time to attack and the optimal method to use on a specific target system.

As a result, AI can automate the mass production of sophisticated and unique malware variants, effectively shortening the time it takes for threat actors to plan and launch attacks. Finding and exploiting zero-day vulnerabilities can also be made faster—there already have been precedents of AI discovering zero-days, and cybercriminals can then automate the process of writing exploit code using the very same AI. 

Proof-of-Concept Examples of AI-Generated Malware 

But that’s theory—what about practice? Several security researchers have already demonstrated the dangers of AI malware—increased intelligence, greater adaptability, and faster propagation. Below are some examples:

  • BlackMamba: This is a polymorphic keylogger developed by the security company HYAS to illustrate how LLMs (specifically ChatGPT) can be leveraged to create malicious code. BlackMamba fetches malicious code from ChatGPT during runtime and executes it in memory while avoiding traditional file-based detection. The captured keystrokes are then sent to a malicious Microsoft Teams channel. 
  • EyeSpy: Another proof-of-concept by HYAS, EyeSpy is designed to autonomously make informed decisions and synthesize its capabilities to conduct cyberattacks and continuously morph to avoid detection. It can strategize attacks completely on its own. Although not fully weaponized, it demonstrates the potential of AI-driven threats that can choose their targets and attack methods using an attacker mindset. 
  • DeepSeek tricked into writing malware: Researchers at Tenable were able to bypass DeepSeek’s guardrails using a jailbreak technique, prompting the LLM platform to generate a keylogger and ransomware. While the code required manual editing to be fully functional, DeepSeek was still able to provide a useful compilation of techniques and the basic malware structure that can help individuals with minimal coding skills to quickly create malicious code. 
  • OpenAI Operator agents can potentially streamline phishing attacks: Symantec’s Threat Hunter Team was able to successfully instruct OpenAI Operator to identify a target person, obtain the target’s nonpublic email address, create a malicious PowerShell script, and send a convincing phishing email—with minimal human intervention.
  • Morris II: Cornell Tech researchers developed an AI worm that not only can steal sensitive data but can also distribute malicious emails using compromised AI-powered email assistants. This demonstrates the vulnerability of LLM-based chatbot services to cyberattacks. 

Attacks Leveraging AI in the Wild

While most of the proofs of concept cited above have not yet been observed in the wild, the cybersecurity community has already seen malicious use of AI in action.

AI-Enabled Phishing Campaigns

The nation-state threat group known as Fancy Bear, APT28, or Forest Blizzard (among other names) has been operating since 2008 and is a prime example of how attacker tactics evolve over time. In recent months, the group has been observed using AI to craft legitimate-looking phishing emails that mimic official government communications, making them more believable. 

While Fancy Bear does not appear to be using polymorphic AI malware, AI capabilities still allow it to quickly detect—and immediately exploit—vulnerabilities once the target falls for the phishing lure.

The threat group is not alone. AI-assisted phishing is becoming more common as AI tools allow attackers to generate large numbers of phishing emails and then send those emails automatically. 

What makes matters worse is that these emails are potentially more effective than conventional ones. A study conducted by the Institute of Electrical and Electronics Engineers (IEEE) revealed that people are more likely to fall for an AI-generated phishing email. In the study, conventional phishing emails (those used in the past) had a click-through rate of 19% to 28%, while emails crafted using GPT-4 had a much higher click-through rate of 30% to 44%. This figure becomes even higher when AI and human methods are combined (43%).

AI-Generated Malicious GitHub Repositories

Technically proficient Internet users aren't safe from AI-enabled attacks either. GitHub, the world's main platform for developer collaboration, has recently been used to distribute malware, specifically through AI-generated fake repositories.

These repositories mimic popular software, with documentation and README files seemingly generated by AI, and lure users into downloading a malicious ZIP archive containing malware such as SmartLoader or Lumma Stealer.

Rogue LLMs

In addition to abusing publicly available AI tools, cybercriminals turn to uncensored LLM chatbots, such as GhostGPT and WormGPT, which offer a broader spectrum of malicious use cases.

These tools are uncensored, meaning they are not constrained by the safety and ethical guidelines of mainstream AI chatbot models. They can generate content that other AI tools would typically flag as harmful and refuse to produce. That lowers the barrier to entry for cybercrime, since even novice hackers can create sophisticated attacks.

Is AI Malware a Time Bomb or Just a Scare?

Despite the increased usage of AI in malicious campaigns, some security experts do not buy into the idea that AI malware poses a catastrophic threat. Cisco Talos’s Martin Lee says it’s “overblown,” although that was in reference to the threat of disinformation from deepfakes and other AI-generated content. 

When it comes to AI malware, IBM’s researchers say we also shouldn’t be too worried. According to Kevin Henson, Lead Malware Reverse Engineer with IBM X-Force Threat Intelligence, the concepts behind BlackMamba and EyeSpy are not new, and security teams have existing defense strategies against them. 

While LLMs have advanced coding skills, Henson says, “using ChatGPT [and other AI tools] to generate malware has limitations because the code is generated by models that have been trained on a set of data.” That means machine-generated code will not be as complex as that written by humans. 

However, some other experts are more worried. Goldilock, a NATO-backed security organization, warned that in the next two years the world will see a rise in AI-powered malware targeting critical infrastructure in large-scale, coordinated cyberwarfare.

How To Defend Against AI-Generated Malware and AI-Powered Attacks

Regardless of which viewpoint resonates more with you, the reality is that AI malware is likely to become much more widespread, and organizations need to start preparing for it now. Adopting the following strategies can help strengthen an organization’s security posture against AI-generated threats:

  • Predictive threat intelligence: Organizations need to use AI to fight against AI malware – it can be used to analyze patterns and trends and identify emerging threats before they are fully weaponized. Leveraging advanced analytics for predictive threat intelligence and blocking potential threats at early stages ensures organizations are not caught off guard by evolving AI-driven threats.
  • Behavioral analytics and anomaly detection: This involves establishing a baseline of normal user and system behavior and continuously monitoring for deviations from that baseline (a minimal sketch of the idea follows this list). In the context of AI malware, behavioral analytics can be effective because AI-powered attacks are likely to exhibit anomalies that traditional security measures might miss.
  • Cybersecurity training: Given that phishing and social engineering remain top attack vectors in AI-related threats, comprehensive cybersecurity training continues to be essential. But this time, training would need to include the unique dangers of AI, such as AI malware, AI-assisted phishing, and others that may emerge over time. 
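
As a rough illustration of the baseline-and-deviation idea mentioned in the list above, the sketch below flags activity that strays too far from previously observed behavior. The metric (outbound requests per minute), the sample values, and the z-score threshold are all illustrative assumptions, not figures or recommendations from this article; real deployments would track many behavioral features and learn baselines continuously.

```python
# Illustrative sketch of baseline-and-deviation anomaly detection.
# The metric, sample values, and threshold are hypothetical.
import statistics

# Baseline: outbound requests per minute observed for a host during normal operation.
baseline = [12, 15, 11, 14, 13, 16, 12, 15, 14, 13]
mean = statistics.mean(baseline)
stdev = statistics.stdev(baseline)

def is_anomalous(observed: float, z_threshold: float = 3.0) -> bool:
    """Flag activity deviating from the learned baseline by more than z_threshold sigma."""
    z = abs(observed - mean) / stdev
    return z > z_threshold

print(is_anomalous(14))   # False - within normal variation
print(is_anomalous(240))  # True  - a burst of outbound traffic worth investigating
```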


