Sunday, February 9, 2025
HomeCyber AIAI Worm Developed by Researchers Spreads Automatically Between AI Agents

AI Worm Developed by Researchers Spreads Automatically Between AI Agents

Published on

SIEM as a Service

Follow Us on Google News

Researchers have developed what they claim to be one of the first generative AI worms, named Morris II, capable of autonomously spreading between AI systems.

This new form of cyberattack, reminiscent of the original Morris worm that wreaked havoc on the internet in 1988, signifies a potential shift in the landscape of cybersecurity threats.

The research, led by Ben Nassi from Cornell Tech, along with Stav Cohen and Ron Bitton, demonstrates the worm’s ability to infiltrate generative AI email assistants, extracting data and disseminating spam, thereby breaching security measures of prominent AI models like ChatGPT and Gemini.

The Rise of Generative AI and Its Vulnerabilities

As generative AI systems, such as OpenAI’s ChatGPT and Google’s Gemini, become increasingly sophisticated and integrated into various applications—ranging from mundane tasks like calendar bookings to more complex operations—so too does the potential for these systems to be exploited.

The researchers’ creation of the Morris II worm underscores a novel cyber threat that leverages the interconnectedness and autonomy of AI ecosystems.

A team of researchers has developed one of the earliest examples of generative AI worms, first reported by Wired.

These worms have the potential to spread from one system to another and may even be capable of stealing data or deploying malware during the process.

By employing adversarial self-replicating prompts, the worm can propagate through AI systems, hijacking them to execute unauthorized actions, such as data theft and malware deployment.

The implications of such a worm are far-reaching, posing significant risks to startups, developers, and tech companies that rely on generative AI systems.

The worm’s ability to spread autonomously between AI agents without detection introduces a new vector for cyberattacks, challenging existing security paradigms.

Security experts and researchers, including those from the CISPA Helmholtz Center for Information Security, emphasize the plausibility of these attacks and the urgent need for the development community to take these threats seriously.

Mitigating the Threat

Despite the alarming potential of AI worms, experts suggest that traditional security measures and vigilant application design can mitigate these risks.

Adam Swanda, a threat researcher at AI enterprise security firm Robust Intelligence, advocates for secure application design and the importance of human oversight in AI operations.

The risk of unauthorized activities can be significantly reduced by ensuring that AI agents do not perform actions without explicit approval.

Additionally, monitoring for unusual patterns, such as repetitive prompts within AI systems, can help in the early detection of potential threats.

Ben Nassi and his team also highlight the importance of awareness among developers and companies creating AI assistants.

Understanding the risks and implementing robust security measures are crucial steps in safeguarding against the exploitation of generative AI systems.

The research serves as a call to action for the AI development community to prioritize security in designing and deploying AI ecosystems.

The development of the Morris II worm by Nassi and his colleagues marks a pivotal moment in the evolution of cyber threats, highlighting the vulnerabilities inherent in generative AI systems.

The need for comprehensive security strategies becomes increasingly paramount as AI permeates various aspects of technology and daily life.

By fostering awareness and adopting proactive security measures, the AI development community can protect against the emerging threat of AI worms and ensure the safe and responsible use of generative AI technologies

You can block malware, including Trojans, ransomware, spyware, rootkits, worms, and zero-day exploits, with Perimeter81 malware protection. All are incredibly harmful, can wreak havoc, and damage your network.

Stay updated on Cybersecurity news, Whitepapers, and Infographics. Follow us on LinkedIn & Twitter

Divya
Divya
Divya is a Senior Journalist at GBhackers covering Cyber Attacks, Threats, Breaches, Vulnerabilities and other happenings in the cyber world.

Latest articles

UK Pressures Apple to Create Global Backdoor To Spy on Encrypted iCloud Access

United Kingdom has reportedly ordered Apple to create a backdoor allowing access to all...

Autonomous LLMs Reshaping Pen Testing: Real-World AD Breaches and the Future of Cybersecurity

Large Language Models (LLMs) are transforming penetration testing (pen testing), leveraging their advanced reasoning...

Securing GAI-Driven Semantic Communications: A Novel Defense Against Backdoor Attacks

Semantic communication systems, powered by Generative AI (GAI), are transforming the way information is...

Cybercriminals Target IIS Servers to Spread BadIIS Malware

A recent wave of cyberattacks has revealed the exploitation of Microsoft Internet Information Services...

Supply Chain Attack Prevention

Free Webinar - Supply Chain Attack Prevention

Recent attacks like Polyfill[.]io show how compromised third-party components become backdoors for hackers. PCI DSS 4.0’s Requirement 6.4.3 mandates stricter browser script controls, while Requirement 12.8 focuses on securing third-party providers.

Join Vivekanand Gopalan (VP of Products – Indusface) and Phani Deepak Akella (VP of Marketing – Indusface) as they break down these compliance requirements and share strategies to protect your applications from supply chain attacks.

Discussion points

Meeting PCI DSS 4.0 mandates.
Blocking malicious components and unauthorized JavaScript execution.
PIdentifying attack surfaces from third-party dependencies.
Preventing man-in-the-browser attacks with proactive monitoring.

More like this

UK Pressures Apple to Create Global Backdoor To Spy on Encrypted iCloud Access

United Kingdom has reportedly ordered Apple to create a backdoor allowing access to all...

Autonomous LLMs Reshaping Pen Testing: Real-World AD Breaches and the Future of Cybersecurity

Large Language Models (LLMs) are transforming penetration testing (pen testing), leveraging their advanced reasoning...

Securing GAI-Driven Semantic Communications: A Novel Defense Against Backdoor Attacks

Semantic communication systems, powered by Generative AI (GAI), are transforming the way information is...