Cybersecurity researchers at Kaspersky have identified a new supply chain vulnerability emerging from the widespread adoption of AI-generated code.
As AI assistants take on a growing share of software development (Microsoft CTO Kevin Scott has predicted AI will write 95% of code within five years), a phenomenon called “slopsquatting” poses a significant security threat.
This risk stems from AI systems hallucinating non-existent software dependencies that attackers could exploit by creating malicious packages with these phantom names.
A comprehensive study examining 576,000 Python and JavaScript code samples generated by 16 popular large language models (LLMs) revealed concerning patterns of hallucinated dependencies.
While GPT-4 and GPT-4 Turbo demonstrated the lowest rates of fictitious libraries (under 5%), other models showed significantly higher rates: DeepSeek models exceeded 15%, and CodeLlama 7B topped 25%.
Even adjusting sampling parameters such as temperature and top-p failed to reduce hallucinations to negligible levels.
The research also found that JavaScript code contained more phantom dependencies (21%) than Python (16%), and that prompts involving newer technologies triggered roughly 10% more references to non-existent packages.
Most alarming was the discovery that 43% of hallucinated package names consistently reappeared across multiple generation attempts.
The hallucinated names followed recognizable patterns: 13% resembled typos, differing by a single character from legitimate packages; 9% borrowed names from other programming languages; and 38% used plausible-sounding but non-existent names.
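To illustrate how the near-typo class of names can be caught, the sketch below flags any candidate package name within an edit distance of one from a short list of popular packages. It is a minimal illustration, not part of the study: the reference list is a placeholder, and the distance function is a standard dynamic-programming Levenshtein implementation.

```python
# Heuristic check for "near-typo" hallucinated names: flag any candidate package
# name that sits within edit distance 1 of a well-known package.

POPULAR = ["requests", "numpy", "pandas", "flask", "django"]  # placeholder reference list

def edit_distance(a: str, b: str) -> int:
    """Classic Levenshtein distance via dynamic programming."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, start=1):
        curr = [i]
        for j, cb in enumerate(b, start=1):
            curr.append(min(
                prev[j] + 1,                 # deletion
                curr[j - 1] + 1,             # insertion
                prev[j - 1] + (ca != cb),    # substitution
            ))
        prev = curr
    return prev[-1]

def near_typos(candidate: str) -> list[str]:
    """Return popular packages within edit distance 1 of the candidate name."""
    return [p for p in POPULAR if 0 < edit_distance(candidate.lower(), p) <= 1]

print(near_typos("requsts"))  # hypothetical hallucinated name -> ['requests']
```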
Slopsquatting Attacks
This new attack vector, named “slopsquatting,” exploits the predictable nature of AI hallucinations.
Unlike traditional typosquatting, which targets human typing errors, slopsquatting capitalizes on repeated AI mistakes.
Attackers can run popular AI models, identify frequently hallucinated package names, and publish malicious libraries using these names in public repositories.
The risk intensifies with “vibe coding” practices, in which developers give an AI assistant high-level instructions and review the resulting code only minimally.
Should developers or automated systems attempt to install all referenced packages without verification, malicious dependencies could compromise the entire supply chain (ATT&CK T1195.001).
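A basic form of that verification is to check each referenced dependency against the public registry before installing it. The sketch below is a minimal example of this idea for Python packages, querying the PyPI JSON API to flag names that do not exist or were published only recently; the package list and age threshold are assumptions for illustration, not part of the reported research.

```python
import sys
from datetime import datetime, timezone

import requests  # used to query the public PyPI JSON API

# Hypothetical list of dependencies extracted from AI-generated code.
SUGGESTED_PACKAGES = ["requests", "flask-jwt-simple-auth", "numpy"]

# Packages younger than this many days are treated as suspicious,
# since slopsquatting packages tend to be newly registered.
MIN_AGE_DAYS = 90

def check_package(name: str) -> str:
    """Return a verdict for a single package name using PyPI metadata."""
    resp = requests.get(f"https://pypi.org/pypi/{name}/json", timeout=10)
    if resp.status_code == 404:
        return "MISSING (possible hallucination)"
    resp.raise_for_status()
    data = resp.json()

    # Find the earliest upload time across all released files.
    upload_times = [
        datetime.fromisoformat(f["upload_time_iso_8601"].replace("Z", "+00:00"))
        for files in data["releases"].values()
        for f in files
    ]
    if not upload_times:
        return "NO RELEASES (suspicious)"
    age_days = (datetime.now(timezone.utc) - min(upload_times)).days
    if age_days < MIN_AGE_DAYS:
        return f"NEW PACKAGE ({age_days} days old, review before installing)"
    return "OK"

if __name__ == "__main__":
    failed = False
    for pkg in SUGGESTED_PACKAGES:
        verdict = check_package(pkg)
        print(f"{pkg}: {verdict}")
        failed = failed or verdict != "OK"
    sys.exit(1 if failed else 0)
```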
With nearly 20,000 malicious libraries discovered in open-source repositories over the past year, security experts anticipate threat actors will systematically exploit this vulnerability.
AI-Assisted Development
Kaspersky recommends five essential measures to mitigate slopsquatting risks:
- Implement automated source-code scanning and static security testing in development pipelines to verify all dependencies and eliminate embedded secrets or tokens.
- Incorporate additional AI validation cycles where models check their own code for errors and analyze the legitimacy of referenced packages. While this reduced hallucinations to 2.4% for DeepSeek and 9.3% for CodeLlama in testing, these rates remain too high for exclusive reliance.
- Prohibit AI assistants from coding critical components and establish formal code review processes with specialized checklists for AI-generated code.
- Create and enforce a restricted list of trusted dependencies, ideally limiting available packages to those in pre-approved internal repositories (a minimal allow-list check is sketched after this list).
- Train developers specifically on AI security risks in the development context to increase awareness and vigilance.
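One way to act on the allow-list recommendation above is to fail the build whenever a requirements file references a package outside the approved set. The snippet below is a minimal sketch of such a CI gate; the file path and the approved-package set are placeholders, not part of Kaspersky's guidance.

```python
from pathlib import Path

# Hypothetical allow-list of vetted dependencies, e.g. mirrored in an internal repository.
APPROVED = {"requests", "numpy", "flask", "sqlalchemy"}

def parse_requirements(path: str) -> set[str]:
    """Extract bare package names from a requirements.txt-style file."""
    names = set()
    for line in Path(path).read_text().splitlines():
        line = line.split("#", 1)[0].strip()      # drop comments and whitespace
        if not line or line.startswith("-"):      # skip options like -r / --index-url
            continue
        # Strip version specifiers and extras: "pkg[extra]>=1.0" -> "pkg"
        for sep in ("[", "=", "<", ">", "~", "!", ";", " "):
            line = line.split(sep, 1)[0]
        names.add(line.lower())
    return names

if __name__ == "__main__":
    unapproved = parse_requirements("requirements.txt") - APPROVED
    if unapproved:
        raise SystemExit(f"Unapproved dependencies found: {sorted(unapproved)}")
    print("All dependencies are on the approved list.")
```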
These precautions become increasingly critical as organizations integrate AI assistants into software development workflows, potentially exposing themselves to this emerging threat vector.