Friday, April 11, 2025
HomeCyber Security NewsPoisoned AI Coding, Assistant Tools Opens Application to Hack Attack

Poisoned AI Coding, Assistant Tools Opens Application to Hack Attack

Published on

SIEM as a Service

Follow Us on Google News

AI (Artificial Intelligence) has significantly revolutionized software engineering with several advanced AI tools like ChatGPT and GitHub Copilot, which help boost developers’ efficiency. 

Besides this, two types of AI-powered coding assistant tools emerged in recent times, and here we have mentioned them:-

  • CODE COMPLETION Tool 
  • CODE GENERATION Tool

Cybersecurity researchers Sanghak Oh, Kiho Lee, Seonhye Park, Doowon Kim, Hyoungshick Kim from the following universities recently identified that poisoned AI coding assistant tools open the application to hack attack:-

- Advertisement - Google News
  • Department of Electrical and Computer Engineering, Sungkyunkwan University, Republic of Korea
  • Department of Electrical Engineering and Computer Science, University of Tennessee, USA

Poisoned AI Coding Assistant

AI coding assistants are transforming software engineering, but they are vulnerable to poisoning attacks. Attackers inject malicious code snippets into training data, leading to insecure suggestions. 

This poses real-world risks, as researchers’ study with 238 participants and 30 professional developers reveals. The survey shows widespread tool adoption, but developers may underestimate poisoning risks. 

In-lab studies confirm that poisoned tools can influence developers to include insecure code, highlighting the urgency for education and enhanced coding practices in the AI-powered coding landscape.

Code and model poisoning attacks (Source - Arxiv)
Code and model poisoning attacks (Source – Arxiv)

Attackers aim to deceive developers through generic backdoor poisoning attacks on code-suggestion deep learning models. This method manipulates models to suggest malicious code without degrading overall performance and is hard to detect. 

Attackers leverage access to the model or its dataset, often sourced from open repositories like GitHub, and here, the detection is challenging due to model complexity. 

Mitigation strategies include:-

  • Improved code review
  • Secure coding practices
  • Fuzzing

Static analysis tools can help detect poisoned samples, but attackers may craft stealthy versions. After the tasks, participants had an exit interview with two sections:- 

  • 1. Demographic and security knowledge assessment, including a quiz and confidence ratings. 
  • 2. Follow-up questions explored intentions, rationale, and awareness of vulnerabilities and security threats, such as poisoning attacks in AI-powered coding assistants.

Recommendations

Here below we have mentioned all the recommendations:-

  • Developer’s Perspective.
  • Software Companies’ Perspective.
  • Security Researchers’ Perspective.
  • User Studies with AI-Powered Coding Tools.
Tushar Subhra
Tushar Subhra
Tushar is a Cyber security content editor with a passion for creating captivating and informative content. With years of experience under his belt in Cyber Security, he is covering Cyber Security News, technology and other news.

Latest articles

Threat Actors Leverage Email Bombing to Evade Security Tools and Conceal Malicious Activity

Threat actors are increasingly using email bombing to bypass security protocols and facilitate further...

Threat Actors Launch Active Attacks on Semiconductor Firms Using Zero-Day Exploits

Semiconductor companies, pivotal in the tech industry for their role in producing components integral...

Hackers Exploit Router Flaws in Ongoing Attacks on Enterprise Networks

Enterprises are facing heightened cyber threats as attackers increasingly target network infrastructure, particularly routers,...

Threat Actors Exploit Legitimate Crypto Packages to Deliver Malicious Code

Threat actors are using open-source software (OSS) repositories to install malicious code into trusted...

Resilience at Scale

Why Application Security is Non-Negotiable

The resilience of your digital infrastructure directly impacts your ability to scale. And yet, application security remains a critical weak link for most organizations.

Application Security is no longer just a defensive play—it’s the cornerstone of cyber resilience and sustainable growth. In this webinar, Karthik Krishnamoorthy (CTO of Indusface) and Phani Deepak Akella (VP of Marketing – Indusface), will share how AI-powered application security can help organizations build resilience by

Discussion points


Protecting at internet scale using AI and behavioral-based DDoS & bot mitigation.
Autonomously discovering external assets and remediating vulnerabilities within 72 hours, enabling secure, confident scaling.
Ensuring 100% application availability through platforms architected for failure resilience.
Eliminating silos with real-time correlation between attack surface and active threats for rapid, accurate mitigation

More like this

Threat Actors Leverage Email Bombing to Evade Security Tools and Conceal Malicious Activity

Threat actors are increasingly using email bombing to bypass security protocols and facilitate further...

Threat Actors Launch Active Attacks on Semiconductor Firms Using Zero-Day Exploits

Semiconductor companies, pivotal in the tech industry for their role in producing components integral...

Hackers Exploit Router Flaws in Ongoing Attacks on Enterprise Networks

Enterprises are facing heightened cyber threats as attackers increasingly target network infrastructure, particularly routers,...