
Researchers Propose An Invisible Backdoor Attack Dubbed DEBA


As deep neural networks (DNNs) become more prevalent, concerns over their security against backdoor attacks that implant hidden malicious functionalities have grown. 

Cybersecurity researchers (Wenmin Chen and Xiaowei Xu) recently proposed DEBA, an invisible backdoor attack leveraging singular value decomposition (SVD) to embed imperceptible triggers during model training, causing predefined malicious behaviors.

DEBA replaces the minor visual features of clean images with those from trigger images, preserving the clean images' major features so that poisoned samples remain indistinguishable from benign ones.


Invisible Backdoor Attack – DEBA

Extensive evaluations show that DEBA achieves high attack success rates while maintaining the perceptual quality of poisoned images.

Furthermore, DEBA proves robust against existing defense measures, evading detection and resisting removal.

The work highlights escalating threats of stealthy backdoor embeddings compromising the trustworthiness of deep learning models.

Backdoor attacks on deep neural networks (DNNs) began with visible patch triggers embedded into training images; subsequent implementations have become increasingly stealthy and invisible.
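For contrast with DEBA's invisible approach, here is a minimal numpy sketch of that early, visible style of trigger (in the spirit of BadNets-type patch attacks): a small solid square is stamped into a corner of the image and the sample is relabelled to an attacker-chosen class. The patch size, patch value, and target label below are illustrative assumptions, not details from the DEBA paper.

```python
import numpy as np

def stamp_patch_trigger(image: np.ndarray, patch_size: int = 4, value: float = 1.0) -> np.ndarray:
    """Stamp a small solid square into the bottom-right corner of an HxWxC image.

    This is the classic *visible* patch trigger; DEBA is designed to avoid
    exactly this kind of obvious artifact.
    """
    poisoned = image.copy()
    poisoned[-patch_size:, -patch_size:, :] = value
    return poisoned

# Illustrative usage on a random 32x32 RGB image with pixel values in [0, 1].
clean = np.random.rand(32, 32, 3)
poisoned = stamp_patch_trigger(clean)
target_label = 0  # attacker-chosen class; the poisoned sample is relabelled to it
```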


Trigger design has evolved from visible patch backdoors toward adversarial perturbations, label-consistent poisoning, edge-based dynamic triggers, and color shifts intended to make poisoned images look natural.

However, some of these earlier attacks still leave visual traces, so they are not completely invisible.

Besides this, recent research shows that backdoors can also be extended to face recognition systems used in real-world applications. 

What began as attacks that merely induced inference errors has shifted toward covert, resiliently embedded backdoor threats, which are far more dangerous for DNNs deployed across different domains because they undermine both the models' credibility and their security.

Yet it remains difficult to devise countermeasures against such disguised poisoning attacks.

As these stealthy backdoor attacks on deep neural networks (DNNs) continue to evolve, they have prompted further research into effective defenses.

Such efforts concentrate on three fronts: input-level defenses, model-level defenses, and output-level detection.

Input defenses analyze saliency maps and image artifacts for anomalies that suggest poisoning, while model defenses remove backdoors by pruning neurons, fine-tuning, or distilling models.

Output detection identifies infected models by measuring prediction randomness under input perturbations.
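The output-level idea can be made concrete with a small sketch in the spirit of STRIP-style detection: blend the suspect input with random clean images and measure the entropy of the model's predictions; an input carrying a dominant trigger tends to keep predicting the target class, so its average entropy stays unusually low. The dummy_model, blending weight alpha, and trial count below are stand-in assumptions for illustration, not parts of any specific defense implementation.

```python
import numpy as np

def prediction_entropy(model, x: np.ndarray, overlay_pool: np.ndarray,
                       n_trials: int = 16, alpha: float = 0.5) -> float:
    """Average entropy of model predictions when x is blended with random clean images.

    Abnormally low entropy under perturbation hints that a trigger is
    dominating the prediction.
    """
    entropies = []
    for _ in range(n_trials):
        overlay = overlay_pool[np.random.randint(len(overlay_pool))]
        blended = (1 - alpha) * x + alpha * overlay
        probs = np.clip(model(blended), 1e-12, 1.0)
        entropies.append(-np.sum(probs * np.log(probs)))
    return float(np.mean(entropies))

# Dummy stand-in classifier: softmax over mean pixel intensity per channel.
def dummy_model(image: np.ndarray) -> np.ndarray:
    logits = image.mean(axis=(0, 1))          # one "logit" per channel
    exp = np.exp(logits - logits.max())
    return exp / exp.sum()

clean_pool = np.random.rand(100, 32, 32, 3)
suspect = np.random.rand(32, 32, 3)
score = prediction_entropy(dummy_model, suspect, clean_pool)
print(f"perturbation entropy: {score:.3f}")   # flag if well below the clean baseline
```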

However, the race between attack and defense continues, and DEBA is one of the new attacks able to bypass existing defenses by embedding invisible triggers during the training process.

Overview framework of the SVD-based backdoor attack (Source: arXiv)

Given the escalation of surreptitious model corruption and the need to deploy DNNs reliably and securely, evaluating such attacks' robustness against the latest defenses is essential.

The proposed attack assumes the attacker can poison a portion of the training data without controlling the model architecture or training process.

During inference, attackers can only manipulate inputs. 
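A minimal sketch of that threat model, assuming an in-memory dataset of image and label arrays: the attacker poisons a chosen fraction of the training samples with some trigger-embedding function and relabels them to a target class, while the architecture and training loop remain untouched. The embed_trigger placeholder, poison_rate, and target_label here are illustrative; DEBA's actual SVD-based embedding is sketched further below.

```python
import numpy as np

def poison_training_set(images: np.ndarray, labels: np.ndarray,
                        embed_trigger, poison_rate: float = 0.1,
                        target_label: int = 0, seed: int = 0):
    """Return copies of (images, labels) with a fraction of samples poisoned.

    The attacker only touches the data; model architecture and training stay untouched.
    """
    rng = np.random.default_rng(seed)
    images, labels = images.copy(), labels.copy()
    n_poison = int(poison_rate * len(images))
    idx = rng.choice(len(images), size=n_poison, replace=False)
    for i in idx:
        images[i] = embed_trigger(images[i])   # trigger embedding, e.g. DEBA's SVD swap
        labels[i] = target_label               # label flipped to the attacker's target
    return images, labels

# Placeholder trigger: a faint additive pattern standing in for the SVD embedding below.
trigger_pattern = 0.02 * np.random.rand(32, 32, 3)
embed = lambda img: np.clip(img + trigger_pattern, 0.0, 1.0)

X = np.random.rand(500, 32, 32, 3)
y = np.random.randint(0, 10, size=500)
X_poisoned, y_poisoned = poison_training_set(X, y, embed, poison_rate=0.1, target_label=3)
```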

DEBA utilizes singular value decomposition (SVD) to decompose images into singular values and vectors capturing structural information.

By replacing the smallest singular values/vectors of clean images with those from trigger images, DEBA embeds imperceptible triggers, retaining the major features of clean images while injecting minor trigger details. 

This process yields poisoned images that cause targeted mispredictions during inference while appearing indistinguishable from benign samples.
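A minimal numpy sketch of that idea on a single image channel, assuming pixel values in [0, 1]: the clean image keeps its k largest singular components, while the remaining, smallest components are taken from the trigger image. The split point k and the per-channel treatment are illustrative assumptions; the paper's exact parameter choices may differ.

```python
import numpy as np

def svd_blend_channel(clean: np.ndarray, trigger: np.ndarray, k: int) -> np.ndarray:
    """Keep the k largest singular components of `clean`, fill the rest from `trigger`.

    Both inputs are 2-D arrays of the same shape (one image channel).
    The dominant structure of the clean image is preserved; only the minor
    components carry the trigger, which keeps the result imperceptible.
    """
    Uc, Sc, Vtc = np.linalg.svd(clean, full_matrices=False)
    Ut, St, Vtt = np.linalg.svd(trigger, full_matrices=False)

    # Major part: top-k components of the clean image.
    major = (Uc[:, :k] * Sc[:k]) @ Vtc[:k, :]
    # Minor part: remaining components taken from the trigger image.
    minor = (Ut[:, k:] * St[k:]) @ Vtt[k:, :]

    return np.clip(major + minor, 0.0, 1.0)

# Illustrative usage on random 32x32 "channels".
clean_ch = np.random.rand(32, 32)
trigger_ch = np.random.rand(32, 32)
poisoned_ch = svd_blend_channel(clean_ch, trigger_ch, k=24)
```

Because the dominant singular components carry most of an image's visible structure, swapping only the tail leaves the poisoned image looking essentially unchanged.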

The attack is evaluated under the threat model of data poisoning during training but restricted test-time access, demonstrating high attack success and robustness against existing defenses through its covert trigger embedding approach. 

DEBA conducts this invisible trigger embedding in the UV color channels for enhanced efficiency and imperceptibility.
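A sketch of that channel choice, assuming RGB inputs in [0, 1] and the standard BT.601 RGB-to-YUV conversion (the paper may use a different colour transform): the image is converted to YUV, the SVD swap from the previous sketch (repeated in compact form so this snippet stands alone) is applied only to the U and V chrominance channels, and the result is converted back to RGB. The conversion matrix and split point k are assumptions for illustration.

```python
import numpy as np

# BT.601 RGB -> YUV matrix (rows produce Y, U, V); its inverse recovers RGB.
RGB2YUV = np.array([[ 0.299,    0.587,    0.114  ],
                    [-0.14713, -0.28886,  0.436  ],
                    [ 0.615,   -0.51499, -0.10001]])
YUV2RGB = np.linalg.inv(RGB2YUV)

def svd_swap(clean: np.ndarray, trigger: np.ndarray, k: int) -> np.ndarray:
    """Top-k components from `clean`, remaining components from `trigger` (2-D arrays)."""
    Uc, Sc, Vtc = np.linalg.svd(clean, full_matrices=False)
    Ut, St, Vtt = np.linalg.svd(trigger, full_matrices=False)
    return (Uc[:, :k] * Sc[:k]) @ Vtc[:k, :] + (Ut[:, k:] * St[k:]) @ Vtt[k:, :]

def embed_in_uv(clean_rgb: np.ndarray, trigger_rgb: np.ndarray, k: int = 24) -> np.ndarray:
    """Embed the trigger's minor SVD components in the U and V channels only."""
    clean_yuv = clean_rgb @ RGB2YUV.T
    trigger_yuv = trigger_rgb @ RGB2YUV.T
    poisoned_yuv = clean_yuv.copy()
    for ch in (1, 2):                          # U and V chrominance channels
        poisoned_yuv[..., ch] = svd_swap(clean_yuv[..., ch], trigger_yuv[..., ch], k)
    return np.clip(poisoned_yuv @ YUV2RGB.T, 0.0, 1.0)

# Illustrative usage on random 32x32 RGB images.
clean = np.random.rand(32, 32, 3)
trigger = np.random.rand(32, 32, 3)
poisoned = embed_in_uv(clean, trigger)
```

Chrominance perturbations are generally less perceptible to the human eye than luminance changes, which is consistent with the efficiency and imperceptibility claim above.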

Comprehensive experiments demonstrate DEBA’s superior attack success rates and invisibility compared to prior attacks.


