Friday, May 24, 2024

Researchers Propose An Invisible Backdoor Attack Dubbed DEBA

As deep neural networks (DNNs) become more prevalent, concerns over their security against backdoor attacks that implant hidden malicious functionalities have grown. 

Cybersecurity researchers Wenmin Chen and Xiaowei Xu recently proposed DEBA, an invisible backdoor attack that leverages singular value decomposition (SVD) to embed imperceptible triggers during model training, causing predefined malicious behaviors at inference time.

DEBA replaces minor visual features of clean images with those from trigger images, preserving the major features so poisoned images remain indistinguishable from benign ones.

Invisible Backdoor Attack – DEBA

Extensive evaluations show that DEBA achieves high attack success rates while maintaining the perceptual quality of poisoned images.

Furthermore, DEBA demonstrates robustness, evading and resisting existing defense measures against backdoor attacks on DNNs.

The work highlights the escalating threat of stealthy backdoor embeddings that compromise the trustworthiness of deep learning models.

Backdoor attacks on deep neural networks (DNNs) began with visibly embedded patch triggers as a starting point, and subsequent implementations have become increasingly stealthy and invisible.


Trigger design has evolved from visible backdoor patches to adversarial perturbations, label-consistent poisoning, edge-based dynamic triggers, and natural-looking color shifts.

However, some of these attacks still leave visual traces that give them away, so they are not completely invisible.

Besides this, recent research shows that backdoors can also be extended to face recognition systems used in real-world applications. 

Initially aimed at merely inducing inference errors, these attacks have shifted toward covert, resiliently embedded backdoor threats, which are more dangerous for DNNs deployed across different domains because they undermine both credibility and security.

Yet it remains difficult to devise countermeasures against such disguised poisoning attacks.

As these stealthy backdoor attacks on deep neural networks (DNNs) continue to evolve, they have prompted further research into effective defenses.

These efforts concentrate on three fronts: input defenses, model defenses, and output detection.

Input defenses analyze saliency maps and artifacts for poisoning-suspected anomalies. Model defenses remove backdoors by pruning neurons, fine-tuning, or distilling models.

Output detection identifies infected models by measuring prediction randomness under input perturbations.
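
A well-known instance of this output-detection idea is STRIP-style screening, which superimposes a suspect input on random clean images and flags abnormally confident (low-entropy) predictions. The sketch below is a minimal illustration of that idea, not DEBA- or paper-specific code; the `model_predict` callable returning softmax probabilities is an assumption for illustration.

```python
import numpy as np

def strip_entropy(model_predict, suspect, clean_pool, n_overlays=16):
    """STRIP-style check: blend the suspect input with random clean images
    and measure the entropy of the model's predictions. A backdoored input
    tends to keep predicting the attacker's target class regardless of the
    overlay, producing abnormally LOW average entropy."""
    entropies = []
    for _ in range(n_overlays):
        overlay = clean_pool[np.random.randint(len(clean_pool))]
        blended = 0.5 * suspect + 0.5 * overlay       # simple superimposition
        probs = model_predict(blended[None, ...])[0]  # assumed softmax output
        probs = np.clip(probs, 1e-12, 1.0)
        entropies.append(-np.sum(probs * np.log(probs)))
    return float(np.mean(entropies))  # flag as poisoned if below a threshold
```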

However, the race between attack and defense continues, and DEBA is one of the new attacks that can bypass existing defenses by embedding invisible triggers during the training process.

Overview framework of SVD-based backdoor attack (Source – Arxiv)

Given the escalation of surreptitious model corruption and the need to deploy DNNs reliably and securely, evaluating the robustness of emerging attacks against the latest defenses is important.

The proposed attack assumes the attacker can poison a portion of the training data without controlling the model architecture or training process.

During inference, attackers can only manipulate inputs. 
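
Under this threat model, the poisoning step can be pictured as the following minimal sketch (hypothetical names; the paper does not publish this exact code), assuming NumPy arrays for images and labels and an `embed_fn` that implants the trigger, such as the SVD routine sketched further below:

```python
import numpy as np

def poison_dataset(images, labels, embed_fn, target_class, rate=0.05):
    """Poison a fraction `rate` of the training set: embed the trigger via
    `embed_fn` and relabel those samples to the attacker's target class.
    The victim's model architecture and training loop remain untouched."""
    idx = np.random.choice(len(images), size=int(rate * len(images)),
                           replace=False)
    poisoned, new_labels = images.copy(), labels.copy()
    for i in idx:
        poisoned[i] = embed_fn(poisoned[i])
        new_labels[i] = target_class
    return poisoned, new_labels
```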

DEBA utilizes singular value decomposition (SVD) to decompose images into singular values and vectors capturing structural information.

By replacing the smallest singular values/vectors of clean images with those from trigger images, DEBA embeds imperceptible triggers, retaining the major features of clean images while injecting minor trigger details. 

This process enables generating poisoned images effective for targeted mispredictions during inference while appearing indistinguishable from benign samples. 
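
A minimal NumPy sketch of this substitution follows, assuming single-channel images normalized to [0, 1]; the number of smallest singular triplets to swap, `k`, is an illustrative assumption, and the paper's exact parameterization may differ.

```python
import numpy as np

def svd_embed_trigger(clean, trigger, k=8):
    """Decompose both images with SVD and replace the clean image's k
    SMALLEST singular values/vectors with the trigger image's, keeping
    the dominant components (and hence the visual appearance) of the
    clean image while injecting the trigger's minor details."""
    U_c, s_c, Vt_c = np.linalg.svd(clean, full_matrices=False)
    U_t, s_t, Vt_t = np.linalg.svd(trigger, full_matrices=False)
    # Singular values come sorted in descending order, so the trailing
    # k triplets are the smallest ones.
    U_c[:, -k:], s_c[-k:], Vt_c[-k:, :] = U_t[:, -k:], s_t[-k:], Vt_t[-k:, :]
    poisoned = U_c @ np.diag(s_c) @ Vt_c
    return np.clip(poisoned, 0.0, 1.0)  # assumes inputs normalized to [0, 1]
```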

The attack is evaluated under the threat model of data poisoning during training but restricted test-time access, demonstrating high attack success and robustness against existing defenses through its covert trigger embedding approach. 

DEBA conducts this invisible trigger embedding in the UV color channels for enhanced efficiency and imperceptibility.
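
Building on the sketch above, the chrominance-channel selection could look like the following, assuming uint8 RGB inputs and OpenCV for the color-space conversion (again an illustrative sketch, not the authors' code):

```python
import cv2  # OpenCV, used here only for RGB <-> YUV conversion
import numpy as np

def poison_in_uv(clean_rgb, trigger_rgb, k=8):
    """Apply the SVD substitution only to the U and V chrominance channels,
    leaving luminance (Y) untouched; the eye is less sensitive to
    chrominance, which helps keep the trigger imperceptible."""
    clean = cv2.cvtColor(clean_rgb, cv2.COLOR_RGB2YUV).astype(np.float32) / 255.0
    trig = cv2.cvtColor(trigger_rgb, cv2.COLOR_RGB2YUV).astype(np.float32) / 255.0
    for ch in (1, 2):  # channel 0 is Y; channels 1 and 2 are U and V
        clean[..., ch] = svd_embed_trigger(clean[..., ch], trig[..., ch], k)
    out = (clean * 255.0).clip(0, 255).astype(np.uint8)
    return cv2.cvtColor(out, cv2.COLOR_YUV2RGB)
```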

Comprehensive experiments demonstrate DEBA’s superior attack success rates and invisibility compared to prior attacks.


Tushar Subhra Dutta
Tushar is a cybersecurity content editor with a passion for creating captivating and informative content. With years of experience in cybersecurity, he covers cybersecurity news, technology, and other news.
