
Researchers Propose An Invisible Backdoor Attack Dubbed DEBA

As deep neural networks (DNNs) become more prevalent, concerns over their security against backdoor attacks that implant hidden malicious functionalities have grown. 

Cybersecurity researchers (Wenmin Chen and Xiaowei Xu) recently proposed DEBA, an invisible backdoor attack leveraging singular value decomposition (SVD) to embed imperceptible triggers during model training, causing predefined malicious behaviors.

DEBA replaces the minor visual features of clean images with those of a trigger image, preserving the clean images' major features so the poisoned samples remain indistinguishable.

Invisible Backdoor Attack – DEBA

Extensive evaluations show that DEBA achieves high attack success rates while maintaining the perceptual quality of poisoned images.

Furthermore, DEBA proves robust against existing defenses for DNNs, largely evading detection and resisting removal.

The work highlights escalating threats of stealthy backdoor embeddings compromising the trustworthiness of deep learning models.

Backdoor attacks on deep neural networks (DNNs) started out as visible patches embedded in training data; subsequent implementations have become increasingly stealthy, aiming for full invisibility.


Trigger designs have evolved from visible patches to adversarial perturbations, label-consistent poisoning, edge-based dynamic triggers, and natural-looking color shifts.

However, many of these earlier attacks still leave visual traces, so they are not completely invisible.

Besides this, recent research shows that backdoors can also be extended to face recognition systems used in real-world applications. 

Initially aimed at causing simple inference errors, these attacks have shifted toward covert, resiliently embedded backdoor threats that are more dangerous for DNNs deployed across different domains, raising both trust and security concerns.

Yet it remains difficult to devise countermeasures against such disguised poisoning attacks.

As these stealthy backdoor attacks on deep neural networks (DNNs) continue to evolve, they have spurred further research into effective defenses.

Such efforts concentrate on three fronts: input defenses, model defenses, and output detection.

Input defenses analyze saliency maps and artifacts for poisoning-suspected anomalies. Model defenses remove backdoors by pruning neurons, fine-tuning, or distilling models.

Output detection identifies infected models by measuring prediction randomness under input perturbations.
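
As an illustration of the output-detection idea described above (in the style of the well-known STRIP defense), the sketch below superimposes random clean images onto a suspect input and averages the entropy of the model's predictions; an input carrying a strong trigger tends to keep its target prediction under blending, producing abnormally low entropy. The Keras-like `model.predict` call, the threshold calibration, and all names are illustrative assumptions, not details from the DEBA paper.

```python
import numpy as np

def mean_prediction_entropy(model, x, clean_overlays, alpha=0.5):
    """Blend a suspect input with random clean images and average the entropy
    of the model's class probabilities across the blends (STRIP-style check).
    `model` is assumed to expose a Keras-like predict() returning softmax probs."""
    entropies = []
    for overlay in clean_overlays:
        blended = np.clip(alpha * x + (1 - alpha) * overlay, 0.0, 1.0)
        probs = model.predict(blended[None, ...])[0]
        probs = np.clip(probs, 1e-12, 1.0)              # avoid log(0)
        entropies.append(-float(np.sum(probs * np.log(probs))))
    return float(np.mean(entropies))

# A clean input usually yields high entropy once perturbed, while an input whose
# prediction is pinned by a backdoor trigger stays confident; flag inputs whose
# mean entropy falls below a threshold calibrated on known-clean data.
```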

However, the race between attack and defense continues, and DEBA is one of the new attacks that can bypass existing defenses by embedding invisible triggers during training.

Overview framework of SVD-based backdoor attack (Source – Arxiv)

Given the escalation of surreptitious model corruption and the need to deploy DNNs reliably and securely, evaluating the robustness of new attacks against the latest defenses is essential.

The proposed attack assumes the attacker can poison a portion of the training data without controlling the model architecture or training process.

During inference, attackers can only manipulate inputs. 
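
A minimal sketch of this threat model, under the stated assumptions: the attacker silently poisons a fraction of the training set by applying a trigger-embedding function (such as the SVD-based one sketched further below) and relabeling the affected samples to a target class, then hands the data to an untouched training pipeline. The poison rate, target label, and function names are hypothetical choices for illustration.

```python
import random

def poison_dataset(images, labels, trigger_img, embed_fn,
                   poison_rate=0.1, target_label=0):
    """Return copies of (images, labels) in which a random fraction of the
    samples carry the trigger and are relabeled to the attacker's target class.
    `embed_fn` is any trigger-embedding routine, e.g. the SVD sketch below."""
    poisoned_images, poisoned_labels = list(images), list(labels)
    n_poison = int(len(images) * poison_rate)
    for idx in random.sample(range(len(images)), n_poison):
        poisoned_images[idx] = embed_fn(poisoned_images[idx], trigger_img)
        poisoned_labels[idx] = target_label
    return poisoned_images, poisoned_labels
```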

DEBA utilizes singular value decomposition (SVD) to decompose images into singular values and vectors capturing structural information.

By replacing the smallest singular values/vectors of clean images with those from trigger images, DEBA embeds imperceptible triggers, retaining the major features of clean images while injecting minor trigger details. 

This process produces poisoned images that cause targeted mispredictions at inference while appearing indistinguishable from benign samples. 

The attack is evaluated under the threat model of data poisoning during training but restricted test-time access, demonstrating high attack success and robustness against existing defenses through its covert trigger embedding approach. 

DEBA conducts this invisible trigger embedding in the UV color channels for enhanced efficiency and imperceptibility.
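
The following is a minimal sketch of the embedding idea described above, assuming an RGB-to-YUV conversion with standard BT.601 coefficients and a hypothetical cutoff k for how many of the smallest singular components are swapped: each clean image's U and V chrominance channels are decomposed with SVD, and their smallest singular values/vectors are replaced with those of a same-sized trigger image, leaving the luminance channel and the dominant structure untouched. This is an illustrative reconstruction, not the authors' exact implementation.

```python
import numpy as np

# BT.601 RGB -> YUV matrix (rows give Y, U, V); inverted numerically so the
# round trip stays self-contained.
RGB2YUV = np.array([[ 0.299,    0.587,    0.114  ],
                    [-0.14713, -0.28886,  0.436  ],
                    [ 0.615,   -0.51499, -0.10001]])
YUV2RGB = np.linalg.inv(RGB2YUV)

def swap_minor_components(clean_ch, trigger_ch, k=8):
    """Replace the k smallest singular values/vectors of a clean channel with
    those of the trigger channel, then reconstruct the channel."""
    Uc, Sc, Vhc = np.linalg.svd(clean_ch, full_matrices=False)
    Ut, St, Vht = np.linalg.svd(trigger_ch, full_matrices=False)
    Uc[:, -k:]  = Ut[:, -k:]    # singular values come back in descending
    Sc[-k:]     = St[-k:]       # order, so the last k entries are the
    Vhc[-k:, :] = Vht[-k:, :]   # "minor" components being swapped in
    return Uc @ np.diag(Sc) @ Vhc

def embed_trigger(clean_rgb, trigger_rgb, k=8):
    """Embed the trigger in the U and V chrominance channels only; both images
    are assumed to be uint8 arrays of the same height and width."""
    clean_yuv = clean_rgb.astype(np.float64) @ RGB2YUV.T
    trig_yuv  = trigger_rgb.astype(np.float64) @ RGB2YUV.T
    for ch in (1, 2):                       # channel 0 (luminance) is untouched
        clean_yuv[..., ch] = swap_minor_components(
            clean_yuv[..., ch], trig_yuv[..., ch], k)
    return np.clip(clean_yuv @ YUV2RGB.T, 0, 255).astype(np.uint8)
```

Confining the swap to the chrominance channels exploits the fact that human vision is far less sensitive to small chrominance changes than to luminance changes, which is consistent with the imperceptibility the researchers report.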

Comprehensive experiments demonstrate DEBA’s superior attack success rates and invisibility compared to prior attacks.


Tushar Subhra Dutta

Tushar is a cybersecurity content editor with a passion for creating captivating and informative content. With years of experience in cybersecurity, he covers cybersecurity news, technology, and related topics.
