
NoiseAttack is a Novel Backdoor That Uses Power Spectral Density For Evasion

NoiseAttack is a new method of secretly backdooring deep learning models. It uses triggers made from White Gaussian Noise to target several classes in the model rather than a single one, as most existing methods do.

This approach also helps the attack evade detection, which makes it more effective than traditional single-target attacks.

The following cybersecurity researchers from the Department of Electrical, Computer and Biomedical Engineering at the University of Rhode Island developed “NoiseAttack,” which uses power spectral density for evasion:

  • Abdullah Arafat Miah
  • Kaan Icer
  • Resit Sendag
  • Yu Bi

Technical analysis

Recent advancements in AI have led to its widespread use in various applications, from image recognition to language processing.

While the rapid adoption of AI is exciting, it has also introduced new security risks. One major vulnerability is backdoor attacks.

In these attacks, malicious individuals insert hidden triggers into AI models during the training process.

When the model encounters the trigger, it can be manipulated to produce a desired output, even if that output is incorrect or harmful.

This allows bad actors to control the model’s behavior in ways the developers never intended.
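To make the general idea concrete, here is a minimal, hypothetical sketch of classic backdoor data poisoning, in which a small fraction of training images is stamped with a visible trigger and relabeled to the attacker's chosen class. The function, patch placement, and poison rate are illustrative assumptions and are not taken from the NoiseAttack paper.

```python
# Minimal sketch of classic backdoor data poisoning (illustrative only;
# the trigger patch, poison rate, and target label are hypothetical).
import numpy as np

def poison_dataset(images, labels, target_label, poison_rate=0.05):
    """Stamp a small white square on a fraction of images and relabel them."""
    images = images.copy()
    labels = labels.copy()
    n_poison = int(len(images) * poison_rate)
    idx = np.random.choice(len(images), n_poison, replace=False)
    for i in idx:
        images[i, -4:, -4:, :] = 1.0   # visible trigger patch in a corner
        labels[i] = target_label       # force the attacker's chosen class
    return images, labels
```

A model trained on such a poisoned set behaves normally on clean inputs but predicts the attacker's class whenever the trigger appears.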


An overview of the proposed NoiseAttack (Source – Arxiv)

NoiseAttack uses a special type of random noise called White Gaussian Noise (WGN) as the hidden trigger.

WGN is a type of noise that resembles the “snow” effect you see on a television set after a transmission has ended.

Unlike previous attacks that use visible patterns or image transformations, NoiseAttack employs the power spectral density of WGN to create an imperceptible, spatially distributed trigger. 

The attack is sample-specific, meaning it only affects predetermined victim classes, and multi-targeted, allowing it to force multiple incorrect outputs.
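As a rough illustration of the idea, the sketch below overlays zero-mean white Gaussian noise with a chosen standard deviation, which sets its flat power spectral density, onto a normalized image. The function name, blending, and sigma value are assumptions made for illustration, not the authors' implementation.

```python
# Hedged sketch of a WGN-style trigger: the standard deviation of the
# Gaussian noise (and hence its flat power spectral density) is the knob
# an attacker could tune. Illustrative only.
import numpy as np

def add_wgn_trigger(image, sigma=0.05, rng=None):
    """Overlay white Gaussian noise of a chosen power on a [0, 1] image."""
    if rng is None:
        rng = np.random.default_rng()
    noise = rng.normal(loc=0.0, scale=sigma, size=image.shape)
    return np.clip(image + noise, 0.0, 1.0)

# Different noise powers could, in principle, steer a backdoored model
# toward different target classes -- the multi-target idea described above.
triggered = add_wgn_trigger(np.random.rand(32, 32, 3), sigma=0.1)
```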

The researchers tested NoiseAttack on several datasets:

  • CIFAR-10
  • MNIST
  • ImageNet
  • MS-COCO

They also tested it on several model architectures:

  • ResNet50
  • VGG16
  • DenseNet
  • YOLOv5

NoiseAttack on Visual Object Detection (Source – Arxiv)

They achieved high attack success rates during their tests while maintaining normal performance on clean inputs. Notably, NoiseAttack proved effective in both image classification and object detection tasks.
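For context, attack success rate (ASR) and clean accuracy are commonly measured along the lines of the sketch below; the model, data loaders, and target label are placeholders, not the authors' evaluation code.

```python
# Hedged sketch of how clean accuracy and attack success rate (ASR) are
# typically computed; `model`, the loaders, and `target_label` are placeholders.
import torch

@torch.no_grad()
def evaluate(model, clean_loader, triggered_loader, target_label):
    model.eval()

    clean_correct, clean_total = 0, 0
    for x, y in clean_loader:                      # normal performance
        pred = model(x).argmax(dim=1)
        clean_correct += (pred == y).sum().item()
        clean_total += y.numel()

    hits, total = 0, 0
    for x, _ in triggered_loader:                  # inputs carrying the trigger
        pred = model(x).argmax(dim=1)
        hits += (pred == target_label).sum().item()
        total += x.size(0)

    return clean_correct / clean_total, hits / total  # clean accuracy, ASR
```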

Besides this, it successfully evaded state-of-the-art defense methods such as Grad-CAM, Neural Cleanse, and STRIP.

The evolving nature of such attacks highlights the need for more robust defense mechanisms and security measures in AI systems.


Tushar Subhra

Tushar is a cybersecurity content editor with a passion for creating captivating and informative content. With years of experience in cybersecurity, he covers cybersecurity news, technology, and related topics.
