NoiseAttack is a new method for covertly backdooring deep learning models. It uses triggers built from White Gaussian Noise to target several classes in the model at once, rather than just one, as most current methods do.
This multi-target design also makes the attack harder to detect, giving it an edge over traditional single-target attacks.
Cybersecurity researchers from the Department of Electrical, Computer and Biomedical Engineering at the University of Rhode Island found that NoiseAttack uses the power spectral density of the noise to evade detection.
Recent advancements in AI have led to its widespread use in various applications, from image recognition to language processing.
While the rapid adoption of AI is exciting, it has also introduced new security risks. One of the most serious is the backdoor attack.
In these attacks, malicious individuals insert hidden triggers into AI models during the training process.
When the model encounters the trigger, it can be manipulated to produce a desired output, even if that output is incorrect or harmful.
This allows bad actors to control the model’s behavior in ways the developers never intended.
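To make the mechanics concrete, here is a minimal sketch of this kind of data poisoning, assuming a generic image-classification pipeline; the patch trigger, its size and placement, and the target label are illustrative assumptions rather than details of any specific attack.

```python
# A minimal sketch of backdoor data poisoning (illustrative only:
# the patch trigger and target label below are assumptions, not
# details from the NoiseAttack paper).
import numpy as np

def poison_sample(image: np.ndarray, target_label: int):
    """Stamp a small white patch (the 'trigger') onto an image and
    relabel it so the model learns trigger -> target_label."""
    poisoned = image.copy()
    poisoned[-4:, -4:, :] = 255  # 4x4 white patch in the bottom-right corner
    return poisoned, target_label

# An attacker mixes a small fraction of poisoned samples into the
# training set; the model then behaves normally on clean inputs but
# misclassifies any input carrying the trigger.
clean_image = np.random.randint(0, 256, (32, 32, 3), dtype=np.uint8)
bad_image, bad_label = poison_sample(clean_image, target_label=0)
```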
NoiseAttack uses a special type of random noise called White Gaussian Noise (WGN) as the hidden trigger.
WGN resembles the "snow" you would see on an old television set after a transmission ended.
Unlike previous attacks that use visible patterns or image transformations, NoiseAttack employs the power spectral density of WGN to create an imperceptible, spatially distributed trigger.
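The sketch below, assuming a simple NumPy setup, illustrates that property: the power spectral density of zero-mean WGN is roughly flat and set by the noise variance, which is why the trigger spreads across every spatial frequency instead of forming a visible, localized pattern. The sigma value and array shape are illustrative.

```python
# A minimal sketch, assuming the trigger is zero-mean WGN whose power
# spectral density is flat and determined by its variance.
import numpy as np

sigma = 8.0                      # illustrative noise standard deviation
rng = np.random.default_rng(42)
noise = rng.normal(0.0, sigma, size=(32, 32))

# The PSD of WGN is approximately flat: every spatial frequency
# carries roughly the same power, so no single pattern stands out.
psd = np.abs(np.fft.fft2(noise)) ** 2 / noise.size
print(psd.mean(), sigma ** 2)    # mean PSD is close to the noise variance
```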
The attack is sample-specific, meaning it affects only predetermined victim classes, and multi-targeted, meaning a single poisoned model can be steered toward several different incorrect outputs.
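A rough sketch of that multi-target idea, assuming the attacker varies the noise power: different standard deviations of the WGN trigger map the same victim class to different target labels. The sigma values and label mapping below are hypothetical stand-ins, not the paper's actual configuration.

```python
import numpy as np

# Hypothetical mapping: each noise power level steers the victim class
# to a different target label (values are illustrative assumptions).
SIGMA_TO_TARGET = {5.0: 1, 10.0: 2, 15.0: 3}

def poison_victim_sample(image: np.ndarray, sigma: float, seed: int = 0):
    """Overlay zero-mean WGN with the given sigma on a victim-class
    image and return it with the target label tied to that sigma."""
    rng = np.random.default_rng(seed)
    noisy = image.astype(np.float64) + rng.normal(0.0, sigma, image.shape)
    return np.clip(noisy, 0, 255).astype(np.uint8), SIGMA_TO_TARGET[sigma]

# The same victim image can be poisoned toward two different targets.
victim = np.random.randint(0, 256, (32, 32, 3), dtype=np.uint8)
img_a, label_a = poison_victim_sample(victim, sigma=5.0)   # -> target 1
img_b, label_b = poison_victim_sample(victim, sigma=15.0)  # -> target 3
```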
The researchers tested NoiseAttack across multiple benchmark datasets and on several types of models.
In these tests, the attack achieved high success rates while maintaining normal performance on clean inputs. Notably, NoiseAttack proved effective in both image classification and object detection tasks.
It also successfully evaded state-of-the-art defense methods such as Grad-CAM, Neural Cleanse, and STRIP.
The attack underscores how the evolving nature of AI demands more robust defense mechanisms and security measures in AI systems.