NoiseAttack is a newly disclosed technique for covertly backdooring deep learning models. It uses triggers derived from White Gaussian Noise to target multiple classes in the model at once, rather than the single class targeted by most existing methods.
Because the trigger is imperceptible, the attack is also far harder to detect than traditional single-target backdoors.
Cybersecurity researchers from the Department of Electrical, Computer and Biomedical Engineering at the University of Rhode Island discovered that NoiseAttack uses power spectral density for evasion.
Recent advancements in AI have led to its widespread use in various applications, from image recognition to language processing.
While the rapid adoption of AI is exciting, it has also introduced new security risks. One major vulnerability is backdoor attacks.
In these attacks, malicious individuals insert hidden triggers into AI models during the training process.
When the model encounters the trigger, it can be manipulated to produce a desired output, even if that output is incorrect or harmful.
This allows bad actors to control the model’s behavior in ways the developers never intended.
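To make the mechanics concrete, below is a minimal sketch of classic training-time backdoor poisoning. The function name, the patch-style trigger, and the 5% poison rate are illustrative assumptions, not the NoiseAttack method itself:

```python
import numpy as np

def poison_dataset(images, labels, target_label, poison_rate=0.05, seed=0):
    """Stamp a small trigger patch onto a fraction of the training images
    and relabel them to the attacker's target class. A model trained on
    this data learns to associate the patch with the target label while
    still behaving normally on clean inputs."""
    rng = np.random.default_rng(seed)
    images, labels = images.copy(), labels.copy()
    idx = rng.choice(len(images), size=int(poison_rate * len(images)),
                     replace=False)
    for i in idx:
        images[i, -4:, -4:, :] = 1.0   # 4x4 white patch in the corner
        labels[i] = target_label       # flip the label to the target class
    return images, labels
```

At inference time, any input carrying the patch is steered to the target class, while unpatched inputs are classified correctly, which is what keeps the backdoor hidden from ordinary accuracy testing.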
NoiseAttack uses a special type of random noise called White Gaussian Noise (WGN) as the hidden trigger.
WGN resembles the static "snow" seen on an analog television screen after a broadcast ends.
Unlike previous attacks that use visible patterns or image transformations, NoiseAttack employs the power spectral density of WGN to create an imperceptible, spatially distributed trigger.
The attack is sample-specific, meaning it affects only predetermined victim classes, and multi-targeted, meaning it can force several different incorrect outputs, as the sketch below illustrates.
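As a hedged illustration of the idea, the following sketch generates White Gaussian Noise triggers whose power (the standard deviation of the noise) selects the intended target class, which is how a spatially distributed, near-imperceptible trigger can support multiple targets. The mapping, power values, and function names here are assumptions for illustration, not the authors' code:

```python
import numpy as np

# Hypothetical mapping: each noise power steers the model to a different
# target label. The specific values are illustrative, not from the paper.
TARGET_POWERS = {0.05: 3, 0.10: 7}  # WGN std -> target class

def add_wgn_trigger(image, std, seed=None):
    """Overlay zero-mean White Gaussian Noise on an image in [0, 1].
    The noise is spread across every pixel, so there is no localized
    patch for visual inspection or a saliency map to latch onto."""
    rng = np.random.default_rng(seed)
    noise = rng.normal(loc=0.0, scale=std, size=image.shape)
    return np.clip(image + noise, 0.0, 1.0)
```

During poisoning, samples from a chosen victim class would receive noise at one of these powers and be relabeled to the matching target, while all other samples are left untouched, preserving clean accuracy.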
The researchers tested NoiseAttack on multiple standard datasets and across several model architectures.
They achieved high attack success rates in these tests while maintaining normal performance on clean inputs. Notably, NoiseAttack proved effective in both image classification and object detection tasks.
Besides this, it successfully evaded state-of-the-art defense methods such as Grad-CAM, Neural Cleanse, and STRIP.
Attacks like this underscore how the evolving nature of AI demands more robust defense mechanisms and security measures in AI systems.