
PEFT-As-An-Attack, Jailbreaking Language Models For Malicious Prompts

Federated Parameter-Efficient Fine-Tuning (FedPEFT) combines parameter-efficient fine-tuning (PEFT) with federated learning (FL) to adapt pre-trained language models (PLMs) to specific tasks efficiently while keeping training data decentralized and private.
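To make the setup concrete, here is a minimal sketch of the FedPEFT idea: each client trains only a small adapter on its local data while the base PLM stays frozen, and the server aggregates the adapters with FedAvg-style averaging. The function names and the toy gradients are illustrative, not from the paper.

```python
import numpy as np

def local_peft_update(adapter, grad, lr=0.1):
    """One client step: update only the small adapter; the base PLM is frozen."""
    return adapter - lr * grad

def fedavg(adapters):
    """Server step: average the clients' adapter parameters (FedAvg)."""
    return np.mean(adapters, axis=0)

# Three clients each hold a tiny 4-parameter adapter (the frozen PLM is never shared).
adapters = [np.zeros(4) for _ in range(3)]
grads = [np.array([1.0, 0.0, 0.0, 0.0]),
         np.array([0.0, 1.0, 0.0, 0.0]),
         np.array([0.0, 0.0, 1.0, 0.0])]
updated = [local_peft_update(a, g) for a, g in zip(adapters, grads)]
global_adapter = fedavg(updated)
```

Because only the adapter parameters travel between clients and server, communication cost stays small even when the underlying PLM has billions of weights.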

However, this approach introduces a new security risk called "PEFT-as-an-Attack" (PaaA), in which malicious actors exploit PEFT to bypass the safety alignment of PLMs and generate harmful content.

Researchers studied the effectiveness of PaaA against different PEFT methods and investigated potential defenses like Robust Aggregation Schemes (RASs) and Post-PEFT Safety Alignment (PPSA). 

They found that RASs are largely ineffective against PaaA, particularly when client data distributions are heterogeneous (non-IID).

While PPSA can mitigate PaaA, it significantly reduces the model's task accuracy, which highlights the need for new defense mechanisms that balance security and performance in FedPEFT systems.

Overview of the System Model

The researchers introduce a FedPEFT system for instruction tuning of PLMs on decentralized, domain-specific datasets. This system faces the risk of PaaA, in which malicious clients inject toxic training data to compromise the PLM's safety guardrails.

To address this, candidate defense mechanisms include robust aggregation schemes (RASs), which aim to limit the influence of malicious updates, and post-PEFT safety alignment (PPSA), which re-aligns the model with its safety constraints after fine-tuning.
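A typical RAS replaces the plain mean with an outlier-resistant statistic. The sketch below, using a coordinate-wise median as an illustrative (not paper-specified) example, shows why this can blunt a single poisoned update; under non-IID data, however, benign updates themselves diverge, which is one intuition for the paper's finding that RASs struggle against PaaA.

```python
import numpy as np

# Three honest clients send similar adapter updates; one attacker sends a poisoned one.
honest = [np.array([0.10, 0.10]),
          np.array([0.12, 0.08]),
          np.array([0.09, 0.11])]
malicious = [np.array([10.0, -10.0])]  # poisoned adapter update
updates = honest + malicious

mean_agg = np.mean(updates, axis=0)     # plain FedAvg: dragged far off by the attacker
median_agg = np.median(updates, axis=0) # coordinate-wise median: stays near honest values
```

The median lands near the honest cluster, while the mean is pulled toward the attacker; with strongly non-IID clients there is no tight honest cluster, and the distinction weakens.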

They conduct experiments with four PLMs and three PEFT methods on two domain-specific question-answering datasets, where malicious clients inject harmful data to compromise model safety.

The experiments assess the impact of malicious clients on model safety and utility, measuring attack success rate and task accuracy. They use the Blades benchmark suite to simulate the FedPEFT system and the Hugging Face ecosystem for training and evaluation.
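The attack success rate is commonly computed as the fraction of harmful prompts the model answers rather than refuses. The refusal heuristic below is a simplified assumption for illustration; the paper's exact scoring procedure is not specified in this article.

```python
def attack_success_rate(responses, refused):
    """Fraction of responses to harmful prompts that were NOT refused."""
    answered = sum(1 for r in responses if not refused(r))
    return answered / len(responses)

# Hypothetical keyword-based refusal detector (a deliberate simplification).
REFUSAL_MARKERS = ("i cannot", "i can't", "sorry")

def refused(response):
    return response.lower().startswith(REFUSAL_MARKERS)

responses = ["I cannot help with that.",
             "Sure, here is how to ...",
             "Sorry, that request is unsafe."]
asr = attack_success_rate(responses, refused)  # one of three prompts succeeded
```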

Evaluation of Jailbreak Attacks

The paper experimentally evaluates the effectiveness of FedPEFT methods in adapting PLMs for medical question answering. LoRA consistently outperformed the other methods in accuracy but was also the most vulnerable to PaaA.
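LoRA's trade-off follows from its design: it keeps the pre-trained weight frozen and learns a low-rank additive update, so the same small set of trainable parameters that makes it accurate and cheap to train is also the lever an attacker fine-tunes to undo safety alignment. A minimal numerical sketch of the low-rank update (toy sizes, not the paper's configuration):

```python
import numpy as np

d, k, r = 6, 6, 2  # base weight is d x k; adapter rank r << min(d, k)
rng = np.random.default_rng(0)

W = rng.normal(size=(d, k))          # frozen pre-trained weight (never updated)
A = rng.normal(size=(r, k)) * 0.01   # trainable low-rank factor
B = np.zeros((d, r))                 # B starts at zero, so training starts from W exactly

def effective_weight(W, B, A):
    """LoRA: the adapted weight is W + B @ A; only A and B are trained."""
    return W + B @ A
```

Here only r*(d + k) = 24 parameters are trainable versus 36 in the full matrix, and the gap grows dramatically at realistic model sizes.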

RASs were found to be ineffective in defending against PaaA, especially in non-IID settings. PPSA effectively mitigated the attack's impact but at the cost of reduced downstream-task performance, which highlights the need for further research into robust and efficient defenses against PaaA in FedPEFT.

The work thus identifies a new security threat to FedPEFT, PaaA, which leverages PEFT methods to bypass safety alignment and generate harmful content in response to malicious prompts.

The evaluation demonstrates that existing defenses, such as RASs and PPSA, have significant limitations in mitigating the effects of PaaA.

To address this gap, the authors suggest future research directions, including developing advanced PPSA techniques and integrating safety alignment directly into the fine-tuning process, so that emerging vulnerabilities can be addressed dynamically while maintaining model performance.

Aman Mishra
Aman Mishra is a security and privacy reporter covering data breaches, cyber crime, malware, and vulnerabilities.
