
Critical PyTorch Vulnerability Allows Hackers to Run Remote Code


A newly disclosed critical vulnerability (CVE-2025-32434) in PyTorch, the widely used open-source machine learning framework, allows attackers to execute arbitrary code on systems loading AI models—even when safety measures like weights_only=True are enabled.

The flaw affects all PyTorch versions up to and including 2.5.1 and is fixed in version 2.6.0.

CVE ID: CVE-2025-32434
Severity: Critical
Affected versions: PyTorch ≤ 2.5.1 (pip)
Patched version: 2.6.0

Vulnerability Overview

The flaw resides in PyTorch’s torch.load() function, which is commonly used to load serialized AI models.


While enabling weights_only=True was previously believed to prevent unsafe code execution by restricting data loading to model weights, security researcher Ji’an Zhou demonstrated that attackers can bypass this safeguard to execute remote commands.

This undermines a core security assumption in PyTorch’s documentation, which explicitly recommended weights_only=True as a mitigation against malicious models.

Organizations using this setting to protect inference pipelines, federated learning systems, or model hubs are now at risk of remote takeover.
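The underlying risk comes from torch.load()'s reliance on Python's pickle format. The stdlib-only sketch below (no PyTorch required, and with a deliberately harmless stand-in payload) shows why deserializing untrusted data is dangerous: on load, pickle executes a callable chosen by whoever produced the file.

```python
import pickle

class Malicious:
    """A payload whose deserialization runs code of the author's choosing.

    pickle calls __reduce__ when serializing; on load, it invokes the
    returned callable with the given arguments.
    """
    def __reduce__(self):
        # Harmless stand-in: an attacker would return something like
        # os.system with a shell command instead of str.upper.
        return (str.upper, ("code ran during unpickling",))

payload = pickle.dumps(Malicious())
result = pickle.loads(payload)  # executes the callable; no Malicious
print(result)                   # instance comes back, only its result
```

CVE-2025-32434 showed that even the weights_only=True restriction layered on top of this machinery could be bypassed, which is why it undermined the documented safeguard.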

  • Exploitation Scenario: Attackers could upload tampered models to public repositories or supply chains. Loading such models would trigger code execution on victim systems.
  • Affected Workflows: Any application, cloud service, or research tool using torch.load() with unpatched PyTorch versions.
  • Severity: Rated critical due to the ease of exploitation and potential for full system compromise.
Mitigation Steps

  1. Immediately upgrade to PyTorch 2.6.0 via pip install --upgrade torch (note: the PyPI package is named torch, not pytorch).
  2. Audit existing models: Validate the integrity of models loaded from untrusted sources.
  3. Monitor advisories: Track updates via PyTorch’s GitHub Security page or the GitHub Advisory (GHSA-53q9-r3pm-6pq6).
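As a defense-in-depth check alongside the steps above, a pipeline can refuse to deserialize models on an unpatched interpreter. The helper below is a hypothetical sketch, not part of PyTorch; it assumes version strings in the usual MAJOR.MINOR.PATCH form, optionally with a local tag such as "+cu121".

```python
def is_vulnerable(version: str) -> bool:
    """True if a PyTorch version string falls in the range affected
    by CVE-2025-32434 (all releases <= 2.5.1)."""
    core = version.split("+")[0]  # drop local build tags like "+cu121"
    parts = tuple(int(p) for p in core.split(".")[:3])
    return parts <= (2, 5, 1)

# Typical use (assumes torch is installed):
#   import torch
#   assert not is_vulnerable(torch.__version__), "upgrade to >= 2.6.0"
```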

The PyTorch team acknowledged the vulnerability, stating, “This issue highlights the evolving nature of ML security. We urge all users to update immediately and report suspicious model behavior.”

PyTorch is foundational to AI research and deployment, with users ranging from startups to tech giants like Meta and Microsoft.

This vulnerability exposes critical infrastructure to attacks that could steal data, disrupt services, or hijack resources.

As AI adoption grows, securing model pipelines is paramount. CVE-2025-32434 serves as a stark reminder that even trusted safeguards require continuous scrutiny.

Update PyTorch installations, audit model sources, and treat all third-party AI artifacts as potential attack vectors until verified.
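One way to treat third-party artifacts as unverified until proven otherwise is to pin SHA-256 digests for approved model files and check them before any torch.load() call. A minimal sketch; the filename and digest below are placeholders (the digest shown is simply the SHA-256 of empty input), not real published checksums.

```python
import hashlib
from pathlib import Path

# Hypothetical allowlist of pinned digests for approved model files.
# Replace the placeholder value with checksums published by the model's
# trusted distributor.
APPROVED_SHA256 = {
    "resnet50_weights.pt":
        "e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855",
}

def verify_model(path: Path) -> bool:
    """Check a model file's SHA-256 against the pinned allowlist."""
    digest = hashlib.sha256(path.read_bytes()).hexdigest()
    return APPROVED_SHA256.get(path.name) == digest
```

Only files that pass verification should ever reach the deserialization step.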


Divya
Divya is a Senior Journalist at GBhackers covering Cyber Attacks, Threats, Breaches, Vulnerabilities and other happenings in the cyber world.


