Sunday, February 9, 2025

Hackers Can Exploit AI Platform to Achieve Root Access via RCE Vulnerability


In a critical development within the AI industry, researchers at Noma Security have disclosed the discovery of a high-severity Remote Code Execution (RCE) vulnerability in Lightning AI Studio, a widely adopted AI development platform.

The vulnerability, assigned a CVSS score of 9.4, was found to enable attackers to execute arbitrary commands with root privileges, posing significant threats such as data exfiltration and system compromise.

The issue has since been resolved in close collaboration with Lightning AI.

Vulnerability Overview

The RCE vulnerability stemmed from a hidden URL parameter called command, embedded within Lightning AI Studio’s terminal functionality.

This parameter, though concealed from users, could be manipulated to execute malicious commands.

Attackers could craft a Base64-encoded payload to encode commands and append them to user-specific URLs, exploiting the platform’s lack of input sanitization.
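The encoding step described above can be sketched in a few lines of Python. This is an illustration only: the `command` parameter name comes from the researchers' disclosure, but the exact URL layout shown here is an assumption, and a harmless `echo` stands in for a real payload.

```python
import base64

# Encode a (harmless) shell command the way the report describes:
# Base64-encode it, then append it to the hidden "command" parameter.
harmless_command = "echo hello"
encoded = base64.b64encode(harmless_command.encode()).decode()

# Assumed URL shape for illustration; the real path layout is not
# given in the article. PROFILE_USERNAME and STUDIO_PATH are the
# publicly discoverable schema fields.
crafted_url = (
    "https://lightning.ai/PROFILE_USERNAME/STUDIO_PATH"
    f"/terminal?command={encoded}"
)
print(encoded)  # ZWNobyBoZWxsbw==
```

Because the platform decoded and executed the parameter without sanitization, anything that survives Base64 transport, including destructive commands, would run in the victim's terminal session.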

For instance, an attacker could embed a command to recursively delete all files or retrieve sensitive AWS metadata, including access tokens, and redirect them to a remote server.

The exploit relied on publicly accessible details such as usernames and studio paths, which attackers could glean from Lightning AI’s shared Studio templates.

Victims could be targeted via malicious links, shared through email or public forums, that triggered the exploit upon a single click.

Lightning AI Studio operates as a flexible, cloud-based AI development platform, supporting various AI workflows such as training and deployment.

With features such as a VSCode-like interface and persistent environments, it has gained popularity among enterprises and developers.

However, vulnerabilities in its handling of user-controllable inputs, such as hidden URL parameters, made it susceptible to this critical exploit.

The URL schema for Lightning AI Studio links includes variables like PROFILE_USERNAME and STUDIO_PATH, uniquely identifying user studios.

Attackers leveraged these variables to craft malicious URLs, redirecting authenticated users to terminals embedded with harmful commands.
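A minimal sketch of that link-crafting step, assuming a URL template of this general shape (the field names `PROFILE_USERNAME` and `STUDIO_PATH` are from the report; the surrounding path structure is hypothetical):

```python
# Hypothetical template combining the two publicly discoverable
# schema fields with an attacker-chosen payload.
URL_TEMPLATE = (
    "https://lightning.ai/{profile_username}/{studio_path}"
    "/terminal?command={payload}"
)

def craft_link(profile_username: str, studio_path: str, payload: str) -> str:
    """Fill the schema fields gleaned from shared Studio templates."""
    return URL_TEMPLATE.format(
        profile_username=profile_username,
        studio_path=studio_path,
        payload=payload,
    )

link = craft_link("victim-user", "demo-studio", "BASE64_PAYLOAD")
```

The point of the sketch is that every input an attacker needs (username, studio path) was public, so the only "secret" in the link was the payload itself.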

Impact of the Exploit

The potential impact of a successful exploit underscores the vulnerability's criticality.

Attackers could potentially:

  • Execute Arbitrary Commands: Run commands with root privileges by abusing authenticated user sessions.
  • Exfiltrate Data: Access sensitive metadata, such as AWS credentials, and transfer it to attacker-controlled servers.
  • Compromise Filesystems: Delete or modify crucial system files, disrupting operations.

Given the platform’s integration into enterprise-grade AI workflows, the risk of exploitation extended to sensitive AI models and data pipelines across shared environments.

Following responsible disclosure on October 14, 2024, Noma Security and Lightning AI collaborated to address the vulnerability swiftly. A fix was released by October 25, 2024.

Key takeaways from this incident included the need for robust input validation, adherence to the principle of least privilege, and avoidance of directly executing user-controlled inputs to prevent command injection vulnerabilities.
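Not Lightning AI's actual fix, but a minimal sketch of the defensive pattern those takeaways describe: validate a user-supplied command against an allowlist and execute it as an argument vector with `shell=False`, rather than handing raw user input to a shell.

```python
import shlex
import subprocess

# Illustrative allowlist; a real deployment would scope this to the
# operations the terminal feature actually needs.
ALLOWED_COMMANDS = {"ls", "pwd", "whoami"}

def run_user_command(raw: str) -> str:
    """Reject anything outside the allowlist, then run without a shell."""
    parts = shlex.split(raw)
    if not parts or parts[0] not in ALLOWED_COMMANDS:
        raise ValueError(f"command not permitted: {raw!r}")
    # shell=False (the default for a list argv) prevents shell
    # metacharacters in arguments from being interpreted.
    result = subprocess.run(parts, capture_output=True, text=True, check=True)
    return result.stdout
```

Running the service itself as an unprivileged user, per the principle of least privilege, would further limit what even a successful injection could do.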

This discovery highlights the critical importance of integrating comprehensive security measures into AI development lifecycles.

As the industry continues to innovate rapidly, ensuring the resilience of platforms like Lightning AI remains paramount.

Noma Security’s efforts in uncovering and mitigating such threats underscore their commitment to protecting the AI ecosystem.


Aman Mishra
Aman Mishra is a security and privacy reporter covering data breaches, cybercrime, malware, and vulnerabilities.


