In a critical development within the AI industry, researchers at Noma Security have disclosed the discovery of a high-severity Remote Code Execution (RCE) vulnerability in Lightning AI Studio, a widely adopted AI development platform.
The vulnerability, assigned a CVSS score of 9.4, was found to enable attackers to execute arbitrary commands with root privileges, posing significant threats such as data exfiltration and system compromise.
The issue has since been resolved in close collaboration with Lightning AI.
The RCE vulnerability stemmed from a hidden URL parameter called "command", embedded within Lightning AI Studio's terminal functionality. Though concealed from users, this parameter could be manipulated to execute malicious commands.
Attackers could craft a Base64-encoded payload to encode commands and append them to user-specific URLs, exploiting the platform’s lack of input sanitization.
For instance, an attacker could embed a command to recursively delete all files or retrieve sensitive AWS metadata, including access tokens, and redirect them to a remote server.
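The encoding step described above can be sketched in a few lines. This is an illustrative reconstruction, not the researchers' actual proof of concept; the helper name and the harmless stand-in payload are assumptions.

```python
import base64
from urllib.parse import quote

def encode_payload(command: str) -> str:
    """Base64-encode a shell command and URL-quote it, the way the
    article describes attackers wrapping payloads for the hidden
    'command' parameter. Illustrative only."""
    return quote(base64.b64encode(command.encode()).decode())

# Harmless stand-in command; the article cites real payloads that
# deleted files or fetched AWS metadata and forwarded the tokens
# to an attacker-controlled server.
token = encode_payload("id")
```

Because the platform reportedly decoded and executed this value without sanitization, anything representable as a shell command could be smuggled through the parameter.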
The exploit relied on publicly accessible details such as usernames and studio paths, which attackers could glean from Lightning AI’s shared Studio templates.
Victims could be targeted with malicious links shared through email or public forums; a single click was enough to trigger the exploit.
Lightning AI Studio operates as a flexible, cloud-based AI development platform, supporting various AI workflows such as training and deployment.
With features such as a VSCode-like interface and persistent environments, it has gained popularity among enterprises and developers.
However, vulnerabilities in its handling of user-controllable inputs, such as hidden URL parameters, made it susceptible to this critical exploit.
The URL schema for Lightning AI Studio links includes variables such as PROFILE_USERNAME and STUDIO_PATH, which uniquely identify user studios.
Attackers leveraged these variables to craft malicious URLs, redirecting authenticated users to terminals embedded with harmful commands.
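Putting the schema variables together with an encoded payload, a malicious link could be assembled roughly as follows. The variable names come from the article; the exact endpoint path is an assumption made for illustration.

```python
import base64

# Illustrative template using the PROFILE_USERNAME and STUDIO_PATH
# variables the article describes; the "/terminal?command=" portion
# is a hypothetical endpoint path, not the confirmed pre-patch URL.
URL_TEMPLATE = (
    "https://lightning.ai/{profile_username}/{studio_path}"
    "/terminal?command={encoded_command}"
)

def craft_url(profile_username: str, studio_path: str, command: str) -> str:
    """Build a studio link carrying a Base64-encoded command."""
    encoded = base64.b64encode(command.encode()).decode()
    return URL_TEMPLATE.format(
        profile_username=profile_username,
        studio_path=studio_path,
        encoded_command=encoded,
    )

# Both values are public: usernames and studio paths could be gleaned
# from Lightning AI's shared Studio templates.
poc = craft_url("demo-user", "example-studio", "whoami")
```

Since every input here is either public or attacker-chosen, no authentication bypass was needed: the victim's own authenticated session executed the command on click.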
The implications of this exploit underscored its criticality. Attackers could potentially execute arbitrary commands with root privileges, exfiltrate sensitive data such as AWS access tokens, or destroy files on compromised studios. Given the platform's integration into enterprise-grade AI workflows, the risk of exploitation extended to sensitive AI models and data pipelines across shared environments.
Following responsible disclosure on October 14, 2024, Noma Security and Lightning AI collaborated to address the vulnerability swiftly. A fix was released by October 25, 2024.
Key takeaways from this incident included the need for robust input validation, adherence to the principle of least privilege, and avoidance of directly executing user-controlled inputs to prevent command injection vulnerabilities.
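Those takeaways can be made concrete with a small hardening sketch: validate user input against an explicit allowlist and never hand it to a shell. This is a generic pattern under assumed requirements, not Lightning AI's actual fix; the allowlist contents and function name are hypothetical.

```python
import shlex
import subprocess

# Hypothetical allowlist: only named programs may run, regardless of
# what the user-supplied string contains.
ALLOWED_PROGRAMS = {"ls", "pwd", "whoami"}

def run_user_command(raw: str) -> str:
    """Execute user input only if its program is allowlisted, and
    without a shell (shell=False is subprocess.run's default), so
    metacharacters and injected payloads carry no special meaning."""
    parts = shlex.split(raw)
    if not parts or parts[0] not in ALLOWED_PROGRAMS:
        raise ValueError(f"command not permitted: {raw!r}")
    result = subprocess.run(parts, capture_output=True, text=True, check=True)
    return result.stdout
```

Combined with running the service as a low-privilege user rather than root, this pattern would have blunted both the injection and its blast radius.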
This discovery highlights the critical importance of integrating comprehensive security measures into AI development lifecycles.
As the industry continues to innovate rapidly, ensuring the resilience of platforms like Lightning AI remains paramount.
Noma Security’s efforts in uncovering and mitigating such threats underscore their commitment to protecting the AI ecosystem.