Sunday, May 18, 2025

xAI Developer Accidentally Leaks API Key Granting Access to SpaceX, Tesla, and X LLMs


An employee at Elon Musk’s artificial intelligence venture, xAI, inadvertently disclosed a sensitive API key on GitHub, potentially exposing proprietary large language models (LLMs) linked to SpaceX, Tesla, and Twitter/X.

Cybersecurity specialists estimate the leak remained active for two months, giving outsiders the ability to query highly confidential AI systems built on internal data from Musk's flagship companies.

The leak first surfaced when Philippe Caturegli, “chief hacking officer” at Seralys, flagged the compromised credentials for an xAI application programming interface in a GitHub repository belonging to a technical staffer at xAI.


Caturegli’s announcement on LinkedIn swiftly caught the eye of GitGuardian, a firm specializing in automated detection of exposed secrets in codebases.
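Scanners of this kind typically work by matching provider-specific key formats against every file committed to a repository. The sketch below illustrates the idea; the `xai-` prefix and length are assumptions for illustration, not a documented key spec, and real scanners such as GitGuardian's use far richer detection than a single regex.

```python
import re

# Assumed key shape for illustration only: "xai-" followed by a long
# alphanumeric string. Real provider key formats vary.
XAI_KEY_PATTERN = re.compile(r"\bxai-[A-Za-z0-9]{32,}\b")

def scan_for_keys(text: str) -> list[str]:
    """Return substrings of `text` that look like xAI-style API keys."""
    return XAI_KEY_PATTERN.findall(text)

# A fabricated key embedded in a fake source snippet, the way a
# credential might be hardcoded and accidentally committed:
snippet = 'client = Client(api_key="xai-' + "A" * 40 + '")'
matches = scan_for_keys(snippet)
print(len(matches))
```

A production scanner would also check entropy, file paths, and commit history, since keys often survive in old commits even after being deleted from the current tree.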

Eric Fourrier, co-founder of GitGuardian, told KrebsOnSecurity that the exposed API key had access to at least 60 fine-tuned LLMs, including unreleased and private models.

These encompassed evolving versions of xAI’s Grok chatbot, as well as specialized models fine-tuned on SpaceX and Tesla data, such as “grok-spacex-2024-11-04” and “tweet-rejector”.

“The credentials could be used to access the xAI API with all privileges granted to the original user,” GitGuardian explained.

“These included not only public Grok models, but also cutting-edge, unreleased, and internal tools never meant for external eyes.”
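To see why a single leaked key is so dangerous, consider that anyone holding it can present it as a bearer token and enumerate every model the original user could reach. The sketch below shows how such a request would be constructed; it assumes an OpenAI-compatible model-listing endpoint at `api.x.ai`, which is illustrative rather than a confirmed detail of this incident.

```python
def build_list_models_request(api_key: str) -> tuple[str, dict]:
    """Construct the URL and headers for a model-listing request.

    Assumes an OpenAI-compatible REST convention (GET /v1/models with
    bearer-token auth); the exact endpoint is an assumption here.
    """
    url = "https://api.x.ai/v1/models"
    headers = {"Authorization": f"Bearer {api_key}"}
    return url, headers

# With a leaked key, this is all an attacker needs to start probing:
url, headers = build_list_models_request("xai-EXAMPLE-KEY")
print(url)
```

The point is that API keys carry no notion of *who* is asking: possession equals authorization, which is why exposure of a single credential can open up dozens of private models at once.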

Despite an automated alert sent to the xAI employee on March 2, the credentials remained valid and active until at least April 30, when GitGuardian escalated the issue directly to xAI’s security team.

Just hours later, the offending GitHub repository was quietly taken down.

Carole Winqwist, GitGuardian’s chief marketing officer, warned that adversaries with such access could manipulate or sabotage these language models for malicious purposes, including prompt injection attacks or even planting code within the AI’s operational supply chain.

“Free access to private LLMs is a recipe for disaster,” Winqwist emphasized.

The leak also highlights growing concerns about the integration of sensitive data with AI tools.

Recent reports indicate Musk’s Department of Government Efficiency (DOGE) and other agencies are feeding federal data into AI, raising questions about broader security risks.

While there is no direct evidence that federal or user data was breached through the exposed API key, Caturegli underscores the seriousness of the incident: “Long-lived credential exposures like this reveal weak key management and poor internal monitoring, raising alarms about operational security at some of the world’s most valuable tech companies.”
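The standard mitigation for the "hardcoded key in a repository" failure mode is to keep credentials out of source entirely and load them from the environment (or a secrets manager) at runtime, failing fast if the key is absent. A minimal sketch, with an illustrative environment-variable name:

```python
import os

def load_api_key(env_var: str = "XAI_API_KEY") -> str:
    """Fetch the API key from the environment instead of source code.

    A key that never appears in the repository cannot be leaked by a
    `git push`, and rotating it requires no code change. The variable
    name is illustrative, not an xAI convention.
    """
    key = os.environ.get(env_var)
    if not key:
        raise RuntimeError(f"{env_var} is not set; refusing to start")
    return key

os.environ["XAI_API_KEY"] = "xai-demo-only"  # demo value for illustration
print(load_api_key())
```

Pairing this with short-lived, automatically rotated credentials and pre-commit secret scanning addresses both weaknesses Caturegli cites: keys that live too long, and exposures that go unnoticed.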


Divya
Divya is a Senior Journalist at GBhackers covering Cyber Attacks, Threats, Breaches, Vulnerabilities and other happenings in the cyber world.
