An employee at Elon Musk’s artificial intelligence venture, xAI, inadvertently disclosed a sensitive API key on GitHub, potentially exposing proprietary large language models (LLMs) linked to SpaceX, Tesla, and Twitter/X.
Cybersecurity specialists estimate the leak remained active for roughly two months, giving outsiders the ability to query highly confidential AI systems built with internal data from Musk’s flagship companies.
The leak first surfaced when Philippe Caturegli, “chief hacking officer” at Seralys, flagged the compromised credentials for an xAI application programming interface in a GitHub repository belonging to a technical staffer at xAI.

Caturegli’s announcement on LinkedIn swiftly caught the eye of GitGuardian, a firm specializing in automated detection of exposed secrets in codebases.
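Tools like GitGuardian typically combine provider-specific patterns with entropy heuristics to flag machine-generated credentials in code. The sketch below is a minimal illustration of that general technique, not GitGuardian's actual detection logic; the key pattern, entropy threshold, and sample string are all hypothetical.

```python
import math
import re

# Hypothetical pattern for long alphanumeric tokens. Real scanners use
# hundreds of provider-specific detectors; this is only an illustration.
KEY_PATTERN = re.compile(r"\b[A-Za-z0-9_\-]{32,}\b")

def shannon_entropy(s: str) -> float:
    """Bits of entropy per character; random API keys score high."""
    freq = {c: s.count(c) for c in set(s)}
    return -sum((n / len(s)) * math.log2(n / len(s)) for n in freq.values())

def find_candidate_secrets(text: str, min_entropy: float = 4.0) -> list[str]:
    """Return tokens that look like machine-generated credentials."""
    return [tok for tok in KEY_PATTERN.findall(text)
            if shannon_entropy(tok) >= min_entropy]

# Fabricated example of a hardcoded key sitting in source code.
sample = 'api_key = "sk3J9xQ2mL7vR8nT4wY6uZ1aB5cD0eFgHiKp"'
print(find_candidate_secrets(sample))
```

The entropy check filters out long but repetitive identifiers (e.g. ordinary variable names) while keeping high-randomness strings, which is why scanners can flag a leaked key within minutes of a push.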
Eric Fourrier, co-founder of GitGuardian, told KrebsOnSecurity that the exposed API key had access to at least 60 fine-tuned LLMs, including unreleased and private models.
These encompassed evolving versions of xAI’s Grok chatbot, as well as specialized models fine-tuned on SpaceX and Tesla data, such as “grok-spacex-2024-11-04” and “tweet-rejector”.
“The credentials could be used to access the xAI API with all privileges granted to the original user,” GitGuardian explained.
“These included not only public Grok models, but also cutting-edge, unreleased, and internal tools never meant for external eyes.”
Despite an automated alert sent to the xAI employee on March 2, the credentials remained valid and active until at least April 30, when GitGuardian escalated the issue directly to xAI’s security team.
Just hours later, the offending GitHub repository was quietly taken down.
Carole Winqwist, GitGuardian’s chief marketing officer, warned that adversaries with such access could manipulate or sabotage these language models for malicious purposes, including prompt injection attacks or even planting code within the AI’s operational supply chain.
“Free access to private LLMs is a recipe for disaster,” Winqwist emphasized.
The leak also highlights growing concerns about the integration of sensitive data with AI tools.
Recent reports indicate Musk’s Department of Government Efficiency (DOGE) and other agencies are feeding federal data into AI, raising questions about broader security risks.
While there is no direct evidence that federal or user data was breached through the exposed API key, Caturegli underscores the seriousness of the incident: “Long-lived credential exposures like this reveal weak key management and poor internal monitoring, raising alarms about operational security at some of the world’s most valuable tech companies.”