Saturday, May 24, 2025

xAI API Key Leak Exposes Proprietary Language Models on GitHub

An employee at Elon Musk’s artificial intelligence firm xAI inadvertently exposed a private API key on GitHub for over two months, granting unauthorized access to proprietary large language models (LLMs) fine-tuned on internal data from SpaceX, Tesla, and Twitter/X.

Security researchers at GitGuardian discovered the leak, which compromised 60 private and unreleased models, including development versions of Grok and specialized tools like a “tweet-rejector” model.

The incident raises critical concerns about operational security at xAI and potential vulnerabilities in AI systems handling sensitive corporate and government data.

On March 2, 2025, GitGuardian’s automated systems detected an xAI technical staff member’s GitHub repository containing an active API key for the x.ai platform.

The key, which remained valid until at least April 30, provided access to unreleased Grok iterations such as grok-2.5V and research-grok-2p5v-1018, alongside models tailored for internal use at SpaceX and Tesla.
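
Scanners like GitGuardian’s catch leaks of this kind by pattern-matching committed files against known token formats. The sketch below illustrates the idea in Python; the “xai-” prefix and minimum key length are assumptions for illustration, not xAI’s confirmed key scheme.

```python
import re
import sys
from pathlib import Path

# Assumed pattern: keys are matched as an "xai-" prefix followed by a
# long alphanumeric body. This is an illustration, not xAI's confirmed
# key format.
XAI_KEY_PATTERN = re.compile(r"\bxai-[A-Za-z0-9]{32,}\b")

def scan_file(path: Path) -> list[str]:
    """Return any key-like strings found in one file."""
    try:
        text = path.read_text(errors="ignore")
    except OSError:
        return []
    return XAI_KEY_PATTERN.findall(text)

if __name__ == "__main__":
    root = Path(sys.argv[1]) if len(sys.argv) > 1 else Path(".")
    for file in root.rglob("*"):
        if file.is_file():
            for hit in scan_file(file):
                # Mask the match so the scanner's own output can't leak it.
                print(f"{file}: {hit[:8]}...{hit[-4:]}")
```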

Philippe Caturegli of Seralys first publicized the leak via LinkedIn, noting that the credentials’ prolonged exposure pointed to inadequate key-management practices.

GitGuardian’s analysis revealed the API key granted privileges to 60 distinct LLMs, many fine-tuned on proprietary datasets from Musk’s companies.

For instance, grok-spacex-2024-11-04 likely incorporated engineering or operational data from SpaceX, while tweet-rejector may have been designed for content moderation on Twitter/X.
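
To illustrate why a single leaked key is so damaging, enumerating every model the key can reach is a one-request job. Below is a minimal sketch, assuming the OpenAI-style GET /v1/models listing that x.ai’s public API documents; whether the internal models surfaced on this exact route is an assumption.

```python
import json
import os
import urllib.request

# Once a key leaks, enumerating what it can reach is a single request.
# https://api.x.ai/v1/models is x.ai's documented OpenAI-style listing;
# whether xAI's internal models appeared on this route is an assumption.
API_URL = "https://api.x.ai/v1/models"
api_key = os.environ["XAI_API_KEY"]  # read from the environment, never hard-coded

request = urllib.request.Request(
    API_URL, headers={"Authorization": f"Bearer {api_key}"}
)
with urllib.request.urlopen(request) as response:
    payload = json.load(response)

# Print every model identifier the key is entitled to call.
for model in payload.get("data", []):
    print(model.get("id"))
```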

Despite GitGuardian alerting the employee directly in March, xAI revoked access only after a second notification reached its security team in late April.

The delayed response suggests insufficient internal monitoring for credential leaks.
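
One mitigation for slow detection is polling GitHub’s own secret-scanning alerts rather than waiting on outside researchers. A minimal sketch using GitHub’s documented REST endpoint follows; note that organization-level alerts cover only repositories the organization owns, and the xAI key reportedly sat in an employee’s personal repository, which is exactly the gap third-party scanners fill. The “xai-org” name is a placeholder.

```python
import json
import os
import urllib.request

# Poll GitHub's secret-scanning alerts so a leaked credential surfaces
# internally instead of via an outside researcher. "xai-org" is a
# placeholder; the token needs the security_events scope.
ORG = "xai-org"
token = os.environ["GITHUB_TOKEN"]
url = f"https://api.github.com/orgs/{ORG}/secret-scanning/alerts?state=open"

request = urllib.request.Request(url, headers={
    "Authorization": f"Bearer {token}",
    "Accept": "application/vnd.github+json",
})
with urllib.request.urlopen(request) as response:
    alerts = json.load(response)

for alert in alerts:
    print(alert["created_at"], alert["secret_type"], alert["html_url"])
```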

Security Risks and Exploitation Pathways

The exposed API key created multiple attack vectors for malicious actors.

Eric Fourrier of GitGuardian warned that attackers could exploit the LLMs via prompt injection, manipulating model outputs to extract training data or inject malicious code.

For example, an adversary might craft queries to retrieve sensitive SpaceX rocket designs or Tesla Autopilot algorithms embedded in the models’ training datasets.

Additionally, compromised models could enable supply chain attacks if manipulated outputs were integrated into xAI’s development pipelines.

Carole Winqwist, GitGuardian’s CMO, emphasized that live API access bypasses standard cybersecurity defenses like firewalls, allowing direct interaction with backend systems.

This vulnerability is exacerbated by xAI’s reliance on long-lived credentials rather than short-term tokens, which security best practices recommend for API access.

Notably, the leaked key’s two-month validity window provided ample time for reconnaissance and lateral movement within xAI’s infrastructure.
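
The short-lived-token pattern the experts allude to is straightforward: tokens carry an expiry and are signed server-side, so a leaked token dies on its own within minutes rather than months. Below is a minimal sketch; the names, signing key, and 15-minute TTL are illustrative, not xAI’s actual scheme.

```python
import base64
import hashlib
import hmac
import json
import time

# A minimal sketch of short-lived, HMAC-signed API tokens, the pattern
# best practice favors over static long-lived keys. Names, signing key,
# and TTL are illustrative, not xAI's actual scheme.
SIGNING_KEY = b"held-only-by-the-token-service"
TTL_SECONDS = 900  # a 15-minute lifetime caps the damage window of a leak

def issue_token(subject: str) -> str:
    """Mint a token that self-expires after TTL_SECONDS."""
    payload = json.dumps({"sub": subject, "exp": int(time.time()) + TTL_SECONDS})
    sig = hmac.new(SIGNING_KEY, payload.encode(), hashlib.sha256).hexdigest()
    return base64.urlsafe_b64encode(f"{payload}.{sig}".encode()).decode()

def verify_token(token: str) -> bool:
    """Reject forged signatures and anything past its expiry."""
    payload, _, sig = base64.urlsafe_b64decode(token).decode().rpartition(".")
    expected = hmac.new(SIGNING_KEY, payload.encode(), hashlib.sha256).hexdigest()
    if not hmac.compare_digest(sig, expected):
        return False  # forged or corrupted token
    return json.loads(payload)["exp"] > time.time()  # reject expired tokens

token = issue_token("ci-pipeline")
print(verify_token(token))  # True now; False once the TTL elapses
```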

Broader Implications for AI and Government Integration

The incident coincides with revelations about Musk’s Department of Government Efficiency (DOGE) deploying AI tools like the GSAi chatbot to analyze federal agency data.

The Washington Post reported in February 2025 that DOGE had fed Education Department records into AI systems to audit spending, with plans to expand across government.

Similarly, Reuters documented DOGE’s use of Grok to monitor federal communications for “hostility” to Trump-era policies.

These integrations create systemic risks, as LLMs trained on sensitive data become high-value targets.

Caturegli noted that while the GitHub leak didn’t expose government records, it demonstrated how poor credential hygiene could compromise AI systems handling classified information.

The delayed mitigation also raises questions about xAI’s preparedness for securing AI deployments in regulated sectors like aerospace and federal contracting.

Security experts argue the incident underscores the need for zero-trust architectures in AI development, including runtime API monitoring and automated secret rotation.
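
Automated rotation in particular is easy to sketch: every credential carries an issued-at timestamp and a hard maximum age, and a scheduled job revokes and reissues anything stale. The in-memory “vault” and the 7-day policy below are illustrative stand-ins for a real secrets manager.

```python
import datetime
import secrets

# A sketch of scheduled secret rotation: keys carry an issued-at
# timestamp and a hard maximum age, and a recurring job revokes and
# reissues anything stale. The in-memory "vault" and 7-day policy
# stand in for a real secrets manager.
MAX_AGE = datetime.timedelta(days=7)
vault: dict[str, datetime.datetime] = {}  # key -> issued-at timestamp

def _now() -> datetime.datetime:
    return datetime.datetime.now(datetime.timezone.utc)

def issue_key() -> str:
    key = "key-" + secrets.token_hex(16)
    vault[key] = _now()
    return key

def rotate_stale_keys() -> None:
    """Revoke and replace every key older than MAX_AGE."""
    for key, issued in list(vault.items()):
        if _now() - issued > MAX_AGE:
            del vault[key]  # revocation: the old key stops working immediately
            print(f"rotated {key[:12]}... -> {issue_key()[:12]}...")

issue_key()
rotate_stale_keys()  # run from a scheduler (cron, CI, a cloud function, etc.)
```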

With Musk’s enterprises increasingly bridging corporate and government AI applications, robust safeguards against credential leaks will be critical to preventing downstream breaches of both proprietary and public-sector data.

Mayura Kathir is a cybersecurity reporter at GBHackers News, covering daily incidents including data breaches, malware attacks, cybercrime, vulnerabilities, zero-day exploits, and more.
