Monday, May 5, 2025
HomeAIMicrosoft Offers $30,000 Bounties for AI Security Flaws

Microsoft Offers $30,000 Bounties for AI Security Flaws

Published on

SIEM as a Service

Follow Us on Google News

Microsoft has launched a new bounty program that offers up to $30,000 to security researchers who discover vulnerabilities in its AI and machine learning (AI/ML) technologies.

This initiative, announced by the Microsoft Security Response Center (MSRC), aims to encourage responsible disclosure of flaws that could pose serious risks to users and organizations relying on Microsoft’s AI-driven products.

Microsoft’s latest bug bounty reflects its ongoing commitment to protect customers from potential exploitation that could arise from vulnerabilities in both traditional and AI-powered products.

- Advertisement - Google News

“We’re dedicated to working transparently with customers and security researchers in identifying and fixing AI vulnerabilities,” a Microsoft spokesperson said.

The company has outlined a detailed severity classification for common AI/ML vulnerability types, emphasizing the importance of both the technical impact and the ease of exploitation in its triage process.

Understanding AI/ML Security Flaws

Security flaws in AI systems can have far-reaching consequences, from data breaches to manipulated decision-making.

Microsoft’s classification divides these vulnerabilities into two primary categories: Inference Manipulation and Model Manipulation. The following table summarizes Microsoft’s approach:

VulnerabilityDescriptionImpactSeverity
Prompt InjectionInjecting instructions to make the model produce unintended outputs.Data exfiltration or privileged actions (zero-click)Critical
Prompt InjectionSame as above, but requiring some user interaction (one or more clicks).Data exfiltration or privileged actions (user interaction needed)Important
Input PerturbationModifying valid inputs (adversarial examples) to get incorrect model outputs.Data exfiltration or privileged actions (zero-click)Critical
Input PerturbationSame as above, but requiring some user interaction.Data exfiltration or privileged actions (user interaction needed)Important
Model/Data PoisoningManipulating model during training (e.g., altering data or architecture).Used in decision-making affecting other users or public contentCritical

The new bounty, reaching up to $30,000 for the most severe flaws, is one of the highest rewards currently offered for AI security findings.

Microsoft hopes this will foster collaboration between the company and the broader security research community, ensuring rapid identification and resolution of critical threats.

Security researchers interested in participating can review Microsoft’s guidelines and submit their findings through the MSRC portal.

The bounty covers a range of AI vulnerabilities, from prompt injections in language models to data poisoning attacks during machine learning model training.

As AI becomes increasingly central to business and daily life, Microsoft’s proactive approach sets a new standard for industry responsibility.

With this robust bounty program, the company is not only enhancing the security of its own platforms but also contributing to a safer AI ecosystem for everyone.

Find this News Interesting! Follow us on Google NewsLinkedIn, & X to Get Instant Updates!

Divya
Divya
Divya is a Senior Journalist at GBhackers covering Cyber Attacks, Threats, Breaches, Vulnerabilities and other happenings in the cyber world.

Latest articles

Claude AI Abused in Influence-as-a-Service Operations and Campaigns

Claude AI, developed by Anthropic, has been exploited by malicious actors in a range...

Threat Actors Attacking U.S. Citizens Via Social Engineering Attack

As Tax Day on April 15 approaches, a alarming cybersecurity threat has emerged targeting...

TerraStealer Strikes: Browser Credential & Sensitive‑Data Heists on the Rise

Insikt Group has uncovered two new malware families, TerraStealerV2 and TerraLogger, attributed to the...

MintsLoader Malware Uses Sandbox and Virtual Machine Evasion Techniques

MintsLoader, a malicious loader first observed in 2024, has emerged as a formidable tool...

Resilience at Scale

Why Application Security is Non-Negotiable

The resilience of your digital infrastructure directly impacts your ability to scale. And yet, application security remains a critical weak link for most organizations.

Application Security is no longer just a defensive play—it’s the cornerstone of cyber resilience and sustainable growth. In this webinar, Karthik Krishnamoorthy (CTO of Indusface) and Phani Deepak Akella (VP of Marketing – Indusface), will share how AI-powered application security can help organizations build resilience by

Discussion points


Protecting at internet scale using AI and behavioral-based DDoS & bot mitigation.
Autonomously discovering external assets and remediating vulnerabilities within 72 hours, enabling secure, confident scaling.
Ensuring 100% application availability through platforms architected for failure resilience.
Eliminating silos with real-time correlation between attack surface and active threats for rapid, accurate mitigation

More like this

Claude AI Abused in Influence-as-a-Service Operations and Campaigns

Claude AI, developed by Anthropic, has been exploited by malicious actors in a range...

Threat Actors Attacking U.S. Citizens Via Social Engineering Attack

As Tax Day on April 15 approaches, a alarming cybersecurity threat has emerged targeting...

TerraStealer Strikes: Browser Credential & Sensitive‑Data Heists on the Rise

Insikt Group has uncovered two new malware families, TerraStealerV2 and TerraLogger, attributed to the...