AI-generated bogus vulnerability reports, or “AI slop,” are flooding bug bounty platforms, a worrying trend for the cybersecurity community.
These fraudulent submissions, crafted by large language models (LLMs), mimic technical jargon convincingly enough to pass initial scrutiny but crumble under expert analysis due to fabricated details and nonexistent code references.
Exploiting Bug Bounty Systems
A recent incident involving the curl project, reported by security researcher Harry Sintonen, exemplifies this issue.
The curl team received a submission via HackerOne (report H1 #3125832) that cited imaginary functions, proposed unverifiable patches, and described irreproducible vulnerabilities.

Despite its polished language, the report was quickly identified as AI-generated nonsense by the technically adept curl maintainers, who dismissed it as a scam orchestrated by an actor linked to the @evilginx account.
Structural Weaknesses in Triage and Trust
The success of such fraudulent reports hinges on systemic vulnerabilities within bug bounty programs, particularly at under-resourced organizations lacking the expertise to thoroughly vet submissions.
Sintonen noted that many companies opt to pay bounties rather than invest in subject matter experts, fearing delays or negative publicity.
According to a report from Socket, this creates an exploit opportunity for bad actors who rely on vague reports and fabricated details, such as nonexistent commit hashes or invented functionality, to extract payouts.
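Fabricated details of this kind can often be caught with a quick mechanical check before anyone invests serious review time. The following is a minimal, hypothetical Python sketch of such a triage helper; the commit hash and function name it checks are illustrative placeholders, not details from any actual report, and it assumes git is available and the script is pointed at a checkout of the project.

```python
# Hypothetical triage helper: verify that details cited in a bug report
# actually exist in the project's repository before spending review time.
import subprocess


def commit_exists(repo_path: str, commit_hash: str) -> bool:
    """Return True if the cited commit hash resolves to a real git object."""
    result = subprocess.run(
        ["git", "-C", repo_path, "cat-file", "-e", commit_hash],
        capture_output=True,
    )
    return result.returncode == 0


def symbol_exists(repo_path: str, symbol: str) -> bool:
    """Return True if the cited function or identifier appears in tracked files."""
    result = subprocess.run(
        ["git", "-C", repo_path, "grep", "-q", symbol],
        capture_output=True,
    )
    return result.returncode == 0


if __name__ == "__main__":
    repo = "."  # path to the project checkout (placeholder)
    claims = [
        ("commit deadbeefcafe", commit_exists(repo, "deadbeefcafe")),
        ("function totally_real_parser_init", symbol_exists(repo, "totally_real_parser_init")),
    ]
    for claim, found in claims:
        print(f"{claim}: {'found' if found else 'NOT FOUND - possibly fabricated'}")
```

A check like this does not prove a report is fraudulent, but a submission whose cited commits and functions simply do not exist in the tree is a strong signal that it deserves skepticism rather than a payout.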
The curl case stands out as an exception due to the project’s deep technical expertise and lack of budgetary pressure to approve dubious claims, but not all projects are so fortunate.
Benjamin Piouffle from Open Collective and Seth Larson from the Python Software Foundation echoed similar concerns, reporting a surge in AI-generated garbage flooding their inboxes.
Larson highlighted how urllib3 received a baseless report flagging a non-issue with SSLv2, underscoring how even minimal critical thinking is often absent in these submissions, yet they still consume valuable triage time.
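For illustration, a check along these lines shows why an SSLv2 claim collapses almost immediately. This is a minimal sketch, not the urllib3 maintainers' actual triage process: current CPython builds link against OpenSSL versions that dropped SSLv2 entirely, so the protocol is not even exposed by the standard ssl module that urllib3 relies on.

```python
# Minimal sketch: on modern Python builds, SSLv2 support is not even compiled in,
# because the underlying OpenSSL removed the protocol years ago.
import ssl


def sslv2_available() -> bool:
    # ssl.PROTOCOL_SSLv2 only exists when the interpreter was built against an
    # ancient OpenSSL with SSLv2 support; on current builds the attribute is absent.
    return hasattr(ssl, "PROTOCOL_SSLv2")


if __name__ == "__main__":
    print("OpenSSL in use:", ssl.OPENSSL_VERSION)
    print("SSLv2 exposed by this Python build:", sslv2_available())
```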
The implications of this trend are dire, as AI tools become more sophisticated and accessible, lowering the barrier for grifters to exploit bug bounty systems.
The real cost lies in the erosion of trust and the diversion of limited resources from addressing genuine vulnerabilities.
Sintonen warns that this could ultimately dismantle the bug bounty model, predicting that legitimate researchers may abandon the field in frustration as AI slop garners unearned rewards, while organizations might withdraw from programs overwhelmed by fraudulent reports.
Criticism has also been directed at platforms like HackerOne for not decisively banning repeat offenders, with community voices like Joe Cooper questioning the platform’s role in protecting its credibility.
Although submitting fake reports can damage a researcher’s reputation on HackerOne, mechanisms like self-closing reports as “Not Applicable” (as the curl report submitter did) allow bad actors to evade penalties.
As AI-driven deception grows, the security ecosystem faces a critical juncture: platforms and organizations must prioritize stringent validation, researcher verification, and investment in expertise to preserve the integrity of bug bounty programs and protect maintainers from this rising tide of digital deception.