
AI Slop Is Coming for Security, And We Might Be Fighting Garbage Instead of What Matters

AI tools promise to supercharge security, but they’re also flooding teams with noise. From curl ending its bug bounty to NVD backlogs growing, AI slop is forcing cybersecurity teams to spend more time filtering garbage than securing what matters.

In recent weeks, a story from the open-source world has crystallized a growing concern: the very tools that promise to supercharge work, and security work in particular, are also threatening to drown it in noise.

In January 2026, the maintainer of curl, one of the Internet’s foundational command-line tools, announced the end of its bug bounty program. Not because vulnerabilities dried up, but because low-quality, AI-generated reports, the now-infamous “AI slop,” completely overwhelmed the team. According to reporting from BleepingComputer, many of these submissions sounded plausible on the surface but contained no real issues. The volume alone strained a small volunteer group to the point where continuing to offer monetary rewards became untenable.

When AI-assisted submissions outnumber substantive ones, the cost isn’t just time; it’s burnout, distraction, and lost focus on real security work.

Around the same time, we’re seeing federally supported vulnerability infrastructure, like NIST’s National Vulnerability Database (NVD), openly reevaluate its role amid a growing backlog of CVEs. The reality is that the system can’t keep up with the sheer volume of submissions that require meaningful human analysis. There’s clearly more to it than just volume or AI slop, but we won’t dive into politics here. NIST has acknowledged that scaling manual review has practical limits and is exploring how responsibility might be shifted elsewhere. The EU’s move to stand up its own vulnerability database is, at least in part, a response to these same pressures. It’s becoming clear that we need better ways to keep sources of truth trustworthy as pressure and complexity grow across the security industry, and as expectations continue to scale faster than teams can keep up.

Take a look at the Cybersecurity Dive article for more on NVD.

So What Is AI Slop?

AI slop isn’t just a catchy term, sadly. It describes a very real phenomenon: high-volume, low-value output produced or amplified by generative AI. On the surface, it often looks like valid or creative work. On closer inspection, it:

  • mimics structure without accurate substance,
  • repackages noise as findings, and
  • creates duplicates or false flags that burn analyst time.

In security, this shows up as bogus vulnerability reports, duplicated or trivial CVEs, and superficial analyses that add little insight but demand triage, validation, and correction – time we typically don’t have. Personally, I probably receive three to four AI-slop vulnerability findings a week, often dressed-up versions of what used to be simple templates passed around between individuals hoping to score a dollar or two.
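For the curious, here’s a toy sketch of the kind of cheap pre-triage filter I mean: flag submissions that read like near-verbatim copies of known slop templates. The template strings, similarity threshold, and normalization here are all assumptions for illustration, not a production detector.

```python
# Toy sketch: flag bug-bounty submissions that closely match known
# AI-slop templates before they reach a human. The templates and the
# threshold are illustrative assumptions, not real data.
from difflib import SequenceMatcher

# Hypothetical examples of template-like slop openers
KNOWN_SLOP_TEMPLATES = [
    "i found a critical buffer overflow in your product that allows remote code execution",
    "your application is vulnerable to a serious security issue please assign a cve and a bounty",
]

SIMILARITY_THRESHOLD = 0.85  # assumed cutoff; tune against real submissions


def looks_like_slop(report_text: str) -> bool:
    """Return True if the report nearly matches a known template."""
    normalized = " ".join(report_text.lower().split())
    return any(
        SequenceMatcher(None, normalized, template).ratio() >= SIMILARITY_THRESHOLD
        for template in KNOWN_SLOP_TEMPLATES
    )


if __name__ == "__main__":
    sample = "I found a CRITICAL buffer overflow in your product that allows remote code execution!"
    print(looks_like_slop(sample))  # True: a dressed-up copy of template one
```

It won’t catch anything novel, but that’s part of the point: most slop isn’t novel.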

The real question in this AI-human race is whether AI-generated content is actually saving human effort or quietly creating more work downstream. I’m honestly not sure yet. I still appreciate my AI copilots, but the balance feels fragile.

The Real Problem Isn’t AI

AI isn’t inherently bad for work in general, or for security work in particular. In many cases, my own included, AI assists:

  • in triage,
  • in coding up quick scripts for enrichment (see the sketch below),
  • in recommending relevant threats and intelligence, and
  • in augmenting my own insights with additional context.
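To make that second point concrete, here’s a minimal sketch of the kind of quick enrichment script I mean: given a CVE ID, pull its description and CVSS score from NIST’s NVD 2.0 API. The JSON field paths reflect the public API as I understand it; verify them against the current NVD docs (and add an API key for any real volume) before leaning on this.

```python
# Minimal sketch: enrich a CVE ID with its description and CVSS score
# via NIST's NVD 2.0 API. Unauthenticated requests are heavily
# rate-limited; field paths should be checked against current NVD docs.
import requests

NVD_URL = "https://services.nvd.nist.gov/rest/json/cves/2.0"


def enrich_cve(cve_id: str, timeout: int = 10) -> dict:
    """Fetch one CVE record from NVD and return a small summary."""
    resp = requests.get(NVD_URL, params={"cveId": cve_id}, timeout=timeout)
    resp.raise_for_status()
    vulns = resp.json().get("vulnerabilities", [])
    if not vulns:
        return {"id": cve_id, "found": False}
    cve = vulns[0]["cve"]
    description = next(
        (d["value"] for d in cve.get("descriptions", []) if d["lang"] == "en"), ""
    )
    v31 = cve.get("metrics", {}).get("cvssMetricV31", [])
    score = v31[0]["cvssData"]["baseScore"] if v31 else None
    return {"id": cve_id, "found": True, "cvss": score, "summary": description[:200]}


if __name__ == "__main__":
    print(enrich_cve("CVE-2021-44228"))  # Log4Shell, as a well-known example
```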

The curl case feels less like a failure of AI and more like an acceleration of something that’s always existed: incentive abuse. The difference now is scale. What used to be infrequent or easier to spot has become a constant pipeline of reports that aren’t real threats, and that pipeline does real damage to the humans trying to make our world safer.

Security Pressures + Extra Effort = A Dangerous Combination

All this comes at a time when many technical and security teams are facing headcount pressures or simply struggling to keep up with an ever-changing landscape. Technical analysts are asked to do more with less, while tooling creates duplicate streams of semi-automated output that must be triaged, prioritized, or rejected.

Couple that with AI being pushed across most organizations as the all-powerful coworker that must be used at all costs. The issue, though, is that we still haven’t reached the point where AI output can be trusted outright, so as AI scales, and as we keep humans in the loop, the burden of filtering meaningless output scales right along with it.

What Matters

I'm not sure of the solution here yet, but if we want security teams to focus on meaningful work and responsibly leverage AI, we need tooling and processes that prioritize:

  1. better AI-driven signal-to-noise ratios: not more data, but clearer indicators of false positives and noise,
  2. human-centered prioritization frameworks that explicitly distinguish AI-generated output from validated risk, and
  3. AI-assisted evaluation of real exploitability and impact, instead of forcing humans to respond to every alert or report.

None of these have clean answers today, especially in the face of noisy AI slop.
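Still, as a rough illustration of points 2 and 3, here’s a sketch of a scoring scheme that lets evidence of real exploitation (think CISA’s KEV catalog) and exploit prediction (think FIRST’s EPSS) drive priority, while unvalidated AI-generated findings wait their turn. The weights and fields are assumptions for the sketch, not any kind of standard.

```python
# Sketch: rank findings so validated, exploitable issues outrank
# unvalidated AI-generated ones. Weights are illustrative assumptions.
from dataclasses import dataclass


@dataclass
class Finding:
    cve_id: str
    epss: float            # exploit-prediction score, 0..1 (e.g., FIRST EPSS)
    in_kev: bool           # listed in CISA's Known Exploited Vulnerabilities
    ai_generated: bool     # produced by an AI tool rather than an analyst
    human_validated: bool  # a person has confirmed the issue is real


def priority(f: Finding) -> float:
    score = f.epss
    if f.in_kev:
        score += 1.0       # known exploitation outweighs any prediction
    if f.ai_generated and not f.human_validated:
        score *= 0.25      # unvalidated AI output waits its turn
    return score


findings = [
    Finding("CVE-2021-44228", epss=0.97, in_kev=True,
            ai_generated=False, human_validated=True),   # Log4Shell
    Finding("CVE-0000-0000", epss=0.40, in_kev=False,
            ai_generated=True, human_validated=False),   # hypothetical AI finding
]
for f in sorted(findings, key=priority, reverse=True):
    print(f.cve_id, round(priority(f), 2))
```

The exact numbers don’t matter; the design choice does: confirmed exploitation and human validation should always beat raw volume.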

Final Thought

The curl bug bounty shutdown isn’t an isolated quirk; I believe it’s an early warning. If we don’t sharpen what we ask AI tools to produce, and if we don’t design systems that reward meaningful contributions over sloppy, superficial volume, we’re going to spend our days filtering garbage, not securing systems.

That said…please don’t take away my AI-slop cat-driving-and-shooting videos. Some slop still sparks a laugh.