Anthropic’s Mythos announcement on April 7, 2026 suggests that AI is rapidly driving down the cost of finding potential software vulnerabilities, but that doesn’t mean the economics of security are improving. In practice, organizations already struggle to justify paying even modest amounts for bug discovery, while attackers only need a single exploitable flaw to win. This creates a paradox: as AI makes vulnerability discovery cheaper and more abundant, the real bottleneck, and the real value, shifts to certainty.

There’s a growing narrative that AI has fundamentally changed the economics of vulnerability discovery.

Models like Mythos can now surface thousands of potential vulnerabilities, generate exploits, and do in hours what would take human experts days or weeks. The conclusion many draw is simple: bug finding is now cheap.

But that conclusion doesn’t quite hold up when you look at how the market actually behaves.

Today, even relatively low-cost tools struggle. Convincing organizations to pay for capabilities like BinLens is sometimes non-trivial. The reason is straightforward: most companies don’t perceive enough direct ROI in “finding bugs,” especially when many of those bugs are low severity, hard to exploit, or never acted upon.

Open source makes this even clearer. Projects like OpenBSD don’t have buyers lined up to pay for vulnerability discovery. No one was spending $20,000 to find bugs there before. So the natural question is: why would they start now just because AI can do it?

At first glance, that suggests the economics haven’t really changed.

But that framing misses the shift.

AI does not fix the monetization problem of vulnerability discovery. If anything, it exposes it. We’re now able to generate large volumes of potential findings at low marginal cost, but there is still no corresponding increase in willingness to pay for those findings.

What has changed is who benefits.

Attackers don’t need to justify ROI. They don’t need a clean signal. They don’t need thousands of real vulnerabilities. They need one. As the cost and skill barrier to discovery drops, the probability of finding a single exploitable issue increases—and that’s enough.

This creates a structural asymmetry. Defenders still operate under budget constraints and ROI calculations. Attackers operate under a payoff asymmetry: one success can pay for thousands of failed attempts. AI widens that gap.

So the real impact of AI is not that organizations will suddenly start paying $20,000 to find bugs (a figure often cited in early examples).

It’s that they will be forced to respond to an environment where bugs are found anyway.

And this is where the narrative breaks down.

More findings do not translate into more security. In fact, the opposite is often true. Security teams are already overwhelmed. Adding thousands of AI-generated findings—many of which are unverified, non-exploitable, or duplicates—does not improve outcomes. It increases noise.
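
To make the noise problem concrete, here is a minimal triage sketch. The finding schema and the sample reports are hypothetical; the point is how much of a raw AI-generated list evaporates once you deduplicate by crash site and drop unverified low-severity reports.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Finding:
    tool: str          # which scanner or model reported it
    file: str          # source or binary location
    crash_site: str    # function or address where the issue manifests
    severity: str      # reported severity: "low" | "medium" | "high"
    verified: bool     # has a harness or human confirmed it?

def triage(findings: list[Finding]) -> list[Finding]:
    """Collapse duplicates and drop unverified low-severity noise."""
    unique: dict[tuple[str, str], Finding] = {}
    for f in findings:
        key = (f.file, f.crash_site)  # two reports at the same site are one bug
        # Prefer a verified report over an unverified duplicate.
        if key not in unique or (f.verified and not unique[key].verified):
            unique[key] = f
    # Only verified or high-severity findings are worth an analyst's time.
    return [f for f in unique.values() if f.verified or f.severity == "high"]

# Three raw reports collapse to one actionable item.
raw = [
    Finding("model-a", "parse.c", "parse_header", "high", False),
    Finding("model-b", "parse.c", "parse_header", "medium", True),  # duplicate, verified
    Finding("model-a", "util.c", "log_msg", "low", False),          # unverified noise
]
print(triage(raw))
```

Even this toy filter discards two thirds of its input. At the scale of thousands of findings, the filtering policy, not the finding, is where the engineering effort goes.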

The Cybersecurity Bottleneck

The bottleneck is no longer discovery.

It is validation.

Which of these issues are real?
Which are actually exploitable?
Which matter in the context of the system?

These are the questions that determine risk. And they are exactly the questions that large-scale, probabilistic AI systems struggle to answer with certainty.

This is the economic shift.

Bug finding becomes cheap, abundant, and increasingly commoditized. But certainty—deterministic, reproducible, explainable proof that a vulnerability is real and exploitable—becomes the scarce resource.
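
What certainty looks like in practice is often mundane: a reproduction harness. Here is a minimal sketch, assuming a target binary that takes an input file as its argument and a proof-of-concept input for one reported finding (both paths are hypothetical). It confirms a vulnerability only if the crash reproduces deterministically.

```python
import subprocess

def reproduces(target: str, poc_path: str, runs: int = 5, timeout: int = 10) -> bool:
    """Return True only if the PoC input crashes the target on every run."""
    for _ in range(runs):
        try:
            result = subprocess.run(
                [target, poc_path],
                capture_output=True,
                timeout=timeout,
            )
        except subprocess.TimeoutExpired:
            return False  # a hang is a different bug class; don't conflate it
        # On POSIX, a negative return code means the process died on a signal
        # (e.g., -11 for SIGSEGV), which is the crash evidence we want.
        if result.returncode >= 0:
            return False
    return True

# Hypothetical usage: validate one AI-reported finding.
if reproduces("./build/parser", "findings/poc_0042.bin"):
    print("confirmed: deterministic crash")
else:
    print("not reproducible: deprioritize")
```

Nothing in this harness is sophisticated. What it produces is: a deterministic, repeatable crash is evidence someone will pay for, while a report that cannot pass this bar is just another line in the noise.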

And scarcity is what drives value.

In that world, the winning approach is not to find more bugs, but to eliminate uncertainty.

That is where the real value lies.

Because when anyone can generate thousands of potential vulnerabilities, the only thing worth paying for is knowing which ones are real. That is a fundamentally different problem than discovery.

Low-Hanging Fruit: Cyber Cost Savings

There is also a more immediate and practical implication.

If the goal is cost reduction in cybersecurity, the most obvious target is not vulnerability discovery at all. A large portion of enterprise security budgets is spent on GRC: compliance, documentation, control mapping, and reporting. This is exactly the kind of work LLMs are already very good at: text-heavy, repetitive, and rule-based. In practice, you can already see meaningful gains by using AI to streamline frameworks like CMMC or ISO 27001 without specialized tooling (see the sketch below).

By contrast, finding and validating real vulnerabilities remains a fundamentally harder problem. It requires ground truth, system-level reasoning, and proof of exploitability, all areas where probabilistic AI still struggles.

The result is a split: AI delivers fast ROI in compliance, while in vulnerability analysis the bottleneck shifts to certainty. And that is where the real security problem remains.
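
As an illustration of the GRC point, here is a minimal sketch using Anthropic’s Python SDK. The model name, prompt, and policy excerpt are placeholder assumptions, not a production workflow; the idea is simply that control mapping is a text-in, text-out task.

```python
# Hypothetical control-mapping sketch using Anthropic's Python SDK.
# pip install anthropic; requires ANTHROPIC_API_KEY in the environment.
import anthropic

client = anthropic.Anthropic()

# Placeholder policy excerpt; in practice this would come from your GRC repository.
policy_excerpt = (
    "All production database backups are encrypted at rest, and "
    "restore procedures are tested quarterly by the infrastructure team."
)

message = client.messages.create(
    model="claude-sonnet-4-20250514",  # placeholder model name; use what your account offers
    max_tokens=500,
    messages=[{
        "role": "user",
        "content": (
            "Map the following policy excerpt to the most relevant "
            "ISO/IEC 27001 Annex A controls. For each control, give the "
            "control ID, its name, and one sentence of justification.\n\n"
            f"Policy excerpt: {policy_excerpt}"
        ),
    }],
)

print(message.content[0].text)
```

The output still needs review by a compliance owner, but drafting the mapping is exactly the text-heavy, rule-based work described above, and it requires no ground truth about a running system.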