Attackers' next move
Last updated: 12 May 2026 04:14 UTC
If LLMs are about to flood the CVE pipeline, it is worth asking what the other side of the table does with that knowledge. This essay works through the question in ten charts, built from the Mythos five-year model where the claim is a projection, and from live CISA KEV / NVD data where the claim is empirical.
The starting argument
Two predictions opened this analysis. First, attackers will continue to develop exploits by reverse-engineering new vendor patches. This is the most reliable attacker workflow there is, and one that LLM-assisted defenders make more productive whether they mean to or not. Second, as some vendors use LLMs effectively to ship cleaner code, attackers will shift focus to vendors that are slower to adopt AI for vulnerability discovery.
Both predictions hold, but both understate the case. The Mythos model and the attacker-side incentive structure together produce a richer picture, with eight additional points worth making and attacker psychology baked into every one of them.
The big picture
Year 1 of LLM-assisted scanning is projected to produce somewhere between 6× and 12× the pre-LLM CVE volume, depending on the size of the latent vulnerability pool. By Year 4 the legacy pool is exhausted and CVE volume falls back to roughly 27,000 per year. But the share of those CVEs originating with attackers doubles, from a third today to two-thirds in steady state.
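The headline arithmetic can be sketched in a few lines. The ~27,000/year baseline and the 6×–12× Year-1 multiplier come from the essay; the interpolation is mine, not the Mythos model itself:

```python
# Illustrative sketch of the headline trajectory, NOT the Mythos model.
# Baseline (~27,000 CVEs/year) and the 6x-12x Year-1 multiplier are
# taken from the essay; everything else is an assumption.

BASELINE = 27_000  # approximate pre-LLM annual CVE volume

def year1_volume(multiplier: int) -> int:
    """Projected Year-1 CVE volume for a given latent-pool multiplier."""
    return BASELINE * multiplier

low, high = year1_volume(6), year1_volume(12)
print(f"Year 1 projection: {low:,} - {high:,} CVEs")  # 162,000 - 324,000

# Attacker-originated share doubles from ~1/3 today to ~2/3 in steady state.
for share, label in [(1 / 3, "today"), (2 / 3, "steady state")]:
    print(f"attacker-originated ({label}): ~{share * BASELINE:,.0f} of {BASELINE:,}")
```

The range is wide because the size of the latent vulnerability pool is the dominant unknown; the steady-state share, by contrast, depends only on the incentive argument, not the pool size.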
1 · Attacker share doubles, permanently
2 · Where attacker CVEs come from
Point 1: the patch-diff treadmill
The attacker workflow most resilient to vendor improvement is patch-diff reverse engineering. Every vendor-published CVE comes with a patch, and every patch is a labelled training example handed to the attacker side. LLMs read patches at least as fast as they help write them. The publish-to-exploit window is already collapsing. CISA KEV data shows the day-30 exploitation share rising from 8% in 2021 to 65% by 2023.
This means that more aggressive vendor LLM adoption feeds the attacker side too, not because the LLMs themselves leak, but because the structured output (CVE + patch) of LLM-assisted audit is the highest-quality input the attacker workflow has ever had.
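The day-30 figure above is a simple cohort metric: the share of a year's CVEs that land in CISA KEV within 30 days of publication. A minimal sketch, with a hypothetical record layout (the real KEV feed is JSON with different field names, and NVD supplies the publication dates):

```python
# Minimal sketch of the day-30 KEV metric. The (published, kev_added)
# pair layout and the sample dates are hypothetical illustrations.

from datetime import date

def day30_share(records) -> float:
    """Fraction of (cve_published, kev_added) pairs exploited within 30 days."""
    cohort = list(records)
    if not cohort:
        return 0.0
    fast = sum(1 for pub, kev in cohort if (kev - pub).days <= 30)
    return fast / len(cohort)

sample = [
    (date(2023, 1, 10), date(2023, 1, 18)),  # KEV-listed after 8 days
    (date(2023, 2, 1),  date(2023, 5, 20)),  # after 108 days
    (date(2023, 3, 5),  date(2023, 3, 30)),  # after 25 days
    (date(2023, 4, 2),  date(2023, 8, 1)),   # after 121 days
]
print(f"day-30 share: {day30_share(sample):.0%}")  # 50% on this toy sample
```

Run against the real KEV feed, this is the statistic that rose from 8% (2021) to 65% (2023).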
3 · The patch-diff treadmill
4 · Day-30 KEV exploitation share is climbing
The remediation bottleneck
Discovery is not the binding constraint. Veracode’s 2026 data puts mean time-to-fix at 252 days and climbing; HackerOne paused the Internet Bug Bounty in 2025 precisely because the remediation queue overflowed. Every day a published CVE sits unpatched is a day a network-reachable attacker can use it, and LLM adoption widens the queue without widening the remediation capacity feeding it.
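The queue dynamic is worth making explicit: whenever published CVEs arrive faster than remediation capacity clears them, the open backlog grows linearly and without bound. A toy model, with illustrative (not measured) rates:

```python
# Toy queue model of the remediation bottleneck. Arrival and fix rates
# below are illustrative assumptions, not measured values.

def backlog(arrivals_per_day: int, fixes_per_day: int, days: int) -> int:
    """Open CVE count after `days`, given constant arrival and fix rates."""
    open_cves = 0
    for _ in range(days):
        open_cves += arrivals_per_day
        open_cves -= min(open_cves, fixes_per_day)  # can't fix what isn't open
    return open_cves

# Pre-LLM: capacity roughly keeps pace, backlog stays flat.
print(backlog(arrivals_per_day=75, fixes_per_day=80, days=365))   # 0
# Post-LLM: 6x arrivals against the same remediation capacity.
print(backlog(arrivals_per_day=450, fixes_per_day=80, days=365))  # 135,050
```

The second case is the essay's point in miniature: LLM adoption moves the arrival rate, not the fix rate, so the excess compounds every day.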
5 · Remediation, not discovery, is the binding constraint
Point 2 (sharpened): the vendor differentiation
The original prediction was that attackers will shift focus to vendors not using AI for vulnerability discovery. The data-grounded version of this is sharper: attackers allocate different tactics to different vendor positions on the AI-adoption / product-attractiveness plane, and they do so deliberately. Going quiet is not safety; it is just a different attractor.
Vendors who scan less are not avoiding risk. They are concentrating it for their customers, and making themselves more visible to attackers who read the public signals of an under-audited product.
6 · Vendor coverage is a choice, not a capability
9 · How attackers classify vendors by LLM-adoption posture
The structural asymmetry
The reason all of this points one direction is regulatory. Vendors operate under disclosure obligations (CRA, SEC, contract clauses, bug-bounty SLAs). Attackers operate under none. Every vendor security artefact flows into a one-directional intelligence channel: CVE, advisory, silent-fix changelog, SDLC blog post, hiring page. Both sides discover faster with LLMs, but only one side is required to publish what it finds.
7 · The disclosure-obligation asymmetry
Attacker psychology: reading the vendor side of the board
Threat groups are not opportunistic scrapers; they are strategic readers of the defender ecosystem. They watch which vendors announce LLM-assisted SDLC, which vendors slow patch cadence (a signal of capacity constraint), which products go EOL (a signal of permanent unpatched-CVE supply), and which peer vendors take public ransomware hits (a signal that the attacker class targeting that product category has migrated). LLMs make this OSINT analysis free at scale.
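The signal-reading described above amounts to a weighted score over public observables. The signals, weights, and signs below are illustrative assumptions, chosen only to show the shape of the calculation:

```python
# Sketch of OSINT-based target prioritisation. Signal names, weights,
# and signs are illustrative assumptions, not observed attacker tooling.

SIGNAL_WEIGHTS = {
    "announced_llm_sdlc": -2,    # cleaner pre-release code expected
    "patch_cadence_slowed": +3,  # capacity constraint: wider exploit window
    "product_eol": +4,           # permanent unpatched-CVE supply
    "peer_ransomware_hit": +2,   # attacker class already in this category
}

def priority(observed_signals: set) -> int:
    """Sum the weights of every signal observed for a vendor."""
    return sum(SIGNAL_WEIGHTS[s] for s in observed_signals)

print(priority({"patch_cadence_slowed", "product_eol"}))  # 7: high priority
print(priority({"announced_llm_sdlc"}))                   # -2: deprioritised
```

LLMs do not change the scoring logic; they make collecting the input signals (changelogs, EOL notices, hiring pages) effectively free at scale.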
8 · How attackers read vendor moves
Conclusion
The two opening predictions hold, but they understate the case. Patch-diff exploitation is not just a continuing workflow. It is the workflow vendor LLM adoption industrialises. Targeting under-audited vendors is not just a shift in focus. It is the tactic attackers reserve for slow-adopter vendors while they simultaneously race the patch cycle of LLM-leader vendors. The long-run picture is an attacker-dominant CVE pipeline (69% by Year 5), a remediation queue that grows without bound, and a disclosure-obligation asymmetry that LLMs sharpen rather than soften.
The strategic lessons follow. Vendors gain less from LLM adoption than they expect, because the same output that protects pre-release code also fuels the attacker workflow on post-release code. Vendors who under-invest in LLM adoption are not flying below the radar. They are flagging themselves as the soft target on a quadrant attackers already maintain. And the binding constraint is exactly the part of the operation LLMs do not help with: remediation capacity.
If there is a policy direction worth pursuing, it is on the remediation side, not the discovery side. Faster patch deployment, automated rollback mechanisms, and supply-chain distribution improvements would shrink the attacker’s free-exploit window in a way that LLM-assisted scanning, on its own, cannot.
The next essay turns to the defender side: what, concretely, to do about all of this.