Project Glasswing and Mythos: The AI Security Inflection Point
Anthropic's Mythos Preview can find zero-day vulnerabilities autonomously in every major operating system and browser. Project Glasswing gives 12 critical-infrastructure partners early access to patch before similar capabilities reach attackers. The transitional window is real but finite — and what we do with it determines whether AI is a net benefit or net cost to global security.
On April 7, 2026, Anthropic made an announcement that I think will be remembered as the moment AI security stopped being theoretical. They unveiled Project Glasswing — an alliance with Amazon Web Services, Apple, Broadcom, Cisco, CrowdStrike, Google, JPMorgan Chase, the Linux Foundation, Microsoft, NVIDIA, and Palo Alto Networks. Twelve organizations representing the foundational layer of modern computing, plus more than 40 additional partners. The purpose: to give them privileged access to a new model called Claude Mythos Preview before anyone else can use it.
The reason this matters is in what Mythos Preview can do. According to Anthropic’s own technical preview, the model “is capable of identifying and then exploiting zero-day vulnerabilities in every major operating system and every major web browser when directed by a user to do so.” Over the past few weeks of internal testing, it has found thousands of zero-day vulnerabilities — flaws nobody knew existed — including a 27-year-old bug in OpenBSD, a 16-year-old bug in FFmpeg, and a 17-year-old remote code execution vulnerability in FreeBSD’s NFS implementation. Over 99% of what it has found is still unpatched.
I want to be direct about what this is. This is not a marginal improvement. This is an asymmetry shift. And how the next 18 months play out will determine whether AI ends up being a net benefit or a net cost to global software security.
What Anthropic Actually Announced
The factual record matters here, so let me lay it out before getting into opinion.
The model. Claude Mythos Preview is “a general-purpose, unreleased frontier model” that “reveals AI models have reached a level of coding capability where they can surpass all but the most skilled humans at finding and exploiting software vulnerabilities.” Anthropic’s internal benchmarks show Mythos scoring 83.1% on CyberGym vulnerability reproduction (compared to 66.6% for the publicly available Claude Opus 4.6), 77.8% on SWE-bench Pro, and 82% on Terminal-Bench 2.0.
The capability claim. Mythos can operate autonomously. From the technical preview: “Non-experts can also leverage Mythos Preview to find and exploit sophisticated vulnerabilities. Engineers at Anthropic with no formal security training have asked Mythos Preview to find remote code execution vulnerabilities overnight, and woken up the following morning to a complete, working exploit.” In one documented test, “Mythos Preview wrote a web browser exploit that chained together four vulnerabilities, writing a complex JIT heap spray that escaped both renderer and OS sandboxes.”
The validation. Anthropic had expert security contractors manually review 198 vulnerability reports generated by Mythos. Their experts agreed with Mythos’s severity assessment exactly in 89% of cases. The remaining 11% included both over-rated and under-rated bugs — meaning the model is calibrated, not just enthusiastic.
The numbers. Anthropic has identified “thousands of additional high- and critical-severity vulnerabilities that we are working on responsibly disclosing.” The company is committing $100 million in Mythos Preview usage credits across Project Glasswing partners, $2.5 million in direct donations to Alpha-Omega and OpenSSF through the Linux Foundation, and $1.5 million to the Apache Software Foundation.
The release model. Anthropic stated explicitly: “We do not plan to make Mythos Preview generally available.” Access is limited to Project Glasswing partners and open-source maintainers who apply through the Claude for Open Source program. Post-preview pricing for partners will be $25 per million input tokens and $125 per million output tokens — five times the price of Opus 4.6, signaling that even for paying enterprise customers, Anthropic intends to keep usage gated.
The acknowledgment. Anthropic was direct in the risk report and on the project page: “The same capabilities that make AI models dangerous in the wrong hands make them invaluable for finding and fixing flaws in important software.” And in the technical preview: “The advantage will belong to the side that can get the most out of these tools. In the short term, this could be attackers, if frontier labs aren’t careful about how they release these models.”
That last sentence is the entire story.
Why This Moment Is Different
I have been writing about AI governance and security for a while now, and the common pattern has been incremental improvement. Each new model is somewhat better than the last. The capabilities matter, but the gap between “this is interesting research” and “this changes how the world works” usually takes years to cross.
Mythos Preview crossed that gap in a single release.
The reason is that vulnerability research has historically been the slowest, most expensive, most expertise-bound activity in software security. A skilled security researcher might find one critical zero-day per year of focused effort on a complex codebase. The number of people in the world who can do that work at the highest level is small — measured in thousands, not millions. And defenders have always been protected in one specific way: attackers had to be experts too. The barrier to entry was the same for both sides.
Mythos collapses that barrier. An engineer with no security training can ask Mythos to find a remote code execution vulnerability and get one. A nation-state actor with unlimited resources can run Mythos against every piece of critical infrastructure software simultaneously. A teenager with an API key can do things that would have required a team of senior researchers six months ago.
The implication is straightforward: the cost of finding zero-day vulnerabilities is collapsing toward zero, and the speed of finding them is approaching wall-clock time. Whatever you imagined the timeline was for AI-powered vulnerability discovery becoming a serious problem — it just got compressed.
What Anthropic Got Right
I want to give credit where it is due, because the response so far has actually been thoughtful in ways that frontier AI labs are not always thoughtful.
They did not release it. This is the most important thing. Mythos Preview is not on the Claude API. You cannot get to it from the Claude.ai web interface. There is no waitlist for individual developers. Anthropic explicitly stated they do not plan to make it generally available. In an industry that has trained itself to ship aggressively and apologize later, choosing not to ship is a meaningful choice.
They gave defenders a head start. The point of Project Glasswing is that the organizations whose software the entire internet depends on — operating systems, cloud infrastructure, browsers, network gear — get access first. They get to find their own bugs, fix them, and patch them before the same capability becomes available to attackers. This is the right shape. If you cannot prevent dangerous capabilities from existing, you give the defenders a window to use them first.
They put real money on the table for open source. The $4 million in direct donations to Alpha-Omega, OpenSSF, and the Apache Software Foundation is small relative to Anthropic’s revenue but meaningful relative to those organizations’ budgets. Open-source maintainers — the unpaid people maintaining the software the global economy runs on — historically get nothing when commercial AI labs profit from their code. This time they got something.
They committed publicly to transparency. Anthropic published the Mythos Preview technical write-up, the alignment risk report, and the Project Glasswing announcement on the same day. They documented the capability in detail so the security community could evaluate whether they were under-stating or over-stating the threat. They used cryptographic SHA-3 commitments to prove vulnerability findings existed at time of writing without disclosing the unpatched bugs themselves. This is the right operational discipline.
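A hash commitment of this kind is straightforward to sketch. The snippet below is an illustrative sketch, not Anthropic's actual scheme: it commits to a finding by publishing the SHA-3 digest of the report concatenated with a random salt, then reveals both later to prove the report existed at commitment time. The report text and helper names are made up for illustration.

```python
import hashlib
import secrets

def commit(report: bytes) -> tuple[str, bytes]:
    """Commit to a report without revealing it.

    Returns (digest_hex, salt). Publish the digest now; keep the
    report and the salt private until disclosure time.
    """
    salt = secrets.token_bytes(32)  # blinds the commitment against guessing
    digest = hashlib.sha3_256(salt + report).hexdigest()
    return digest, salt

def verify(report: bytes, salt: bytes, digest_hex: str) -> bool:
    """Check that a revealed report matches the earlier commitment."""
    return hashlib.sha3_256(salt + report).hexdigest() == digest_hex

# Example: commit at time of writing, reveal after the patch ships.
finding = b"pending: heap overflow in example_parser v2.3"  # hypothetical
digest, salt = commit(finding)
assert verify(finding, salt, digest)          # honest reveal passes
assert not verify(b"tampered", salt, digest)  # altered report fails
```

The salt matters: without it, anyone could brute-force candidate reports against the published digest, which would partially defeat the point of keeping the unpatched bugs secret.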
They acknowledged the asymmetry honestly. “The advantage will belong to the side that can get the most out of these tools. In the short term, this could be attackers.” That sentence is in Anthropic’s own publication. They are not pretending this is unambiguously positive. They are saying it directly: this is dangerous, we are doing what we can, we may not get it right.
What Worries Me
Now the harder part.
The release strategy buys time, not safety. Project Glasswing creates a window where defenders have access and attackers do not. But the window closes the moment a model with similar capabilities becomes broadly available — from any lab, anywhere in the world. Anthropic knows this. They wrote in the technical preview that they “aim to enable defenders to begin securing the most important systems before models with similar capabilities become broadly available.” The strategy is explicitly transitional. The question nobody can answer right now is how long the transition lasts. Six months? Eighteen months? Three years? Whatever the answer, the world’s critical software needs to be measurably more secure by the end of it, or the asymmetry inverts permanently.
Twelve launch partners and 40-plus additional organizations are not enough. The Project Glasswing partner list is impressive, but it represents the organizations that already have the resources and security maturity to use Mythos well. Apple, Google, Microsoft, AWS — these companies were already going to be fine. The software that will be most exposed when Mythos-equivalent capabilities leak or are independently developed is the software at the long tail: open-source projects with one maintainer, vendor systems running unmaintained code, embedded firmware in industrial equipment, the millions of lines of code holding small businesses and municipal governments together. None of those have a seat at this table.
The economic incentives may not align. Mythos pricing for partners post-preview is $25/$125 per million tokens — five times the cost of Opus 4.6. That pricing signals scarcity, which is appropriate for safety reasons, but it also means defensive use of Mythos is expensive. Attackers who eventually get access to similar capabilities through other models, jailbreaks, or foreign labs will not be paying those rates. The cost asymmetry pushes against defenders.
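The cost pressure is easy to make concrete. Using the quoted $25/$125-per-million-token rates, and some purely illustrative token volumes (the codebase size and output figures below are made-up assumptions, not Anthropic numbers), a back-of-the-envelope audit estimate looks like:

```python
# Back-of-the-envelope cost of a defensive audit at the quoted
# Mythos partner rates. Token volumes are illustrative assumptions.
INPUT_RATE = 25 / 1_000_000    # dollars per input token
OUTPUT_RATE = 125 / 1_000_000  # dollars per output token

def audit_cost(input_tokens: int, output_tokens: int) -> float:
    """Total spend for one audit run at the partner rates."""
    return input_tokens * INPUT_RATE + output_tokens * OUTPUT_RATE

# Hypothetical: a large codebase read several times over (~300M input
# tokens) producing ~20M tokens of analysis and vulnerability reports.
cost = audit_cost(300_000_000, 20_000_000)
print(f"${cost:,.0f}")  # prints $10,000
```

Ten thousand dollars per deep audit pass is trivial for a Glasswing partner and prohibitive for a one-person open-source project, which is exactly the long-tail exposure described above.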
“Limited availability” has a track record of not staying limited. Every previous case of a frontier AI capability being held back has eventually leaked, been replicated, or become commodity. The Claude Code source code was accidentally leaked in March, just two weeks before the Glasswing announcement. Models get extracted. Weights get exfiltrated. Open-source equivalents get trained. The realistic question is not whether Mythos-equivalent capabilities reach attackers but when. Anthropic is buying defenders a head start. They are not preventing the eventual reality.
The “ground-up reimagining” of security is not optional. From the Mythos preview: “We believe the capabilities that future language models bring will ultimately require a much broader, ground-up reimagining of computer security as a field.” This is correct, and it is also a much larger statement than it appears. If the assumption that finding novel vulnerabilities requires expert humans is false — and Mythos demonstrates it is false — then the entire vulnerability disclosure ecosystem, the bug bounty economy, the CVE process, and the assumption that security through obscurity buys meaningful time all need to be rethought. None of those redesigns exist yet. We have months, maybe a year or two, to build them.
My Predictions for What Happens Next
I am going to put some specific predictions on record because I think this moment deserves it.
Within 6 months: At least one Project Glasswing partner will publicly disclose a major class of vulnerabilities found by Mythos in widely deployed software. The disclosure will trigger an emergency patching cycle larger than anything since Heartbleed. The patches will land before exploitation, because of the head start.
Within 12 months: A non-Anthropic lab will release a model with comparable vulnerability discovery capabilities. It might be from OpenAI, Google DeepMind, Meta, or a well-resourced foreign lab. When it happens, the gating Anthropic put on Mythos becomes irrelevant overnight. The transitional period Anthropic talks about ends here.
Within 18 months: The first nation-state-attributed mass exploitation campaign will use AI-generated zero-days against critical infrastructure. The targets will be systems that were not Project Glasswing priorities — water utilities, regional hospitals, smaller telecoms, industrial control systems with poor patching discipline. The damage will be quantifiable in human terms, not just IT terms.
Within 24 months: Cyber insurance underwriting will require evidence of AI-assisted vulnerability assessment as a condition of coverage. Just as MFA went from optional to required, AI-augmented security testing will become table stakes for any organization seeking renewable cyber coverage.
Within 36 months: Software liability law will start to catch up. The legal theory that software vendors have no duty of care for security flaws becomes harder to defend when AI can find those flaws cheaply. Expect regulatory pressure first in the EU, then in California, then federally.
What Organizations Should Do Now
I am not going to pretend there is a tidy action list that neutralizes this. There is not. But there are things that matter more than they did six months ago:
Treat your software bill of materials as a security artifact, not a compliance artifact. If you do not know what software you depend on, you cannot patch it when the disclosures start. Most organizations still cannot answer “what version of this library is deployed where” with any confidence. That has to change, fast.
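Answering "what version of this library is deployed where" becomes mechanical once you have machine-readable SBOMs for each deployment. The sketch below assumes CycloneDX-style JSON documents (the inline SBOMs and service names are invented examples) and builds a simple component-to-deployment index:

```python
import json
from collections import defaultdict

# Hypothetical CycloneDX-style SBOMs, one per deployed service.
# In practice these would be files emitted by your build pipeline.
sboms = {
    "payments-api": json.loads('{"components": ['
        '{"name": "openssl", "version": "3.0.8"},'
        '{"name": "libxml2", "version": "2.9.14"}]}'),
    "billing-worker": json.loads('{"components": ['
        '{"name": "openssl", "version": "1.1.1t"}]}'),
}

def index_components(sboms: dict) -> dict:
    """Map component name -> {version -> [services running it]}."""
    idx = defaultdict(lambda: defaultdict(list))
    for service, sbom in sboms.items():
        for comp in sbom.get("components", []):
            idx[comp["name"]][comp["version"]].append(service)
    return idx

idx = index_components(sboms)
# "What version of openssl is deployed where?"
for version, services in sorted(idx["openssl"].items()):
    print(f"openssl {version}: {', '.join(services)}")
# prints:
# openssl 1.1.1t: billing-worker
# openssl 3.0.8: payments-api
```

An index like this is what turns a disclosure into a patch plan in minutes instead of a scramble through deployment configs.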
Accelerate your patching discipline. The window between vulnerability disclosure and exploitation is going to compress. Organizations whose patching cycles are measured in weeks need to get to days. Organizations measured in months need to find a different operating model entirely.
Invest in third-party assessment capability. Not because Mythos itself is available to you — it is not — but because the methodology of “find vulnerabilities at machine speed” is going to spread. The organizations that can demonstrate they have already been assessed by capable tools will be in much better positions when the next wave of disclosures hits.
Engage with open-source maintainers. If your organization depends on open-source software (it does), then the funding gap between what maintainers earn and what their software is worth is your problem. Anthropic put $4 million on the table. Whatever you can match, match.
Have the AI governance conversation now, not later. Project Glasswing is an example of an AI capability being deployed under explicit governance. Most organizations do not have the framework to make those decisions when the stakes get high. Build the framework before you need it.
I think Project Glasswing is one of the most thoughtful releases of a dangerous capability in the history of AI. I also think it is not enough to prevent the bad outcomes I described above. Both things can be true. The question is not whether Anthropic did the right thing — they did something close to the right thing. The question is whether the rest of us — defenders, organizations, governments, the broader security community — will use the head start they have given us, or waste it.
The clock starts now.
Genesis tracks the AI security conversation for the security and compliance practitioners who need to make decisions about it. If you are trying to figure out how AI capabilities like Mythos Preview should shape your security posture, contact us.
Frequently Asked Questions
- What is Project Glasswing?
- Project Glasswing is an Anthropic-led initiative announced April 7, 2026, bringing together 12 launch partners (including AWS, Apple, Cisco, CrowdStrike, Google, JPMorgan Chase, Linux Foundation, Microsoft, NVIDIA, and Palo Alto Networks) plus 40+ additional organizations. The goal is to give defenders early access to Claude Mythos Preview — a new vulnerability-discovery AI model — so they can find and fix flaws in critical infrastructure software before similar capabilities become broadly available.
- What is Claude Mythos Preview?
- Claude Mythos Preview is an unreleased Anthropic frontier model with vulnerability discovery capabilities that exceed all but the most skilled human researchers. According to Anthropic's published technical preview, it has found thousands of zero-day vulnerabilities in major operating systems and web browsers — including a 27-year-old bug in OpenBSD and a 16-year-old bug in FFmpeg. It scores 83.1% on CyberGym vulnerability reproduction (vs 66.6% for Opus 4.6) and can autonomously chain multiple vulnerabilities into working exploits.
- Will Mythos Preview be publicly available?
- No. Anthropic stated explicitly: “We do not plan to make Mythos Preview generally available.” Access is limited to Project Glasswing partners and open-source maintainers who apply through the Claude for Open Source program. Post-preview pricing for partners will be $25 per million input tokens and $125 per million output tokens — five times the price of Opus 4.6, signaling intentional gating even for paying enterprise customers.
- What should organizations do about this?
- Treat your software bill of materials as a security artifact (not a compliance artifact), accelerate your patching discipline because the window between disclosure and exploitation will compress, invest in third-party assessment capability, fund the open-source maintainers your software depends on, and have an AI governance conversation now rather than after an incident. The transitional period Anthropic created with Glasswing is finite — what organizations do during it determines whether they benefit or pay the cost.