Anthropic built Claude Mythos Preview, its most capable model to date, and then decided not to release it publicly. Instead, it launched Project Glasswing: $100 million in compute credits and $4 million in direct grants, distributed to 50 organisations tasked with securing global critical infrastructure. The decision tells you something important about where AI development is actually headed in 2026.
Every few months, the AI industry produces a benchmark number that generates a news cycle. Model X scores higher than Model Y on a reasoning test. Model Z can now code better than the average engineer. These stories matter, but they are surface-level. The more interesting story is what organisations do with capability once they have it.
Project Glasswing is that more interesting story.
If you want context on how Anthropic's model releases have been evolving for marketers, see my earlier post on Claude 4 and what changed for practical workflows. For the broader context on how enterprise AI adoption is progressing, Anthropic's $19B ARR trajectory gives useful background.
## What Is Claude Mythos Preview?
Claude Mythos Preview is Anthropic's most capable model as of April 2026. On the benchmarks that matter most for real-world agentic work, it substantially outperforms Claude Opus 4.6. On cybersecurity capability benchmarks specifically, the performance gap is significant: where Opus 4.6 produced working JavaScript shell exploits in 2 out of hundreds of attempts, Mythos succeeded 181 times on the same benchmark according to InfoQ's reporting on the launch.
That is the number that explains why Anthropic chose not to release it publicly.
Mythos Preview can identify and exploit zero-day vulnerabilities across major operating systems and browsers. It can reason through complex multi-step attack sequences. It is, on the available evidence, the most capable AI system for offensive cybersecurity work that exists outside classified government programmes.
## What Is Project Glasswing?
Project Glasswing is Anthropic's answer to the question: what do you do with a model that is too capable to release but too valuable not to use?
The programme restricts access to a consortium of approximately 50 organisations selected for their role in global critical infrastructure. The current launch partners include AWS, Apple, Google, Microsoft, CrowdStrike, JPMorganChase, Nvidia, Cisco, and Broadcom. Anthropic estimates this set of organisations is responsible for a significant portion of the world's core software infrastructure.
The structure of the programme includes $100 million in usage credits distributed across launch partners, and $4 million in direct grants to open-source security organisations. Partners are expected to use Mythos Preview specifically for defensive security work: finding vulnerabilities before attackers do, hardening systems, stress-testing infrastructure.
The model is not available for general enterprise purchase or through the standard API. Access is by invitation and requires agreeing to usage restrictions that limit application to defensive security contexts.
## Why Does This Matter Beyond the Cybersecurity Industry?
Project Glasswing is interesting to a broader audience for three reasons that go beyond the technical specifics.
First, it is a concrete example of capability governance at the frontier. The standard AI deployment playbook is: build the model, run safety evals, release it with appropriate terms of service and guardrails, monitor usage. Project Glasswing represents a different playbook: build the model, determine that the risk profile of general release exceeds acceptable thresholds, and design a restricted deployment programme instead. This is harder to execute, more expensive, and commercially less attractive in the short term. The fact that Anthropic did it anyway is a meaningful signal.
Second, it reflects the direction of enterprise AI positioning more broadly. The organisations that will have access to the most capable AI systems are not necessarily those that pay the most. They are those that demonstrate the governance capacity to handle it responsibly. That is a significant shift from how software licensing has historically worked.
Third, it raises a practical question for every organisation thinking about AI strategy: how do you build the governance infrastructure that eventually earns you access to frontier capability? This is not a hypothetical for 2030. Glasswing is a live programme in 2026.
- $100 million in Claude Mythos Preview usage credits across 50 launch partners.
- $4 million in direct grants to open-source security organisations.
- Access restricted to defensive security applications only.
- Usage monitoring with Anthropic oversight of deployment contexts.
- Launch partners include AWS, Apple, Google, Microsoft, CrowdStrike, JPMorganChase, Nvidia, Cisco, and Broadcom.
## What Does This Tell Us About Anthropic's Strategy?
Anthropic's stated mission is the responsible development of AI for long-term benefit. Project Glasswing is that mission operationalised in a way that has direct commercial costs. Anthropic is not generating standard API revenue from Mythos Preview. It is distributing significant compute credits. It is running a programme that requires substantial oversight and management.
The business logic, if there is one, is that establishing trust as the company that makes hard deployment decisions now positions Anthropic better for the long-term enterprise market than maximising short-term revenue from capability releases. Anthropic's $30 billion ARR run rate in April 2026 suggests that positioning is working. The organisations most worried about AI risk are also some of the most willing to pay for the AI provider they trust most.
For marketing and growth professionals following the AI industry, the takeaway is this: the companies that will shape how AI capability is accessed and deployed over the next five years are making decisions right now that look commercially irrational in the short term and strategically essential in the long term.
At Mindex Studio, when we work with organisations on AI strategy and implementation, the governance question comes up in almost every conversation. The organisations that treat AI governance as an operational constraint to manage are building slower than those that treat it as a capability to develop. Project Glasswing illustrates exactly why.
## What Should Organisations Take Away from This?
Three practical implications for leaders thinking about AI strategy:
1. Governance is becoming a capability, not just a constraint. The organisations in the Glasswing consortium are there because they have demonstrated governance maturity. That position took years to build. Organisations that invest in AI governance infrastructure now are building access to future capability, not just managing current risk.
2. The frontier capability gap will be gated by trust, not just budget. If the Glasswing model extends, the most capable AI systems available for enterprise use will not be on a standard pricing page. They will be available to organisations that can demonstrate responsible deployment. This is a different kind of competitive advantage than raw compute or engineering talent.
3. The "release everything, iterate based on feedback" model has limits at the frontier. For most AI applications, the standard deployment playbook is fine. For capability that can genuinely compromise critical infrastructure, it is not. Knowing where that line is, and having the infrastructure to govern it, is part of what it means to operate as an AI-native organisation in 2026.
| Aspect | Standard model release | Project Glasswing approach |
|---|---|---|
| Access | API, usage-based pricing | Invite-only, 50 organisations |
| Revenue model | Per-token or enterprise licence | $100M in credits distributed |
| Usage monitoring | Terms of service, automated | Active Anthropic oversight |
| Deployment scope | General purpose | Defensive security only |
## Frequently Asked Questions
### Can any organisation apply to join Project Glasswing?
As of April 2026, access is by Anthropic invitation only. The programme is structured around organisations with significant roles in critical infrastructure rather than general enterprise applicants. There is no public application process. Anthropic has indicated it may expand the programme over time, but the pace and criteria for expansion have not been publicly disclosed.
### Is Claude Mythos Preview more capable than GPT-5 or Gemini Ultra?
On the specific offensive cybersecurity benchmarks Anthropic has published data for, Mythos Preview performs substantially above what has been publicly reported for comparable models. Direct head-to-head comparisons are difficult because not all providers publish results on the same benchmarks. What is clear from Anthropic's own numbers is that Mythos represents a meaningful capability step-change from Opus 4.6 on tasks requiring complex multi-step reasoning in high-stakes contexts.
### What is the difference between Claude Mythos Preview and Claude Opus 4.6?
The clearest published difference is on cybersecurity capability benchmarks, where Mythos succeeded 181 times on exploit-generation tests on which Opus 4.6 succeeded twice. Beyond that, Anthropic has described Mythos as generally more capable at complex multi-step reasoning and agentic tasks. Full benchmark comparisons have not been published because of the sensitivity of the capability data involved.
### How does Project Glasswing relate to AI safety research?
Project Glasswing is an operational programme, not a research programme. It is about deploying existing capability responsibly rather than researching new safety methods. The connection to AI safety is that it embodies a deployment governance model that Anthropic views as consistent with responsible development: making capability available for clearly beneficial applications while restricting access for contexts where misuse risk is too high to manage through standard guardrails.
### Should enterprise organisations be building their own AI governance frameworks in response to this?
Yes, and not only in response to Glasswing specifically. The broader trend in enterprise AI is toward capability access being tied to demonstrated governance maturity. Organisations that have documented AI usage policies, human oversight processes, and clear frameworks for evaluating AI deployment risks are better positioned to access frontier capability as it becomes available, and to use it effectively when they do.