The most safety-conscious frontier AI lab on earth just told the market it cannot economically serve 120 customers on a single model. Then it set out to raise what could be the largest private funding round in history to fix the math, and the White House looked at the numbers and said no. If you are an enterprise leader being pressured to chase the next frontier capability, this is the data point that should land on your desk this week.
The story so far
On April 7, Anthropic announced Claude Mythos, a frontier model so capable at finding and exploiting software vulnerabilities that the company chose not to release it publicly. Instead, it stood up Project Glasswing, a controlled-access program of about 50 vetted partners. The list reads like a Fortune 50 cybersecurity wishlist: Apple, Microsoft, Google, AWS, Nvidia, JPMorgan Chase. Within days of the announcement, Bloomberg reported that unauthorized users had already accessed Mythos through a third-party vendor environment, leveraging publicly available techniques and information from the earlier Mercor breach. They reportedly also had access to other unreleased Anthropic models. Week one, fifty partners, breached.
What changed this week
Then came two stories that, together, reframe the entire conversation. First, The Wall Street Journal reported, with Bloomberg confirmation, that Anthropic proposed expanding Mythos access to roughly 70 additional companies, bringing the total to about 120. The White House told Anthropic, privately, that it opposes the move, for two reasons.
The first reason is the obvious one: security. The system was compromised in week one with 50 partners, and adding 70 more increases the attack surface in ways the administration is not comfortable with. The second reason has not had nearly enough oxygen in the coverage. The administration also told Anthropic, in plain English, that the company does not have enough computing power to serve 120 customers without degrading the U.S. government’s own access to the model.
Read that sentence twice, because it is the entire story.
The compute math
Now follow the money on the same news cycle. Bloomberg reported the same week that Anthropic is in early talks for a funding round that would value the company at over $900 billion, more than double its current $350 billion mark and enough to leapfrog OpenAI's $850 billion valuation and make Anthropic the most valuable AI startup in the world. What is the capital for? Per reports tied to the WSJ coverage, part of the raise is specifically aimed at funding the compute capacity required to scale Mythos.
So here is the math, the way it actually reads. You have the most safety-conscious frontier lab on earth, what the lab itself describes as the most powerful model it has ever built, and a customer list of 50 hand-picked partners, every one of them a Fortune 50 or critical-infrastructure player. And you cannot serve them, plus 70 more, without one of two things happening: either the U.S. government’s access gets degraded, or you raise potentially the largest private funding round in history to buy enough compute to make the math work. The administration looked at that math and said no, and that is not a security-only objection. It is a market signal.
What this means for your enterprise AI roadmap
If you are a CIO, CISO, CHRO, or CFO anywhere near AI strategy right now, this story should land on your desk with one question attached. If Anthropic, with Google's $40 billion commitment and Amazon's $25 billion commitment behind it, cannot economically serve 120 customers on one model, what makes you think you should be in line to chase the next frontier capability layer?
There is a story about compute scarcity that the AI vendor narrative has been quietly papering over for two years. The pitch decks talk about agentic this and frontier that and capability the other, while the compute reality is that even the leaders of the field cannot meet demand at the scale they have already promised, let alone the scale they are selling you for tomorrow. When the seller of the most powerful model on earth says, even at a $900 billion valuation, that it still cannot serve more than a few dozen customers without rationing, the buyer-side translation is direct: you are not behind, you are not missing out, you are being sold AI futures the vendor cannot deliver compute for.
Foundation first, again
The Mythos story is not anti-innovation, and it is not even anti-frontier. The capability is real, the breakthroughs are real, and the cybersecurity implications, both defensive and offensive, will reshape the next decade. The question is the pace of stacking, because every enterprise leader I talk to is being pressured, from above by boards and from below by ambitious teams, to be on the next thing. The Mythos story is a hard data point for pushing back, because even the people building the next thing cannot economically deliver it at the scale they are promising. The market is rationing this capability whether you want to participate or not.
The right move, the move I see actually working when I look at every guest who has come on the main show, is the unsexy one. Build a measurement layer that tells you what your current AI is actually doing — that is the SEE step from our enterprise AI ROI playbook: full visibility before any scaling conversation. Build a governance posture that includes your third-party vendor chain, because that is exactly where Mythos itself was breached, and the same logic that flags unauthorized shadow AI inside your walls applies to vendor environments outside them — Olakai's governance layer exists for that reason. Build a strategy that names what AI is for in your business, not what AI is in the market, because the MEASURE step tells you what to track when the board asks "is any of this paying off."

Prove value in 30 to 60 days with a structured pilot before any scaling commitment, so the unit economics are real numbers and not slide-deck promises. Then, when the next frontier capability becomes economically deliverable at the scale you actually need, you will be ready to stack it on a foundation that holds — and the measurement layer you built will tell you which capabilities are actually worth stacking.
If you are running ahead of that, the question is no longer whether you are taking on too much risk; the question is whether your vendor can even serve you. This week’s news suggests, increasingly, that the answer is no.
Coming next on Enterprise AI Unlocked
I sat down last week with Jason Smith, AI Lead EMEA at Publicis Groupe, and Rob Saltrese, Co-Founder and COO of Lyra Labs, for a full Roundtable on the Mythos breach and what it tells every enterprise about foundation-first AI strategy. The White House news arrived after we hit stop on the recording, and we could not have planned a sharper data point if we had tried. The episode is now live on Enterprise AI Unlocked, and the full conversation goes deeper than this article on vendor-chain risk, board-level AI literacy, and what foundation-first looks like in practice.
In the meantime, the math is on the table. It is not pretty, and it is telling you something important about where enterprise AI actually is, versus where the headlines say it should be.
Want help building the measurement and governance foundation before you stack the next frontier capability? Talk to an Expert about how Olakai measures AI ROI and governs risk across your stack.