On April 7, 2026, Anthropic dropped a bombshell that perfectly captures the moment we're living in. They announced Claude Mythos Preview: a frontier AI model apparently so capable at autonomous reasoning, coding, and especially finding and exploiting software vulnerabilities that they're not releasing it to the public. Hence "apparently": no one outside the program can verify the claims.

Instead, through a new initiative called Project Glasswing, they're handing exclusive early access to a hand-picked consortium of the world's biggest players: Amazon Web Services, Apple, Google, Microsoft, NVIDIA, Cisco, Broadcom, CrowdStrike, Palo Alto Networks, JPMorgan Chase, the Linux Foundation, and dozens more critical infrastructure organizations.

The stated goal is noble on the surface: use Mythos to scan and harden the world's most foundational software (operating systems, browsers, open-source libraries) before bad actors get their hands on similar capabilities. Anthropic says the model has already uncovered thousands of zero-days, including a 27-year-old bug in OpenBSD and a 16-year-old flaw in FFmpeg. They're committing up to $100 million in API credits and $4 million in donations to open-source security groups.

It’s framed as responsible stewardship in the AI era.

I posted a question on X: "Who will give me my 15 days to safely fix vulnerabilities in my apps which Mythos will expose to everyone?"

My post went viral among developers because it nailed the uncomfortable truth. Big tech gets the spear first. Everyone else gets the pointy end later, if at all.

The mechanics of the new feudal order

It's a beautiful preview of techno-feudalism, the shift Yanis Varoufakis and others have described, in which traditional capitalism gives way to a system where a handful of tech lords control the digital "land" (compute, data, frontier models) and extract rents from everyone else.

In classical feudalism, lords owned the fields; serfs worked them and paid tribute. In techno-feudalism, the lords own the intelligence layer itself. Frontier AI models like Mythos become the new strategic resource. Access isn't democratized through the market or open research, but rationed by invitation to those who already sit atop the economic pyramid.

  • The Lords (Anthropic, OpenAI, Google, etc.): They decide who gets the god-tier model

  • The Vassals (Apple, Microsoft, AWS, banks, cloud giants): They get private previews to fortify their castles and the open source commons they depend on

  • The Serfs (independent developers, small agencies, startups, individual researchers): They wait for whatever sanitized, guardrailed “consumer” version eventually trickles down, probably months or years later, after the big players have already patched their critical paths

Frontier intelligence gets concentrated inside enterprises, while individuals get mid models.

Why this feels like a turning point

We've seen hints of this before: closed AI research, compute moats, API rate limits that favor enterprise contracts. But Mythos makes it explicit. The model isn't just smarter at writing code; apparently, it's better than almost all humans at breaking it. Anthropic's own testing showed engineers with no formal security background waking up to fully working remote-code-execution exploits after letting Mythos run overnight.

That puts us in a situation where every major system runs on software that suddenly becomes trivially exploitable by AI, and the first-mover advantage is existential. The companies in Project Glasswing get to scan their supply chains, their clouds, their browsers, and their kernels before the knowledge becomes widespread. Smaller players? We'll be playing catch-up once the model leaks, gets replicated, or a weaker public version drops.

This isn't paranoia. It's the logical outcome of frontier AI development. Training these models requires billions in compute and data that only the largest labs possess. The safety concerns are real: releasing a zero-day factory to the open internet would be reckless. But the solution chosen (private access for the already powerful) entrenches the very power imbalance that makes regulation and broad access harder in the future.

The broader pattern

Look around: AI coding assistants that enterprises pay a premium for while indie devs get capped versions. Enterprise-only reasoning models. "Responsible scaling" policies that conveniently align with commercial interests. The pattern is consistent: AI progress accelerates for those who can afford the private previews, while the rest of the economy operates on last year's capabilities.

Critics of techno-feudalism argue this isn't inevitable. Open-source efforts (Meta's Llama series, various smaller labs) are pushing back. Some replies to my post hoped for a public submission system where anyone could pay for a Mythos security review. But the trajectory is clear: the gap between what the elite can do with AI and what the rest of us can do is widening, not narrowing.

Anthropic deserves credit for transparency and for prioritizing defense over offense. Patching real vulnerabilities in critical infrastructure is important. But the structure of the rollout reveals the deeper issue: in the AI age, security itself is becoming a luxury good allocated by corporate fiat rather than a public good.

We're not "cooked" yet, as I half-jokingly asked, but we are watching the early chapters of a new economic order being written in real time. The question isn't whether frontier AI will reshape cybersecurity; it already has. The question is whether the benefits will be hoarded by the new digital nobility or eventually extended to the rest of us.

For now, the serfs (you and I) are still waiting for our Mythos. Maybe in five years we'll actually have the honor of paying for it…