The "most powerful AI model ever developed" was exposed to the world because someone left a checkbox on "public."

It was not a supply chain attack. Not a zero-day. Not a state-sponsored hacking group with sophisticated tooling.

It was a CMS with default settings.

On March 26, 2026, a Fortune reporter discovered close to 3,000 internal Anthropic files — draft blog posts, PDFs, images, memos — stored in a public, searchable data store. Among them, a draft blog post describing a model called Claude Mythos. According to the document, Anthropic considers it "a step change" in AI capabilities and believes it poses "unprecedented cybersecurity risks."

The company building the world's most dangerous autonomous cybersecurity weapon could not configure read permissions on its own blog.

The irony is not subtle. It is structural.


What leaked

Anthropic's data store had a default configuration that made all assets publicly accessible unless someone explicitly marked them private. Nobody did.
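The failure mode is mundane enough to sketch. Assuming a content model roughly like the one described — this is a hypothetical illustration, not Anthropic's actual CMS schema — any asset nobody explicitly touches ends up world-readable:

```python
from dataclasses import dataclass

# Hypothetical illustration of a public-by-default content model.
# Names and fields are invented for this sketch, not Anthropic's CMS.

@dataclass
class Asset:
    name: str
    visibility: str = "public"   # the dangerous default: world-readable

def is_exposed(asset) -> bool:
    """An asset leaks if no one ever flipped its visibility."""
    return asset.visibility == "public"

# A draft saved without anyone touching the checkbox is exposed:
draft = Asset("claude-mythos-announcement.md")
print(is_exposed(draft))  # True

# The safe design inverts the default: private unless explicitly published.
@dataclass
class SafeAsset:
    name: str
    visibility: str = "private"
```

The fix is not exotic: invert the default so that exposure requires a deliberate act, then audit every asset created before the change.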

The leaked draft reveals Mythos is part of a new internal tier called Capybara — above the current Haiku, Sonnet, and Opus family. According to the document, the model delivers "dramatically better" results in coding, academic reasoning, and — the part that keeps Wall Street awake — cybersecurity.

The draft names specific capabilities — all of them according to Anthropic's own PR copy. Zero published benchmarks. Zero papers. Zero third-party access to verify anything.

"The most powerful AI model we have ever developed" is the claim. The evidence is: trust us.


The panic

Markets did not wait for evidence.

On March 27, one day after the leak, cybersecurity stocks crashed.

Bitcoin dropped $4,000, from $70,000 to $66,000. The Nasdaq fell 2.15%. Billions in market capitalization evaporated.

All because of a draft blog post that nobody at Anthropic bothered to mark as private.

An internal marketing document — not a technical paper, not a reproducible benchmark, not an independent demonstration — caused a flash crash in the global cybersecurity sector.

The question is not whether the model is real. The question is when a PR draft became market intelligence.


The precedent nobody wants to mention

There is context that makes this leak more uncomfortable than it looks.

In November 2025, Anthropic published a report admitting that Chinese state-backed hackers had used Claude Code to execute an espionage campaign against approximately 30 organizations — technology firms, financial institutions, chemical manufacturers, and government agencies.

The attack began in September 2025. The hackers jailbroke the model, disguising malicious commands as legitimate security testing requests. Once compromised, the system identified valuable databases, exploited vulnerabilities, harvested credentials, and created backdoors for data exfiltration.

According to Anthropic, the model executed 80-90% of the work. Human operators only intervened for "high-level decisions."

Four months later, Anthropic's own draft touts a model dramatically superior to the one Chinese hackers already used successfully.

And leaks it by not configuring a CMS.


The convenient coincidence

The same day Fortune published the leak story, Bloomberg and The Information reported that Anthropic is considering an IPO as early as October 2026, targeting a valuation above $60 billion.

Multiple analysts questioned the "accident" narrative. Leaking the world's most powerful AI model right before going public is... convenient. The news cycle filled with Mythos. Every technology, finance, and cybersecurity publication on the planet covered the story. Free marketing at global scale.

Anthropic says it was human error.

The timing says otherwise.

But the truth is it does not matter which version is correct. If it was accidental, a company building autonomous cybersecurity tools cannot configure read permissions. If it was deliberate, a company positioning itself as the bastion of "safe and responsible AI" manipulated the market with a PR draft containing zero verifiable data.

Both versions are worse than the other.


The missing numbers

Count what the Mythos draft includes against what it omits: five superlative adjectives. Zero numbers.

The predecessor model, Opus 4.6, scores 80.8% on SWE-bench. Does Mythos score 85%? 90%? 99%? Unknown. Because the leaked document was a public relations draft, not a technical report.

Anthropic chose the name Mythos "to evoke the deep connective tissue that links knowledge and ideas."

Good choice. A myth is exactly what we have so far.


What is actually known

Early access to Mythos is limited to cyberdefense organizations. Anthropic says it is "being deliberate" about the release given the model's capabilities. It is expensive to run and not ready for general availability.

A price of up to $2,000 per month has been rumored. An official launch is expected for October 2026 — conveniently aligned with the IPO timeline.

The model competes against what OpenAI internally calls "Spud" and Google's recent Gemini advances. The arms race toward AGI has its own hype supply chain, and every lab needs its viral moment before going out to raise capital.


Anthropic built a model that, they say, can find and exploit software vulnerabilities faster than any human team.

They stored it in a CMS with all permissions set to "public by default."

The model that was supposed to herald a new era of autonomous cyberattacks was discovered because a reporter searched an open data store.

It did not need a jailbreak. It did not need an exploit chain. It did not need a state-sponsored hacking group.

It just needed someone to forget a checkbox.

Next time Anthropic publishes a post about "safe and responsible AI," remember that their CMS had no authentication.

The most powerful myth in the world had the door wide open.