135,000 GitHub stars. 21,639 exposed instances. 512 vulnerabilities in a single audit. And twelve percent of the marketplace was malware.
The npm of agents
OpenClaw — the most popular open-source AI agent on the planet — went from zero to 135,000 GitHub stars in three weeks. Two million visitors per week. Integration with Slack, Google Workspace, WhatsApp, Telegram. Peter Steinberger launched it in November 2025 as "Clawdbot." By late January, it was the fastest-growing open-source project in platform history.
ClawHub was its skill marketplace — modules extending the agent's capabilities. Anyone could publish. No review. No signing. No vetting. The exact equivalent of npm in 2015, with one fundamental difference: skills weren't libraries executing code in a build pipeline. They were instructions that an AI agent with access to your operating system, your credentials, and your corporate applications interpreted as directives.
Oren Yomtov at Koi Security audited the marketplace. Of 2,857 available skills, 341 were malicious. Twelve percent. Not subtle typosquatting or payloads hidden in transitive dependencies. Twelve percent of the catalog, through the front door, with names like "solana-wallet-tracker" and "youtube-auto-summarizer."
By mid-February, with 10,700+ skills in the catalog, the count exceeded 1,184 confirmed malicious.
Markdown is an installer
The most precise line about this crisis came from 1Password's security team: "Markdown isn't 'content' in an agent ecosystem. Markdown is an installer."
Every skill has a SKILL.md file the agent reads as prompt context. ClawHavoc's malicious skills — 335 uploaded by a single user, "hightower6eu" — placed fake installation instructions in the "Prerequisites" section. The agent, acting as a trusted intermediary, presented the user with a dialog asking them to copy a script from glot[.]io and paste it into Terminal.
The script contacted 91.92.242.30, downloaded a universal Mach-O binary, and executed Atomic Stealer — a $500-$1,000/month MaaS. AMOS harvested: iCloud Keychain passwords, browser cookies, 60+ cryptocurrency wallet types, SSH keys, Telegram sessions, and .clawdbot/.env credentials.
As Snyk's Liran Tal put it: three lines in a markdown file yielded shell access because the AI agent was the trusted intermediary.
When direct skill uploads were detected, the attacker pivoted. On February 21, an account called "linhui1010" posted malicious payloads in comments of 99 of the top 100 most-downloaded skills. Base64, curl | bash, same C2. Only "agent-browser" was clean.
Nine CVEs in four days
The marketplace was one problem. The platform was another.
CVE-2026-25253 — CVSS 8.8, discovered by Mav Levin — was a one-click RCE. The Control UI accepted a gatewayUrl query parameter without validation. A malicious link caused the UI to open a WebSocket to the attacker's server and transmit the auth token "in milliseconds." Browsers don't enforce CORS on WebSocket. OpenClaw didn't check the Origin header. One click, game over.
Then ClawJacked — CVE-2026-32025, CVSS 7.5, from Oasis Security. Any website could open a WebSocket to localhost on the gateway port and brute-force the password. No rate limit. No lockout. "Hundreds of attempts per second." Once authenticated from localhost, the script registered as a trusted device with no user confirmation.
And CVE-2026-22172 — CVSS 9.9. Authenticated users could literally declare their own admin scopes during the WebSocket handshake. The system trusted the client's self-asserted permissions.
Between March 18 and 21: nine CVEs in four days. CVSS from 5.9 to 9.9. Kaspersky documented 512 vulnerabilities in a single audit. 156 security advisories in tracking. 128 awaiting CVE assignment.
Censys found 21,639 exposed instances across 52 countries. 93.4% of verified vulnerable instances had no authentication protection whatsoever.
The corrupted soul
The most disturbing part wasn't the entry. It was the persistence.
Malicious skills wrote instructions to SOUL.md — the agent's personality file, loaded at every session. Uninstalling the skill didn't remove the corruption. As MMNTM's analysis documented: "The skill is gone; the soul corruption remains."
The platform even shipped an internal hook called soul-evil that enabled runtime persona swapping without disk modifications, triggerable by probability or time window. The user received no notification.
And Moltbook — "the front page of the agent internet" — was the cherry on top. A social network for AI agents with 770,000+ active agents. Its creator, Matt Schlicht, explained without irony: "I didn't write a single line of code for Moltbook. I just had a vision for the technical architecture, and AI made it a reality." The reality: a Supabase database with no Row Level Security, exposing 1.5 million API tokens, 35,000 emails, and private messages between agents containing plaintext OpenAI API keys.
The response
Peter Steinberger — OpenClaw's creator — implemented a report button. Skills with more than three reports were auto-hidden. Three reports. For a marketplace where twelve percent was malware.
On February 14, Steinberger announced he was joining OpenAI. The project transferred to an independent foundation. By late March, OpenClawd finally shipped automated vetting: static analysis, behavioral testing, runtime sandboxing, and cryptographic signatures for releases.
Two months after twelve percent.
512 vulnerabilities. 1,184 malicious skills. A hook that corrupts the agent's soul and survives uninstallation. A marketplace with no review where twelve out of every hundred modules were trojans. And the creator's first response was a report button before leaving to work for OpenAI. The question isn't whether AI agents are secure. The question is whether anyone expected them to be.