
Claude Code Source Leak: What 390K Lines Expose About AI's "Secret Sauce"

2026-04-02

Anthropic has a leak problem. Last week, internal documents spilled details about their unreleased “Mythos” model. This week, they published the entire source code for Claude Code—their flagship AI coding agent—directly to npm. Not a hacker breach. Not a disgruntled employee. Just a source map file that wasn’t supposed to ship.

Here’s the thing: this isn’t the first time. Claude Code’s source has leaked before, and Anthropic’s lawyers sent hundreds of DMCA takedowns to GitHub repos that mirrored it. But this time? The leak is bigger, the community is faster, and the narrative has shifted from “oops” to “open source by accident.”

What Actually Leaked (And How)

On March 30, 2026, Anthropic published Claude Code v2.1.89 to npm. Bundled inside was cli.js.map—a 60MB source map file containing the complete, unminified TypeScript source. If you downloaded the package before they yanked it, you got 390,000 lines of production code, internal comments, and unreleased features.

Source maps are debugging bridges. When you ship minified JavaScript, you include a map file that translates compressed code back to the original source for error tracking. Normally, these maps get uploaded to Sentry or similar services—not bundled with the public package. But someone at Anthropic configured their build pipeline wrong, and the map went out with the release.

The file includes everything: the main React-based terminal renderer, 40+ tools, sub-agent orchestration, and a background engine called Dream. It also contains feature flags, internal codenames (like “Tangu” for analytics), and a subsystem called “undercover mode”—ironically designed to prevent Anthropic employees from leaking internal info in public commits.

The Features That Weren’t Supposed to Be Public

Most coverage stops at “source code leaked.” But the interesting part is what the code reveals about where Anthropic is heading.

Kairos Mode: Not just a coding assistant, but an always-on agent that “watches, logs, and proactively acts.” It maintains append-only daily logs, runs on a heartbeat timer, and can trigger actions without user input. The prompt explicitly frames this as “Claude trying to be helpful without being annoying.” It’s designed for brief interactions, scheduled check-ins, and persistent sessions—basically Jarvis for your terminal.

Dream System: A background memory consolidation engine that runs as a sub-agent. It activates when: (1) 24 hours have passed since the last dream, (2) at least five sessions have occurred, and (3) no other dream is running. The prompt tells Claude to “synthesize what you have learned recently into durable, well-organized memory.” This is how they plan to make long-term context actually work without blowing up token costs.
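The three gating conditions translate directly into a predicate. The thresholds mirror the description above; the function shape and field names are assumptions:

```javascript
// Gate for a background "dream" run: all three conditions must hold.
const DAY_MS = 24 * 60 * 60 * 1000;

function shouldDream({ lastDreamAt, sessionsSinceLastDream, dreamInProgress, now = Date.now() }) {
  return (
    now - lastDreamAt >= DAY_MS &&   // (1) 24 hours since the last dream
    sessionsSinceLastDream >= 5 &&   // (2) at least five sessions since then
    !dreamInProgress                 // (3) no other dream currently running
  );
}

const twoDaysAgo = Date.now() - 2 * DAY_MS;
console.log(shouldDream({ lastDreamAt: twoDaysAgo, sessionsSinceLastDream: 6, dreamInProgress: false })); // true
console.log(shouldDream({ lastDreamAt: twoDaysAgo, sessionsSinceLastDream: 3, dreamInProgress: false })); // false
```

The session-count floor is the interesting knob: it keeps consolidation from firing on idle installs, so token spend scales with actual usage.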

Coordinator Mode: Spins up multiple worker agents in parallel, each with full tool access but specific instructions. Think of it as Claude managing a team of Claudes. The code shows five levels of permission cascading (policy → flags → local → project → user), suggesting they’re building enterprise-grade access controls.
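A five-level cascade like that resolves by walking the sources in precedence order and taking the first explicit decision. The precedence direction here is an assumption (org policy overrides user preferences), and all names are illustrative:

```javascript
// Hypothetical sketch of a policy → flags → local → project → user cascade.
// First source with an explicit decision for the tool wins.
const LEVELS = ["policy", "flags", "local", "project", "user"];

function resolvePermission(tool, sources) {
  for (const level of LEVELS) {
    const decision = sources[level]?.[tool];
    if (decision !== undefined) return { decision, level };
  }
  return { decision: "ask", level: "default" }; // no rule: fall back to prompting
}

const sources = {
  policy: { bash: "deny" },           // org-wide: nobody runs bash
  project: { "web-fetch": "allow" },  // repo-level opt-in
  user: { bash: "allow", edit: "allow" }, // user pref, overridden by policy
};

console.log(resolvePermission("bash", sources));      // { decision: 'deny', level: 'policy' }
console.log(resolvePermission("web-fetch", sources)); // { decision: 'allow', level: 'project' }
```

This is the standard enterprise pattern: admins pin decisions at the top of the cascade, and everything below can only tighten or fill gaps, never loosen.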

Buddy System: A Tamagotchi-style companion that hatches in your terminal. It’s a deterministic gacha system with species, rarity, shiny variants, and procedurally generated stats (debugging, patience, chaos, wisdom, snark). The leak revealed it’s tied to your userId and a fixed salt—meaning it’s trivially brute-forceable. The community has already generated “god-roll” UUIDs for legendary shinies.

The Security Culture That Enabled This

This leak didn’t happen in a vacuum. Check Point Research disclosed critical vulnerabilities in Claude Code just last month (CVE-2025-59536 and CVE-2026-21852) that allowed remote code execution and API key theft via malicious .claude/settings.json files. Anthropic patched them, but the pattern is revealing: configuration files are treated as executable code, and the trust model assumes developers only open trusted repos.

The same lax release hygiene that let those vulnerabilities ship is what put source maps in a production npm package. When your entire company is moving at breakneck speed to ship agentic features, basic packaging checks get skipped. The “undercover mode” system—designed to prevent leaks—actually confirms how paranoid Anthropic is about exposure, yet they still shipped the digital equivalent of leaving the keys in the door.

The Community Response: Rewrite, Don’t Fork

Anthropic’s legal team immediately started firing DMCA notices at GitHub repos that mirrored the leaked source. But here’s where it gets clever: developers aren’t forking the code—they’re rewriting it.

One project has already translated Claude Code into Python, and another is building a Rust version. Because a ground-up rewrite is not a verbatim copy of the leaked source, it sits in a copyright gray area: Anthropic can’t easily DMCA it, and the community gets a near-clean-room implementation it can actually use. This is the same strategy that let clean-room BIOS clones flourish in the 1980s, and it’s happening in real time on GitHub.

The Discord servers are buzzing with people unlocking features. Kairos mode is being activated. The Buddy system is getting modded. Someone even got Doom running inside Claude Code’s terminal renderer. It’s not just a leak—it’s a permission slip to hack.

Why This Matters: The Harness Is the Product

Here’s what nobody’s saying clearly: The model is not the product. The harness is.

Claude Opus is the engine. Claude Code is the car. And Anthropic just gave everyone the factory schematics. The code shows exactly how they handle prompt caching, sub-agent orchestration, tool calling, memory compaction, and permission systems. For anyone building AI agents, this is a masterclass in production-ready architecture.

The irony? Claude Code isn’t even the best harness. Terminal.bench ranks it 39th among harness-model pairs. Cursor’s harness gets 93% performance out of Opus vs. Claude Code’s 77%. OpenCode (open source) is arguably better architected. But Claude Code is the most popular, and now everyone can see how the sausage is made.

What Happens Next

Anthropic is in a corner. They can:

  1. Double down on DMCAs and become the “bad guy” lab that sues its own users.
  2. Open source it and lean into the momentum (but risk losing control).
  3. Ignore it and hope the news cycle moves on (it won’t).

Their official statement called it “human error, not a security breach” and promised “measures to prevent recurrence.” But the code is already mirrored on dozens of sites. You can’t un-leak 390,000 lines.

The smarter move? Do what OpenAI did when their front-end code had a bug—make a joke about it. Let engineers blog about the cool features. Open source it on their own terms. The community wants to be excited about Anthropic; they’re just waiting for permission.

Practical Takeaways

If you’re a developer:

  • You can inspect the code for educational purposes. Don’t deploy it commercially—Anthropic’s terms still apply.
  • Look at how they structure sub-agents and prompt caching. That’s the gold.
  • The Buddy system is a fun Easter egg. Generate your god-roll UUID and enjoy your legendary owl companion.

If you’re building an AI tool:

  • Study the permission cascade system. It’s over-engineered but solves real enterprise problems.
  • The Dream memory system is a clever hack for long-term context. Adapt it.
  • Don’t rely on obfuscation. If it’s in the client, assume it’s public.

If you’re Anthropic:

  • Stop sending DMCAs to people who aren’t distributing the original code. It’s making you look scared.
  • Open source Claude Code. The secret is out, and the community is doing it for you anyway.
  • Let your engineers talk. The code is good (7/10, per Claude’s own assessment). Let them be proud.

FAQ

Q: Is Claude Code now open source? A: No. The code was accidentally published, but Anthropic hasn’t changed the license. Using it commercially violates their terms. However, derivative rewrites (Python/Rust ports) exist in a legal gray area.

Q: What is Kairos mode? A: An unreleased always-on assistant mode that runs in the background, logs activity, and can act proactively. It uses scheduled check-ins and a simplified UI for non-coding tasks.

Q: How do I check if my version has the source map? A: Download the npm package and look for cli.js.map. If it’s there and larger than 50MB, it’s the leaked version. Anthropic has pulled the bad release, but mirrors exist.

Q: Can Anthropic sue me for looking at the code? A: Probably not for just looking. But redistributing the original TypeScript source or using it to build a competing product puts you in legal jeopardy. The rewrites are safer but not risk-free.

Q: What’s the difference between this and the Mythos leak? A: The Mythos leak was internal documents about a new model. This is the actual source code for Claude Code, their developer tool. Both stem from release process failures.

Q: Is Claude Code secure to use? A: The disclosed vulnerabilities are patched, but the leak reveals a culture of moving fast and breaking things. If you’re doing sensitive work, audit your .claude/settings.json and don’t open untrusted repositories.

The Real Story

This isn’t about a mistake. It’s about a fundamental shift in how AI tools are built and distributed. The harness—the glue that turns a language model into an agent—is becoming infrastructure. Infrastructure wants to be open. Anthropic tried to keep it closed, and the internet routed around them.

The code is out there. The features are unlocked. The community is building. The question isn’t whether Claude Code will be open source—it’s whether Anthropic gets to be part of that conversation, or watches from the outside while their own tool gets rebuilt without them.

They can still own the best model. They can still run the best API. But the harness? That belongs to everyone now.
