On March 31, 2026, the unexpected happened: Anthropic accidentally released the source code of Claude Code to the public via an npm package, exposing over 512,000 lines of TypeScript across 1,900 files. Within hours, the code was mirrored, forked, and analyzed by security researchers worldwide. This article, EP 1 of 3, summarizes what happened, what was exposed, and how Anthropic responded.
Quick Summary — What Was Leaked?
The source code of Claude Code v2.1.88 leaked through a JavaScript sourcemap mistakenly bundled with the npm package, which pointed to a zip archive on Anthropic's own Cloudflare R2 storage. The archive contained 512,000 lines of TypeScript across 1,900 files. It did not include model weights, training data, or customer data.
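The leak mechanism is easy to picture: a sourcemap reference is just a trailing comment in the bundled JavaScript, so anyone who opens the file can read the URL it points to. A minimal sketch of extracting such a reference (the file contents and URL below are invented for illustration, not taken from the real package):

```typescript
// Illustrative only: a tiny stand-in for a bundled CLI file.
// The URL is fabricated; the real sourcemap pointed at Anthropic's R2 storage.
const bundledJs = `#!/usr/bin/env node
console.log("cli");
//# sourceMappingURL=https://example.r2.dev/claude-code-src.zip`;

// A sourcemap pointer is nothing more than a comment at the end of the file,
// so a simple pattern match is enough to read it out:
const match = bundledJs.match(/\/\/# sourceMappingURL=(\S+)/);
const sourcemapUrl = match ? match[1] : null;

// An external URL here means anyone with the package can fetch what it references.
console.log(sourcemapUrl);
```

This is why a stray sourcemap reference in a published package is a leak vector in itself: no exploit is required, only reading the last line of the bundle.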
Timeline of the Claude Code Leak
This incident didn't happen in isolation — just days earlier, Anthropic had experienced the Claude Mythos leak, where internal information about an unreleased AI model was exposed through a misconfigured CMS. When the Claude Code source code leaked shortly after, the security community took particular notice.
| Date | Event |
|---|---|
| March 26-27, 2026 | A configuration error in Anthropic's CMS exposed ~3,000 unpublished assets, including a draft blog post about Claude Mythos (internal name: Capybara) — Fortune discovered and notified Anthropic |
| March 31, 2026 | Claude Code v2.1.88 was published on npm with a JavaScript sourcemap pointing to a zip archive on Anthropic's Cloudflare R2 storage |
| March 31, 2026 (hours later) | Researcher Chaofan Shou discovered the sourcemap and posted about it on X (Twitter) — the code was downloaded from Cloudflare R2 and mirrored to GitHub |
| March 31 - April 1, 2026 | The repository was forked thousands of times. Researchers worldwide began analyzing the code, discovering feature flags, system prompts, and internal architecture |
| April 1-2, 2026 | Anthropic issued an official statement acknowledging the incident, confirming it was human error in the release process rather than a security breach. A fix was deployed, but the code had already been mirrored globally |
What Was Leaked?
The first question everyone asked was: what exactly was leaked? The answer: only the source code of the Claude Code CLI, not the AI model itself, and no customer data was exposed.
| Leaked | Not Leaked |
|---|---|
| Complete TypeScript source code (512,000 lines, 1,900 files) | Model Weights (the AI model itself) |
| System Prompts used within Claude Code | Training Data (data used to train the AI) |
| Feature Flags for unreleased features (persistent assistant, remote control, session review) | Customer Data |
| Internal architecture of Claude Code | API Keys or Credentials |
| "Frustration regex" and "Undercover mode" (fake tool) | Financial or internal corporate data |
What Did the Source Code Reveal?
Researchers who analyzed the code discovered several interesting details about how Claude Code works internally:
- Feature Flags for unreleased features — flags were found for "persistent assistant" (AI that keeps working after you close the terminal), "remote control" (controlling Claude Code from a phone or browser), and "session review" (reviewing and replaying past work sessions)
- "Frustration regex" — a regex system that detects when users show signs of frustration, adjusting responses accordingly
- "Undercover mode" (fake tool) — an unexpected find whose exact purpose remains unclear
- Internal architecture — revealed how Claude Code manages context, invokes tools, and communicates with Anthropic's API
- CLAUDE.md as the core configuration mechanism — confirmed that the CLAUDE.md file is the primary way Claude Code receives instructions and standards from users
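Since CLAUDE.md is a publicly documented mechanism (independent of the leak), a small illustrative example of what such a file typically contains may help; the contents below are invented, not taken from the leaked code:

```markdown
# CLAUDE.md (illustrative example; not from the leaked code)

## Project conventions
- Use TypeScript strict mode; avoid `any`.
- Run `npm test` before every commit.

## Architecture notes
- The CLI entry point lives in `src/cli.ts`.
```

Claude Code reads this file from the project root and treats its contents as standing instructions for every session, which is why the leak's confirmation of its central role mattered to users.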
Unreleased Features (Found via Feature Flags)
- Persistent Assistant — AI that continues working even after you close the terminal
- Remote Control — control Claude Code from your phone or browser
- Session Review — review and replay past work sessions
- Frustration Detection — automatically adjusts behavior based on user emotions
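To make the flag-gating and frustration-detection ideas concrete, here is a hypothetical sketch of how such mechanisms commonly work. None of the identifiers or patterns below are confirmed to appear in the leaked code; this is a generic illustration, not Anthropic's implementation:

```typescript
// Hypothetical feature-flag gate. In a real CLI the enabled set would
// typically come from a remote configuration service, not a literal.
type FlagName = "persistent-assistant" | "remote-control" | "session-review";

const enabledFlags = new Set<FlagName>(["session-review"]);

function isEnabled(flag: FlagName): boolean {
  return enabledFlags.has(flag);
}

// An illustrative "frustration" detector in the spirit of the reported
// regex (the actual pattern from the leak is not reproduced here):
const frustrationPattern = /\b(ugh|wtf|not working|still broken)\b/i;

function looksFrustrated(message: string): boolean {
  return frustrationPattern.test(message);
}
```

Gating unreleased features behind flags like this is standard practice; the notable part of the leak is that the flag names themselves revealed the product roadmap.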
Claude Mythos — The Secret Model Leaked Days Earlier
Just days before the Claude Code source code leaked, Anthropic faced another data exposure incident, this time involving information about an unreleased next-generation AI model.
On March 26-27, 2026, a configuration error in Anthropic's CMS allowed public access to ~3,000 unpublished assets, including a draft blog post revealing details about "Claude Mythos" (internal name: Capybara). Anthropic described the model as a "step change" in capability, larger than Claude Opus 4.6, and stated that it poses "unprecedented cybersecurity risks."
Fortune was the first media outlet to discover and report the issue, after which Anthropic promptly fixed the problem.
| Details | Claude Mythos (from Leak) | Claude Opus 4.6 (Current) |
|---|---|---|
| Internal Name | Capybara | Opus 4.6 |
| Size | Larger than Opus 4.6 (exact figures not disclosed) | Not disclosed |
| Capability | "Step change" — a clear leap forward | Current flagship model |
| Risk | "Unprecedented cybersecurity risks" | Has passed safety evaluation |
| Status | Not yet released (as of April 2026) | Available for use |
How Did Anthropic Respond?
Anthropic issued an official statement shortly after the incident. The key points:
- "Some internal source code had been leaked within a Claude Code release" — acknowledged that the source code leak was real
- "No sensitive customer data or credentials were involved" — confirmed that no customer data or credentials were exposed
- "Release packaging issue caused by human error, not a security breach" — clarified it was a human mistake in the release process, not an attack
- Swift remediation — the issue was fixed quickly, but the code had already been mirrored globally and could not be recalled
Anthropic's response was notably swift and transparent. However, this incident remains an important lesson that software release processes must be rigorously audited — especially when dealing with mission-critical systems.
Impact on the AI Industry
The Claude Code leak had wide-ranging implications for the AI and technology industry:
- Confidence in enterprise AI tool adoption — organizations considering AI tools must now evaluate supply chain security risks more carefully
- Even leading AI companies can make OpSec mistakes — this incident demonstrates that operational security failures can happen to any organization, regardless of how strong their security team is
- The importance of supply chain security — npm, PyPI, and other package registries are risk points that need serious attention, including checking whether packages contain data that shouldn't be there
- A lesson for every organization using AI tools — choosing AI tools should consider the vendor's security track record, not just the tool's capabilities
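The supply-chain point above can be made concrete: before adopting a package, inspect what actually ships in it. A minimal sketch of scanning an unpacked package directory for files that should not be published (the extension list, function name, and paths are all illustrative choices, not a standard):

```typescript
// Hedged sketch: flag files in an unpacked npm package that commonly
// should not ship (sourcemaps, env files, archives, private keys).
import * as fs from "fs";
import * as path from "path";

const SUSPICIOUS_EXTENSIONS = [".map", ".env", ".zip", ".pem"];

function auditPackage(dir: string): string[] {
  const flagged: string[] = [];
  // Node 18.17+ supports recursive directory listing.
  const entries = fs.readdirSync(dir, { recursive: true }) as string[];
  for (const entry of entries) {
    const ext = path.extname(entry);
    // path.extname(".env") is "", so dotfiles need a separate basename check.
    if (SUSPICIOUS_EXTENSIONS.includes(ext) || path.basename(entry) === ".env") {
      flagged.push(entry);
    }
  }
  return flagged;
}
```

A check like this in CI, run against the exact artifact produced by `npm pack`, would have caught a stray sourcemap before publication.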
For Organizations Using AI Tools
This incident is a reminder that organizations should carefully evaluate the AI tools they adopt, weigh each vendor's security track record, and keep an incident response plan ready. The same discipline applies to other mission-critical software: enterprise ERP systems, for example, should employ digitally signed releases and two-factor authentication to mitigate similar risks.
Summary — The Claude Code Leak Incident
| Topic | Summary |
|---|---|
| What happened | Claude Code v2.1.88 source code was leaked through a JavaScript sourcemap in the npm package |
| Root cause | Human error in the release process — sourcemap pointed to a zip archive on Cloudflare R2 |
| What was leaked | 512,000 lines of TypeScript, system prompts, feature flags, internal architecture |
| What was not leaked | Model Weights, Training Data, Customer Data, API Keys/Credentials |
| Anthropic's response | Quickly acknowledged the incident, confirmed it was human error and not a security breach — fixed the issue but the code had already been mirrored worldwide |
The Claude Code leak is an important lesson that even leading AI companies can make operational security mistakes — what makes the difference is a swift and transparent response.
— Saeree ERP Team
Continue Reading — EP 2 and EP 3
- EP 2: Critical Vulnerabilities Discovered After the Source Code Leak
- EP 3: Lessons from the Claude Code Leak for Organizations
References
- Fortune — Anthropic leaks its own AI coding tool's source code
- The Hacker News — Claude Code Source Leaked via npm Packaging Error
- SecurityWeek — Critical Vulnerability in Claude Code Emerges Days After Source Leak
- Axios — Anthropic leaked its own Claude source code
- VentureBeat — Claude Code's source code appears to have leaked
If your organization is looking for an ERP system that prioritizes security with high standards, you can schedule a demo or contact our advisory team for further discussion.
