Even the Safety-First AI Lab Ships a Leaky Pipeline
On the morning of March 31, 2026, security researcher Chaofan Shou (@Fried_rice) was poking around a routine update to @anthropic-ai/claude-code on the npm registry. What he found set off one of the more embarrassing 24-hour cycles in recent AI industry history. Version 2.1.88 of Claude Code had shipped with a 57MB source map file, cli.js.map, a debugging artifact that should never have made it into a production package. And it didn’t just point somewhere. It contained the source.
By 4:23 a.m. ET, a post on X with a direct download link had gone viral. Within hours, a mirror repository had accumulated nearly 30,000 stars and over 40,000 forks. DMCA takedown notices started flying, but the cat was already out of the bag.
What actually went wrong
This wasn’t a breach in the traditional sense. No one exploited a vulnerability. No credentials were stolen. This was a misconfigured build.
The failure chain, simplified:
1. Dev builds Claude Code v2.1.88 for npm release
2. Build generates cli.js.map (57MB) for internal debugging
3. Modern source maps embed source via "sourcesContent" field
4. .npmignore / package.json "files" field NOT configured to exclude it
5. The map file, containing 1,900 files and ~500,000 lines of TypeScript, ships to the public registry
6. World downloads it before anyone notices
This wasn’t just a link to a bucket. The cli.js.map file embedded the full, unobfuscated TypeScript source directly. Because of how modern build tools work, that single file was a complete reconstruction of the codebase, from the core engine handling tool-call loops and streaming responses to the permission model. One of the more humanizing details to emerge from the community’s excavation was an extensive regex filter for profanity used to screen user prompts.
Hacker News users had a field day with that last one. But the bigger point stands: one misconfigured build artifact handed the entire world a 57MB engineering manual.
This is the second leak in seven days
What makes this worse is the pattern. Just days before, Fortune reported that Anthropic had inadvertently exposed roughly 3,000 internal files in a publicly accessible data cache, including a draft blog post about an unreleased model codenamed Capybara (also referred to as “Mythos”). The source code leak confirmed and significantly expanded on those details. Researchers found that Capybara appears to be a Claude 4.6 variant, that Anthropic is iterating on v8 of the model, and that internal benchmarks reportedly show a 29–30% false claims rate in v8, a regression from the 16.7% seen in v4. That kind of internal performance data is exactly what competitors don’t need to see.
Key exposure: The leaked code contained fully built but unshipped features: a self-improvement loop where Claude reviews its own session history, a persistent background assistant called Buddy, remote browser control, and an autonomous “always-on” daemon mode internally named KAIROS. The name is Greek for “at the right time.” The irony of that branding, given the circumstances, is hard to miss.
Black Monday: the supply chain angle nobody is talking about enough
Here’s what I want the security community to focus on. March 31, 2026 wasn’t just an Anthropic bad day. It was a supply chain catastrophe in layers.
It started on March 30, when a critical Python supply chain compromise hit LiteLLM, a massively popular library integrated across thousands of AI projects, downloaded millions of times daily. That breach was still fresh when Anthropic’s npm package went sideways the next morning.
Then, running concurrently with the Claude Code exposure, a separate and deliberate attack compromised the axios npm package, one of the most widely used HTTP libraries in the JavaScript ecosystem. Malicious versions 1.14.1 and 0.30.4 contained a Remote Access Trojan. And here’s the kicker: Claude Code itself uses axios as a dependency. Users noticed the overlap almost immediately. Anthropic’s accidental exposure and a targeted malicious attack were hitting the same tool, in the same install window, on the same morning.
Action required: check your lockfiles now. If you installed or updated Claude Code via npm on March 31, 2026 between 00:21 and 03:29 UTC, you may have pulled in the malicious axios build. Search your package-lock.json, yarn.lock, or bun.lockb for versions 1.14.1 or 0.30.4, or the dependency plain-crypto-js. Rotate credentials if found. Anthropic has since yanked v2.1.88, but if your CI/CD cached that layer or a developer has it sitting in their local node_modules, you are still at risk. Run npm ls axios immediately. Full technical detail on the axios attack is covered by The Hacker News and VentureBeat.
LiteLLM breach on March 30. Anthropic source exposure on March 31. Axios RAT active in the same install window. All hitting the same ecosystem within 24 hours. Whether coordinated or coincidental, March 31, 2026 is now the Black Monday of AI supply chain security. The “Anthropic was careless” story is real, but it’s almost a distraction from the bigger picture: the infrastructure developers use to build and run AI tools is an active and expanding attack surface, and right now it is not being treated that way.
The model-layer paradox
Anthropic is one of the most deliberate companies in AI when it comes to safety research. They publish Constitutional AI papers. They write responsible scaling policies. The irony of a company this focused on safety at the model layer fumbling basic release hygiene twice in one week is instructive, and it’s not a gotcha. It’s a pattern every organization building and shipping software should recognize in themselves.
Anthropic confirmed the incident with the following statement: “This was a release packaging issue caused by human error, not a security breach. We’re rolling out measures to prevent this from happening again.”
The lesson: Safety and security are not the same discipline. You can have world-class AI alignment researchers and still ship a leaky npm package. Model safety and pipeline security require different teams, different tooling, and different processes. Both matter. Neither substitutes for the other.
What your team should take away
Audit your build pipelines now. Source maps, debug artifacts, internal configs, anything generated during build that isn’t required at runtime should be explicitly excluded from your release artifacts via .npmignore or the files field in package.json. Don’t assume the defaults are safe. They are not.
Your cloud storage is part of your attack surface. A publicly accessible bucket referenced from a production artifact is a vulnerability, even if you didn’t intend it to be accessible. Treat bucket permissions as a first-class security control, not an afterthought.
Pin your dependencies and verify them. The axios compromise is a textbook supply chain attack. Lock your versions, use a private registry or artifact cache, and verify hashes. This is hygiene, not heroics.
Scan what you publish, not just what you run. Most security scanning happens at build or deploy time. Add a pre-publish step that inspects your actual artifact, what files are bundled, what they reference, whether any of those references point somewhere they shouldn’t.
Run a tabletop on your release process. Walk through the failure mode: what if a debug artifact shipped to production? What would you know, how fast, and what would you do? If you don’t have a clean answer, you need one before a researcher on X has it for you.
Don’t touch the mirrors: legal and IP risk is real. The leaked Claude Code source is now sitting across 40,000+ GitHub forks. That does not make it open source. Using leaked logic, prompts, or architectural patterns in a commercial product is a legal landmine regardless of how “available” the code feels. If your team has already pulled the repo to take a look, talk to your counsel before anyone writes a line of code inspired by what they saw.
The bottom line
Anthropic will survive this. The leaked code doesn’t expose model weights, no customer data was compromised, and the company has the resources to remediate and harden its release pipeline. But the broader lesson lands regardless of whether you’re an AI lab or a mid-market SaaS shop.
Supply chain security is not a niche concern. It lives in your .npmignore. It lives in your S3 bucket policies. It lives in your lockfile. And when it goes wrong, it goes wrong publicly, fast, and completely outside your control.
The next 40,000 forks might be yours.
Mario Lunato is the Field CISO at Knox Systems and writes about FedRAMP, cloud security, and GRC engineering at OneUpSec.tech.