By now you've heard. On March 31, 2026, Anthropic screwed up. A routine Claude Code update on npm included a source map file that pointed to a zip containing the product's entire source code. 512,000 lines of TypeScript, 1,906 files. All out in the open.
I'm not going to rehash the details because you've been reading threads about this for two days. What I do want to talk about is what isn't being discussed enough, and what I find considerably more concerning than the leak itself.
Three incidents, one day
The first thing to understand is that March 31 wasn't one incident. It was three. And most outlets have blended them into a single story, which has created quite a bit of confusion.
The Claude Code source code leak. A packaging error caused the source map to end up in the public npm package. A security researcher named Chaofan Shou spotted it within minutes. Within hours, the code had been replicated across hundreds of GitHub repositories, with over 41,500 forks. Anthropic started firing off DMCA takedown notices to try to contain the damage, but by that point containment was already impossible.
The axios supply chain attack. This is completely independent of the Claude Code leak, even though it happened the same day. An attacker compromised the npm account of axios's lead maintainer and published malicious versions (1.14.1 and 0.30.4) that included a remote access trojan. The exposure window was about 3 hours, between 00:21 and 03:15 UTC. Google Threat Intelligence attributed the attack to a group linked to North Korea.
The internal package typosquatting. An npm user registered five packages with names identical to Anthropic's internal packages that appeared in the leaked code. So far they've been empty stubs, but the goal is clear: wait for someone to try compiling the leaked code and catch them with a malicious dependency.
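There's a cheap client-side defense against this kind of typosquatting: pin your internal package scope to a private registry so npm can never fall back to a public lookalike. A minimal sketch, assuming a hypothetical `@yourco` scope and registry URL (written to /tmp here for illustration; in a real project the file lives at the repo root as `.npmrc`):

```shell
# Pin the hypothetical @yourco scope to a private registry via a
# project-level .npmrc. Public packages squatting on @yourco/* names
# can then never be resolved by accident, even if the scope or name
# exists on the public registry.
cat > /tmp/demo.npmrc <<'EOF'
@yourco:registry=https://npm.internal.yourco.example/
EOF

cat /tmp/demo.npmrc
```

The scope-to-registry mapping is standard npm behavior; only the scope name and URL above are made up.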
Three separate incidents. Different actors. Different methods. But because they all landed on the same day, many articles treated them as the same story.
Did the axios attack affect Claude Code?
This is the question I've seen the most over the past few days, and the short answer is no. Claude Code bundles all its dependencies, including axios, into a single JavaScript file at build time. When you install Claude Code via npm, axios isn't downloaded separately. There's no dependency to resolve at install time.
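You can see why at the manifest level: a fully bundled CLI declares no runtime `dependencies`, so `npm install` has nothing extra to resolve from the registry. A toy illustration with a fabricated manifest (not Claude Code's actual one):

```shell
# Hypothetical manifest for a bundled CLI: its deps are vendored into
# cli.js at build time, so there is no "dependencies" field for npm
# to resolve at install time.
cat > /tmp/bundled-pkg.json <<'EOF'
{
  "name": "some-bundled-cli",
  "version": "1.0.0",
  "bin": { "cli": "cli.js" }
}
EOF

# Nothing to fetch at install time, even if a dependency like axios
# is compromised on the registry in the meantime.
grep -q '"dependencies"' /tmp/bundled-pkg.json \
  || echo "no install-time dependencies declared"
```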
Several outlets reported that Claude Code users who installed or updated during that 3-hour window might have been affected. That's technically incorrect for Claude Code specifically, although it is a real risk for any other project that had axios as a dependency and ran an npm install during that period.
And I can confirm it was a real risk. That same day I received emails from Datadog warning that they'd detected the compromise and recommending action if the affected versions had been installed. Axios has over 100 million weekly downloads on npm. Security firm Huntress detected the first compromised systems just 89 seconds after the malicious version was published, through automated CI/CD pipelines.
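If you want to check whether a project pulled one of the malicious releases during that window, grepping the lockfile for the two version strings is a quick, coarse first pass (it flags any package pinned at those versions, not just axios; `npm ls axios` in the project gives a precise answer). A sketch against a fabricated lockfile; the real check points at your project's `package-lock.json`:

```shell
# Fabricated lockfile pinned to one of the malicious releases, just
# to demonstrate the check.
cat > /tmp/package-lock.json <<'EOF'
{
  "packages": {
    "node_modules/axios": { "version": "1.14.1" }
  }
}
EOF

# 1.14.1 and 0.30.4 were the two compromised versions.
if grep -Eq '"version": "(1\.14\.1|0\.30\.4)"' /tmp/package-lock.json; then
  echo "WARNING: compromised axios version in lockfile"
else
  echo "lockfile clean"
fi
```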
So while Claude Code dodged the bullet thanks to its architecture, the axios attack was serious and affected a lot of people.
Cloned repos are the real danger
This is where things get genuinely dangerous.
In the hours following the leak, dozens of repositories appeared with Claude Code's source. Some were direct copies. Others were rewrites in Python or Rust. The most well-known is claw-code, created by a Korean developer named Sigrid Jin, who rewrote the code in Python before dawn to sidestep the DMCAs. The repo hit 50,000 stars in two hours, becoming the fastest-growing repository in GitHub's history.
Other repos went further. Some claimed to have stripped telemetry, unlocked experimental features hidden behind feature flags, or straight-up removed security restrictions. And people were installing them.
The question I keep asking myself is: who's audited that code?
We're talking about tools that have access to bash, that can read and write files on your system, that execute commands autonomously. If someone slips malicious code into one of those rewritten repos, they've got direct access to your machine. And you're not going to catch it because you're not reviewing 500,000 lines of someone else's code before running npm install.
A pattern we've seen before
None of this is new. We've seen this pattern before.
The most well-known case is the xz-utils backdoor in 2024. An attacker spent two years earning the trust of the open source community as a maintainer of a compression library used by virtually every Linux system. Once they had enough permissions, they injected malicious code that allowed root access to any server running that library. It was discovered by pure chance: a Microsoft engineer, Andres Freund, noticed that his SSH logins were taking about half a second longer than usual.
Recently, Veritasium published a video explaining in detail how that attack worked and how it nearly compromised the entire internet. It's well explained for a general audience; you don't need to be technical to understand how serious it was.
The parallel with what's happening now is direct. You've got repositories rewritten overnight with AI, maintained by people you don't know, with code you haven't reviewed, and with the ability to execute commands on your operating system. If someone wants to sneak in a backdoor, the perfect scenario is already set up.
Why I run my agents in containers
After years working with AI agents for development, there's one thing I'm clear on: I never run an agent directly on my host machine.
All my agents run inside Docker containers, in my case through DDEV. They only have access to the volumes I explicitly mount. They can't touch the rest of my filesystem, they don't have access to my SSH keys, they can't read my host environment variables, they can't install anything outside their sandbox.
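My setup is DDEV-specific, but the core idea translates to plain Docker. A minimal sketch; the image name, mount path, and environment variable are placeholders you'd adapt to your own setup:

```shell
# Run the agent in a throwaway container that can only see one
# project directory. Everything here is illustrative, not a
# hardened reference configuration.
docker run --rm -it \
  --cap-drop ALL \
  -v "$PWD/my-project:/workspace" \
  -w /workspace \
  -e ANTHROPIC_API_KEY \
  node:22-bookworm \
  bash
# Inside the container: install and run the agent there. It sees no
# SSH keys, no host env vars, nothing outside /workspace.
```

The point isn't the exact flags; it's that the blast radius of a compromised agent shrinks from your whole machine to one mounted directory.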
Is it more work to set up? Yes. Does it add a layer of complexity? Also yes. But the alternative is giving a tool that runs bash autonomously full access to your system. And after what we've seen this week, I think it's pretty clear why that's a bad idea.
My recommendation is simple: if you use Claude Code, OpenCode, Codex, or any other coding agent, run it inside a container. Docker, DDEV, Podman, whatever you prefer. It doesn't have to be complicated. The isolation layer a container gives you can be the difference between a scare and a disaster.
And of course, don't install random repos from people you don't know. Especially if those repos can execute commands on your system autonomously.
Final thoughts
What gives me a mix of concern and irony is that Anthropic, the company that markets itself as the safety-first AI lab, couldn't protect its own source code. And this is their second data leak in less than a week, because just days ago internal documents about an unreleased model were also made public.
But beyond pointing fingers at Anthropic, what really concerns me is the trend. We're going to see more and more tools with deep access to our systems, more repos promising "improved" or "unlocked" versions of commercial products, and more supply chain attacks like the axios one.
From experience I can tell you that security isn't something you can bolt on after the fact. You need to design your work environment assuming that at some point something will fail. Containers aren't the solution to every problem, but they're a barrier that's cheap to set up and that'll save you more than one headache.
PS: If you're interested in setting up AI agents in DDEV containers, I'm putting together a detailed technical article about my configuration. Follow me so you don't miss it.