Earlier today (March 31st, 2026), Chaofan Shou on X discovered something that Anthropic probably didn't want the world to see: the entire source code of Claude Code, Anthropic's official AI coding CLI, was sitting in plain sight on the npm registry via a sourcemap file bundled into the published package.
I've maintained a backup of that code on GitHub here, but that's not the fun part… Let's dive deep into what's in it, how the leak happened, and, most importantly, the things we now know that were never meant to be public…
This is, without exaggeration, one of the most comprehensive looks we've ever gotten at how a production AI coding assistant works under the hood, through the actual source code.
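For context on the mechanics of the leak: a v3 sourcemap is just a JSON file, and when its `sourcesContent` array embeds the original files (as the published `.map` apparently did here), recovering the source tree is a few lines of scripting. A minimal sketch in Python; the function name, output layout, and the `webpack://` prefix handling are my assumptions for illustration, not details from the leaked package:

```python
import json
from pathlib import Path

def extract_sources(map_path: str, out_dir: str) -> list[str]:
    """Write out the original files embedded in a v3 sourcemap's sourcesContent."""
    smap = json.loads(Path(map_path).read_text())
    out = Path(out_dir)
    written = []
    # Per the Source Map v3 spec, sources[i] names the file whose text
    # (if embedded at all) is sourcesContent[i].
    for name, content in zip(smap.get("sources", []), smap.get("sourcesContent") or []):
        if content is None:
            continue  # this entry was published without its source text
        # Bundlers often prefix paths like "webpack:///src/x.ts"; keep them relative.
        rel = name.replace("webpack://", "").lstrip("/")
        dest = out / rel
        dest.parent.mkdir(parents=True, exist_ok=True)
        dest.write_text(content)
        written.append(str(dest))
    return written
```

Many published maps carry only `mappings` and no `sourcesContent`, in which case there is nothing to recover; the unusual part here is that the full text shipped.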
A few things stand out:
- The engineering is genuinely impressive. This isn't a weekend project wrapped in a CLI. The multi-agent coordination, the dream system, the three-gate trigger architecture, the compile-time feature elimination: these are deeply considered systems.
- There's a LOT more coming. KAIROS (always-on Claude), ULTRAPLAN (30-minute remote planning), the Buddy companion, coordinator mode, agent swarms, workflow scripts: the codebase is significantly ahead of the public release. Most of these are feature-gated and invisible in external builds.
- The internal culture shows. Animal codenames (Tengu, Fennec, Capybara), playful feature names (Penguin Mode, Dream System), a Tamagotchi pet system with gacha mechanics. Some people at Anthropic are having fun… If there's one takeaway here, it's that security is hard…
Source: https://kuber.studio/blog/AI/…Entire-Source-Code-Got-Leaked… [web-archive]
---
I think there's more to it. What the GPL protected was not the scarcity of code but the freedom of users. The fact that producing code has become cheaper does not make it acceptable to use that code as a vehicle for eroding freedom. If anything, as the friction of reimplementation disappears, so does the friction of stripping copyleft from anything left exposed. The erosion of enforcement capacity is a legal problem. It does not touch the underlying normative judgment.
That judgment is this: those who take from the commons owe something back to the commons. The principle does not change depending on whether a reimplementation takes five years or five days. No court ruling on AI-generated code will alter its social weight.
This is where law and community norms diverge. Law is made slowly, after the fact, reflecting existing power arrangements. The norms that open source communities built over decades did not wait for court approval. People chose the GPL when the law offered them no guarantee of its enforcement, because it expressed the values of the communities they wanted to belong to. Those values do not expire when the law changes.
Source: https://github.com/instructkr/claw-code/…/2026-03-09-is-legal-…-erosion-of-copyleft.md
---
Related: https://github.com/instructkr/claw-code (Better Harness Tools; not merely archiving the leaked Claude Code but also getting real things done. Now being rewritten in Rust…)
Inside `assistant/`, there's an entire mode called KAIROS, i.e. a persistent, always-running Claude assistant that doesn't wait for you to type. It watches, logs, and proactively acts on things it notices. This is gated behind the `PROACTIVE`/`KAIROS` compile-time feature flags and is completely absent from external builds.

Oh hey, that's creepy as shit, even if it's not active (I assume that's what the second paragraph means)
Referenced over 150 times in the source, KAIROS is an unreleased autonomous daemon mode where Claude operates as a persistent, always-on background agent. It receives periodic `<tick>` prompts to decide whether to act proactively, maintains append-only daily log files, and subscribes to GitHub webhooks.

KAIROS includes `autoDream`, a background memory consolidation process that runs as a forked subagent while the user is idle. The dream agent merges observations, removes contradictions, converts vague insights into absolute facts, and gets read-only bash access. A companion feature called ULTRAPLAN offloads complex planning to a remote cloud session running Opus 4.6 with up to 30 minutes of dedicated think time.

I'm not really sure what you are trying to tell me here…? I mean, I read the entire article that you posted and it sort of explained the feature flag thing, so I'm good there, and the rest of this stuff was more or less in the article as well.
I am sorry. The source is different, and I posted it just in case someone needs more meta on it.
Gotcha, thanks for the clarification :)
It's not "not active"; it doesn't exist in external builds.
Why does the writeup sound like an LLM? I don’t think people text like this…
Cause it is. They say they read all the source code, it’s 500k lines and it leaked yesterday. So either they’re lying, or they’re a robot. Or both. Likely both.
I do believe in people… yet who knows these days…
The person behind the MindDump blog states on their main website that they are 19: "A 19-year-old AI developer & Perplexity Business Fellow…"
Hey! I’m an AI dev & Tech Enthusiast from New Delhi, India
I'm studying Computer Science & AI from BITS Pilani and AI & Data Science from GGSIPU and building generative UI for LLMs @ PolyThink. I've built and shipped 53+ projects (38+ AI Based) in the past year, run Projects to see some of my favourites (that I'm allowed to show)
I also write a blog called MindDump with 10-20k readers a month.
Also working on Democratizing private, local SLMs with superior context as your External Brain @ SecondYou, to know more read this tweet. I design agentic LLM pipelines, post-train models, optimise local RAG systems and play around with resource constrained projects like The Backdooms and MiniLMs, but to know more about the languages and tools I know run Skills…
Fun Fact: I started programming when I was 12 on Roblox, became a Roblox millionaire at 14 as a freelance dev, at 16 I got into 3D modelling and won India’s biggest student contest for it, but almost went to culinary school at 17 because I suck at math - glad it worked out though :)
Source: https://kuber.studio/
That bio makes it seem it’s someone from c/linkedinlunatics
Their GitHub repo says they used AI tools to rewrite the entire codebase in Rust (avoiding legal issues I guess?) which does not give me confidence about their use of AI.
Tin foil hat here: something is off at Anthropic.
The system outages are spiking. 2 days ago their follow-up model to Opus, called Mythos, 'leaked', and now the CC source?
And all of this within weeks of a public spat with the Pentagon, which has been found to be using Anthropic products in the most clueless war the USA has ever engaged in.
I’ve run out of tin foil, but it doesn’t add up.
The entire LLM industry is falling apart.
- They're not realizing the growth in compute they promised.
- They're not realizing the results they promised (duh).
- Almost every company that tries to use LLMs is failing to implement them successfully, because they don't work.
- Nobody is willing to pay what it actually costs; all current use is heavily subsidized, so they're burning money fast.
- Investors are finally pulling their heads out of the sand and asking what that strong smell of burning cash is.
- Energy costs are rising, making LLMs even more insanely expensive.
- The only thing they've still got going for them is that most so-called tech journalism is fucking awful and will just parrot LLM press releases without using their brain.
The growth in compute is on par with Moore's law. There's not much wrong there. The problem they are having is that they can't scale a probabilistic system to have deterministic outcomes, therefore making it economically suicidal for any company to invest in this technology.
Your point about the press is true for all journalism. Also not their fault: once captain bonespurs allowed prince bonesaw to get away with sawing a journalist into pieces while he was still alive, journalists learned to think twice.
I don't think so. Investors are just opportunistic as always, so this war is changing their appetite a bit, but they'll come back to this. The hype for LLMs isn't going to stop anytime soon.
It’s a raw technology, like nuclear fission. You need a lot of auxiliary equipment, knowledge and expertise before you can power a metropolis with it. It’ll take a decade before this technology will mature.
This. I have been thinking about why ‘now’, of all times, too.
workers sabotaging their unethical workplace, maybe? hope so
This reads like an ad for claude
Does this mean I can download, compile, and run my own Claude?
You can compile and run your own clean room designed version as well :) Getting the actually leaked code could be a bit complicated (but torrents should exist)
Edit: never mind, just saw the archive.org link for the source in the comment section here. So finding the actual source should be easy as well.
At the risk of sounding stupid, this leak has nothing to do with the model, right? Just the interface to interact with a model?
That's what it sounds like, but a bit more. It's not just an API wrapper to send a query and get a response; it orchestrates the agents and performs extra functions.
If there's one takeaway here, it's that security is hard. But `.npmignore` is harder, apparently :P

Lol
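The fix is, for what it's worth, one field: npm's `files` whitelist in package.json (the positive counterpart to `.npmignore`) keeps build artifacts like `*.map` out of the published tarball. A hedged sketch, assuming a typical bundled-CLI layout; the package name and paths are illustrative, not Claude Code's actual layout:

```json
{
  "name": "some-cli",
  "version": "1.0.0",
  "bin": { "some-cli": "dist/cli.js" },
  "files": [
    "dist/cli.js"
  ]
}
```

With a `files` whitelist, anything not listed (a `dist/cli.js.map` included) never ships; npm always adds package.json, README, and LICENSE regardless. Running `npm pack --dry-run` before publishing prints exactly which files would go into the tarball.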
I hope they get sued for billing users multiple times for using Claude Code's resume function, or for invalidating the token cache when it sees questions about billing.
Related:
- https://redd.it/1s96vcr (Thanks to the leaked source code for Claude Code, I used Codex to find and patch the root cause of the insane token drain in Claude Code and patched it. Usage limits are back to normal for me!…)
- https://github.com/Rangizingo/cc-cache-fix

Lol, somebody is salty.
Stack Overflow is free, by the way.
I barely used stack overflow.
RTFM.
Okay, well, reading the fucking manual is also free. So why are you complaining about how expensive Claude is?
I’m saying people who paid for a service and were misled/cheated/shorted by bugs should get a refund.
This isn’t even the first time.
Archived link to the code itself: https://web.archive.org/web/20260331105530/https://pub-aea8527898604c1bbb12468b1581d95e.r2.dev/src.zip
The engineering is genuinely impressive.
Where? I think I missed it. Looks like slop to me.
Multi-Agent Orchestration - “Coordinator Mode”
i think this is already part of the public release, i.e. when I was writing a skill earlier this week, claude referred to the main agent as coordinator and the skill does spawn subagents in parallel. (when I run the skill, it does say the subagents are in parallel, but they seem to get run sequentially)
this is in web-claude-code, not in terminal claude code. So maybe the web version has some different feature flags than the CLI?
‘context-1m-2025-08-07’ // 1M token context window
this is definitely part of web claude code

buddy mode still seems to be disabled, sadly