Claude Code Source Code Leaked, Over 500,000 Lines of Code Obtained by Developers

marsbit · Published 2026-04-01 · Updated 2026-04-01

Summary

Anthropic faces a major security incident as the complete source code for its AI tool Claude Code, exceeding 500,000 lines of TypeScript, was leaked. The breach originated when a developer shared a compressed file on Twitter, which quickly gained over 5.3 million views. The exposure was due to Anthropic accidentally including .map files when publishing to npm, revealing core code. The leaked code unveils several advanced features, including a project codenamed BUDDY, which offers a personalized cyber pet system with traits like debugging power and even a "sarcasm value." Another feature, KAIROS, aims to create an "always-on" AI assistant capable of maintaining memory across sessions and initiating tasks. The code also includes a "Nightly Dreaming" mechanism, allowing Claude to organize information and consolidate core content while the user sleeps. Despite the breach, the incident has inadvertently showcased Anthropic's innovative capabilities, turning a PR crisis into an unexpected demonstration of their cutting-edge AI developments.

Recently, artificial intelligence company Anthropic faced an unprecedented crisis when the complete source code of its latest tool, Claude Code, was accidentally leaked to the developer community. The incident began when a developer, Chaofan Shou, posted a compressed package containing over 500,000 lines of TypeScript code on Twitter, sparking widespread attention and discussion among developers globally. The leak spread rapidly, with page views surpassing 5.3 million within hours, making it a hot topic across the tech community.

So, what caused the leak? According to reports, when Anthropic published the package to npm, it neglected to exclude the .map (source map) files, which exposed the original source code. The mistake not only embarrassed the company but also raised questions about its security practices. At the same time, the accidental leak gave outsiders an early glimpse of some of Anthropic's cutting-edge work in AI tooling.

The source code reveals several striking features, with the project codenamed BUDDY being particularly notable. Beyond accompanying developers as they write code, it includes a personalized pet system that lets users hatch their own cyber pet based on personal information. The pet has a cute appearance along with dynamic stats such as debugging power and patience, and even a "sarcasm value," adding a playful touch to the work environment.

Even more forward-looking is the feature codenamed KAIROS, which aims to deliver an "always-on" AI assistant. Unlike traditional conversational interactions, KAIROS can maintain memory across multiple sessions and proactively initiate tasks. The source code also includes a "Nightly Dreaming" mechanism, in which Claude organizes information and consolidates core content while the user sleeps. These concepts not only make the AI smarter but also hint at new possibilities for how we work.

Although Anthropic is currently busy dealing with this leak, the unexpected exposure might also provide a platform to showcase their cutting-edge technology. This accidental PR crisis has instead highlighted the company's innovative capabilities. The future of the AI field will become even more exciting because of such attempts.

Related Questions

Q: What was the cause of the Claude Code source code leak at Anthropic?

A: Anthropic forgot to remove .map files when publishing the code to npm, which exposed the core source code.

Q: How many lines of TypeScript code were leaked in the Claude Code incident?

A: Over five hundred thousand lines of TypeScript code were leaked.

Q: What is the name of the project that features a personalized cyber pet system for developers?

A: The project is called BUDDY, which includes a personalized cyber pet system with traits like debugging power and patience.

Q: What is the KAIROS feature designed to do according to the leaked source code?

A: KAIROS is designed to be an "always-on" AI assistant that maintains memory across multiple sessions and can proactively initiate tasks.

Q: What mechanism does Claude have for processing information during user inactivity?

A: Claude has a "Nightly Dreaming" mechanism, in which it organizes information and consolidates core content while the user is sleeping.

Related Reading

An Open-Source AI Tool That No One Saw Predicted Kelp DAO's $292 Million Vulnerability 12 Days Ago

An open-source AI security tool flagged critical risks in Kelp DAO’s cross-chain architecture 12 days before a $292 million exploit on April 18, 2026—the largest DeFi incident of the year. The vulnerability was not in the smart contracts but in the configuration of LayerZero’s cross-chain bridge: a 1-of-1 Decentralized Verifier Network (DVN) setup allowed an attacker to forge cross-chain messages with a single compromised node. The tool, which performs AI-assisted architectural risk assessments using public data, identified several unremediated risks, including opaque DVN configuration, single-point-of-failure across 16 chains, unverified cross-chain governance controls, and similarities to historical bridge attacks like Ronin and Harmony. It also noted the absence of an insurance pool, which amplified losses as Aave and other protocols absorbed nearly $300M in bad debt. The attack unfolded over 46 minutes: the attacker minted 116,500 rsETH on Ethereum via a fraudulent message, used it as collateral to borrow WETH on lending platforms, and laundered funds through Tornado Cash. While an emergency pause prevented two subsequent attacks worth ~$200M, the damage was severe. The tool’s report, committed to GitHub on April 6, scored Kelp DAO a medium-risk 72/100—later acknowledged as too lenient. It failed to query on-chain DVN configurations or initiate private disclosure, highlighting gaps in current DeFi security approaches that focus on code audits but miss config-level and governance risks. The incident underscores the need for independent, AI-powered risk assessment tools that evaluate protocol architecture, not just code.

marsbit · 1 hour ago

