In response to the recent source code leak, AI giant Anthropic has launched a formal legal counterattack. According to recent reports, the company has submitted multiple DMCA (Digital Millennium Copyright Act) takedown notices to GitHub, demanding the removal of all illegally hosted Claude Code source code repositories on the platform.
In response, GitHub took a sweeping approach: it not only deleted the reported main repository but also took down more than 8,100 related forks in one pass, making this one of the largest code copyright cleanup operations the AI industry has seen in recent years.
A Twist in the Leak's Cause: Not Human Error, but a Tool Bug
It was previously widely believed that the leak stemmed from an employee's operational error, but the latest investigation report suggests the real culprit may be an underlying bug in a packaging tool used internally at Anthropic.
The bug caused the build system to mistakenly bundle sensitive files and the complete TypeScript source code, both of which should have remained private, into the production package. This technical detail somewhat eases doubts about the professional competence of Anthropic's employees, but it also exposes serious security gaps in the company's automated build workflow.
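The report does not describe the bug's actual mechanics, but path-filtering mistakes are a classic way build tools leak private files. As a purely hypothetical sketch (the file names, exclusion list, and filter logic below are invented for illustration), an exclusion filter that compares whole path strings instead of path prefixes will silently ship everything it was meant to hide:

```javascript
// Hypothetical packaging filter. EXCLUDE names top-level directories and
// files that must never reach the published production package.
const EXCLUDE = ["src", ".env", "internal"];

const manifest = [
  "dist/cli.js",
  "dist/index.js",
  "src/agent/loop.ts",          // raw TypeScript source: should be excluded
  "internal/system-prompts.md", // internal instructions: should be excluded
];

// BUG: exact string comparison never matches nested paths,
// so "src/agent/loop.ts" sails past the exclusion list.
function isExcludedBuggy(path) {
  return EXCLUDE.includes(path);
}

// FIX: compare the top-level path segment instead.
function isExcludedFixed(path) {
  return EXCLUDE.includes(path.split("/")[0]);
}

const shippedBuggy = manifest.filter((p) => !isExcludedBuggy(p));
const shippedFixed = manifest.filter((p) => !isExcludedFixed(p));

console.log(shippedBuggy); // private sources leak into the package
console.log(shippedFixed); // only dist/ output ships
```

Whether Anthropic's tool failed in this exact way is unknown; the point is simply that a one-line filtering bug is enough to put an entire source tree into a public artifact.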
Although GitHub cooperated by taking down the 8,100-plus repositories, completely erasing the data is nearly impossible: over the past 48 hours the source code has been downloaded, cloned, and redistributed by tens of thousands of developers worldwide, spreading to Telegram, cloud storage services, and private Git platforms.
Many developers in the community are now conducting "archaeological" digs through the code. The leaked source reveals not only the implementation logic of Claude Code but also a wealth of internal instructions for fine-tuning the model's behavior.






