Original | Odaily Planet Daily (@OdailyChina)
Author | Azuma (@azuma_eth)
On April 8, Anthropic, the AI company behind Claude, officially announced a new initiative called "Project Glasswing." The project will be advanced jointly with industry giants including Amazon, Apple, Broadcom, Cisco, CrowdStrike, Google, JPMorgan Chase, the Linux Foundation, Microsoft, NVIDIA, and Palo Alto Networks.
Anthropic stated that this is an urgent measure aimed at protecting the world's most critical software. The parties will jointly use the Mythos Preview version to discover and fix potential flaws in the systems the world currently relies on.
Mythos is the next-generation AI model currently under development at Anthropic. It is the first model to surpass the ten-trillion-parameter mark (by contrast, mainstream models on the market today range from hundreds of billions to one trillion parameters), with a staggering training cost of $10 billion. Compared with Opus 4.6, currently the most powerful Claude model, Mythos scores significantly higher on tests of software coding, academic reasoning, and cybersecurity.
Rumors about Mythos began circulating in the market last week, and the widespread concern was: would Mythos, with its specialized cybersecurity capabilities, reshape the current security offense-and-defense landscape? If maliciously used, could it cause larger-scale security incidents? Odaily reported on the matter and discussed the potential impact on attack and defense in the cryptocurrency industry with Yu Xian, founder of the security firm SlowMist (see "Odaily Interview with Yu Xian: Leak of Anthropic's Nuclear-Grade New Model, How Will It Affect Crypto Security Offense and Defense?"). However, Anthropic had not yet publicly acknowledged the existence of Mythos at the time, so available information remained limited.
On April 8, alongside the announcement of "Project Glasswing," Anthropic disclosed more details about Mythos. Judging by the test cases Anthropic has published, the company has not exaggerated Mythos's capabilities. In fact, the model is so powerful that Anthropic does not dare release it publicly, for fear of malicious use by hacker groups. Instead, it plans to first let major corporations try the model out through "Project Glasswing" in order to identify and patch potential vulnerabilities in advance.
Mythos Shows Its Muscle: Unearthing Thousands of "Zero-Day Vulnerabilities" in Weeks
Discussing Mythos's capabilities, Anthropic stated bluntly that the model's arrival signals a grim reality: the coding ability of AI models has reached an extremely high level, and in discovering and exploiting software vulnerabilities they can now surpass all but the most skilled humans.
According to Anthropic's disclosure, within just a few weeks, Anthropic used Mythos to identify thousands of zero-day vulnerabilities (i.e., defects previously unknown even to the software developers themselves). Many of these are high-risk vulnerabilities, affecting all major operating systems and mainstream browsers, and impacting a range of other critical software.
Anthropic provided several representative examples:
- Mythos discovered a 27-year-old vulnerability in OpenBSD, a system long renowned as "extremely secure" and widely used in critical infrastructure such as firewalls. The vulnerability allows an attacker to remotely crash the system;
- In the widely used video processing library FFmpeg, Mythos found a 16-year-old vulnerability. The code containing this issue had been triggered over 5 million times by automated tests but remained undetected;
- Mythos was also able to automatically chain multiple vulnerabilities in the Linux kernel to escalate privileges from a regular user level to full control of the server.
More worryingly, Anthropic stated that for most of these vulnerabilities, Mythos both discovered the flaw and constructed the exploitation path autonomously, with almost no human intervention. This suggests that AI has begun to possess automated offensive and defensive capabilities comparable to those of top-tier hacker teams.
On evaluation benchmarks, Mythos likewise shows a generational leap over Opus 4.6. In cybersecurity vulnerability-reproduction tests, for example, Mythos scored 83.1% versus 66.6% for Opus 4.6; it also leads by wide margins on multiple coding and reasoning tests.
Perhaps precisely because Mythos is so powerful, Anthropic chose not to open the model directly, instead first launching "Project Glasswing" so that the entire internet can "fortify" in advance.
Through the initiative, Anthropic will provide participants early access to the Mythos Preview version for discovering and fixing vulnerabilities and weaknesses in their foundational systems, focusing on tasks such as local vulnerability detection, black-box testing of binary programs, endpoint security hardening, and system penetration testing.
Anthropic has also committed to providing participants with a total of $100 million in model usage credits to support usage throughout the research preview phase. After that, the Mythos Preview version will be available to participants at $25 per million input tokens / $125 per million output tokens (participants can also access the model via the Claude API, Amazon Bedrock, Google Cloud Vertex AI, and Microsoft Foundry). Beyond the usage credits, Anthropic will donate $2.5 million to the Linux Foundation for Alpha-Omega and OpenSSF, and $1.5 million to the Apache Software Foundation, to help open-source software maintainers cope with the evolving security landscape.
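For a sense of what those rates mean in practice, here is a back-of-envelope cost sketch using only the per-token prices stated above; the request sizes in the example are hypothetical illustration values, not figures from the announcement.

```python
# Rough cost estimate at the announced Mythos Preview rates:
# $25 per 1M input tokens, $125 per 1M output tokens.

INPUT_PRICE_PER_M = 25.0    # USD per 1M input tokens (from the announcement)
OUTPUT_PRICE_PER_M = 125.0  # USD per 1M output tokens (from the announcement)

def request_cost(input_tokens: int, output_tokens: int) -> float:
    """Cost in USD of one request at the announced preview rates."""
    return (input_tokens / 1_000_000) * INPUT_PRICE_PER_M \
         + (output_tokens / 1_000_000) * OUTPUT_PRICE_PER_M

# Hypothetical audit request: a ~200k-token slice of a codebase in,
# a ~20k-token vulnerability report out.
print(f"${request_cost(200_000, 20_000):.2f}")  # $7.50
```

At these prices, a single large code-audit request costs single-digit dollars, which gives a rough sense of how far the $100 million credit pool could stretch across participants.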
Anthropic plans to gradually expand participation in "Project Glasswing" and continue advancing it over the coming months, while sharing its experience as broadly as possible so that other organizations can apply the insights to their own security work. Within 90 days, Anthropic will publicly report interim results, including fixed vulnerabilities and security improvements that can be disclosed.
Technology Will Only Keep Advancing, But There's No Need for Excessive Worry
AI is irreversibly changing the world we know, including the field of cybersecurity this article focuses on. As the barrier to discovering and exploiting vulnerabilities drops sharply, people inevitably worry: will AI become a sharp blade in the hands of malicious actors, threatening the existing balance of network security? (PS: For cryptocurrency users, who must entrust real money to wallet systems and on-chain protocols, this concern is especially acute.)
Addressing this issue, Anthropic believes "there are still reasons for optimism." AI models are dangerous precisely because they have the capability to cause harm in the hands of wrongdoers. But at the same time, AI also holds immeasurable value in discovering and fixing critical software defects and developing newer, safer software.
It is predictable that AI capabilities will continue to evolve rapidly in the coming years. But as new attack methods emerge, new defense mechanisms will appear alongside them. Technological escalation is inevitable, yet that does not mean risk will necessarily spiral out of control: as long as defense systems evolve in step, it may even be possible to use AI to build a stronger security moat.









