Author: Claude, Deep Tide TechFlow
Deep Tide Intro: According to a May 12 report by *The New York Times*, at a closed-door meeting organized by the Carnegie Endowment for International Peace in Singapore last month, a representative from a Chinese think tank requested that Anthropic grant access to the Claude Mythos model. The request was refused on the spot.
The incident subsequently reached the White House, triggering high alert within the U.S. National Security Council.
Mythos is the most powerful AI model Anthropic has released, unveiled in April this year. Its offensive and defensive capabilities in the cybersecurity field are seen as "digital weapon-grade" technology, and it is currently accessible to only about 40 U.S. and U.K. institutions. The event comes as the Trump administration is drafting an executive order on AI regulation; Trump will also lead a business delegation to China this week to discuss AI-related issues.
A closed-door conversation in Singapore is becoming the latest flashpoint in the US-China AI competition.
According to *The New York Times* on May 12, at a non-public meeting organized by the Carnegie Endowment for International Peace in Singapore last month, a Chinese think tank representative approached Anthropic officials during a break with a request: that the company relax its policy and allow Chinese access to its newest and most powerful AI model, Claude Mythos.
Anthropic refused on the spot.
This was not a formal diplomatic request from the Chinese government. Yet according to multiple media reports, once word of the incident reached Washington, it triggered high alert among officials on the Trump administration's National Security Council (NSC), who read it as another signal of China's continued pressure in the AI field.
Mythos: A 'Digital Weapon' Far Surpassing Its Predecessors, Released Under Tight Restrictions
To understand the weight of this event, one must look at Mythos itself.
Claude Mythos Preview was officially released on April 7, 2026, but not to the public. Anthropic confined access to a cybersecurity defense initiative called "Project Glasswing," granting it to only about 40 institutions. Partners include Amazon, Apple, Microsoft, CrowdStrike, Cisco, Nvidia, JPMorgan Chase, and the Linux Foundation.
According to Anthropic's official blog and a TechCrunch report on April 7, during internal testing, Mythos autonomously discovered thousands of zero-day vulnerabilities (i.e., security flaws previously unknown to developers), covering all major operating systems and mainstream browsers. Some vulnerabilities had existed for up to 27 years. In cybersecurity evaluations like CyberGym, Mythos's performance significantly surpassed the previous-generation model Claude Opus 4.6. Its validation score on SWE-bench reached 93.9%, compared to Opus 4.6's 80.8%.
China Excluded, Labeled an 'Adversarial Nation'
Anthropic lists China as an "adversarial nation." Its services are generally unavailable in mainland China, and the restricted release of Mythos explicitly excludes Chinese institutions.
According to a three-part series in the *South China Morning Post* published from late April to early May, China's reaction to Mythos presents a complex picture. At the official level, the response has been relatively restrained, with no major public statements or strong rebukes. Some in China's AI community even questioned whether Anthropic was playing up security risks as a marketing gimmick to justify limiting the model to U.S. companies.
Reactions within the cybersecurity industry, however, have been starkly different. After the release of Mythos, shares of listed Chinese cybersecurity companies such as Qi An Xin Group, Sangfor Technologies, and 360 Security Technology rose for several consecutive days, as markets anticipated accelerated demand for AI-driven cybersecurity.
IDC China Senior Research Manager Austin Zhao told *SCMP* that a Chinese model at the Mythos level "will definitely appear," though the overall capabilities of domestic cybersecurity models are currently "still far from Mythos." Even so, Chinese models are improving rapidly, a trend he deemed irreversible. IDC predicts China's AI cybersecurity market will grow from 1.58 billion yuan in 2025 to 59.35 billion yuan (approx. $8.7 billion) in 2030, a more than 37-fold increase.
The practical dilemma: the underlying software running in many Chinese banks, energy companies, and government agencies overlaps heavily with the systems in which Mythos discovered vulnerabilities. Yet for now, China has no seat at the table for this round of defense upgrades.
White House Alert and Policy Game: Executive Order Brewing, Trump Visiting China This Week
The alert triggered by the Singapore closed-door meeting adds to a series of larger policy games.
According to a May 11 report by *The Washington Post*, sharp disagreements have emerged within the Trump administration over AI regulation. On one side, national security officials (including at the NSA and the Office of the Director of National Intelligence) are pushing for intelligence agencies to conduct security assessments of AI models before public release. On the other, the Commerce Department wants to keep assessment authority within its own jurisdiction. White House National Economic Council Director Kevin Hassett revealed in a Fox Business interview last week that the administration is studying an executive order that would lay out a clear roadmap for the safety-assessment process for AI models, akin to the FDA's pre-market review of drugs.
Simultaneously, Trump is scheduled to visit China this week, with AI-related issues expected to be on the agenda.
According to an Axios report on May 12, U.S. officials expressed hope to "use the leaders' meeting to open a dialogue, to see if a communication channel for AI affairs should be established." However, Melanie Hart, Senior Director of the Atlantic Council's Global China Hub, cautioned that in AI safety dialogues during the Biden administration, China mainly "collected U.S. information, rather than seriously discussing AI safeguards," and that the Chinese participants were often Foreign Ministry officials lacking AI technical expertise.