Editor's Note:
When personal experiences of life and death intertwine with the metaphors of a nation's institutional rise and fall, political narratives cease to be abstract discussions of systems and become profound emotional realizations. This article uses the passing of a father and the birth of a child as a starting point, extending the personal insight that "death is a process" to a reflection on the current state of the American republican system. In the author's view, the current conflict between artificial intelligence companies and the government is not an isolated incident but a glimpse into the long-term loosening of institutions and the imbalance of power structures.
The article focuses on the dispute between Anthropic and the U.S. defense system, discussing contract terms, policy boundaries, and the threat of "supply chain risks." What is at stake is no longer just a game between corporations and the government but a more fundamental question: in the era of frontier AI, who should hold control? Private enterprises, executive power, or some yet-to-mature public mechanism? When national security becomes a justification for expanding power, and policy tools increasingly rely on temporary and coercive arrangements, is the predictability and rule-based nature of the republican system diminishing?
Technological leaps and institutional changes may occur simultaneously, and their convergence often shapes the trajectory of an era. The author questions the government's actions while retaining hope for the rebuilding of future institutions, reminding readers not to equate "democratic control" with "government control." Against the backdrop of rapid AI advancement and ongoing governance reshaping, this debate may only be the beginning. Finding a balance between security, efficiency, and freedom will be a long-term challenge.
Below is the original text:
Over a decade ago, I sat by my father's side as he passed away. Six months earlier, he had been a vibrant man, stronger than I am today, cycling faster and with more endurance than most people in their twenties. Then one day, he underwent heart surgery and was never the same again. It was as if his soul had been drained, the light in his eyes gone. Occasionally, he would briefly regain his spirit, the familiar father momentarily returning to his aging body, but such moments grew increasingly rare. His thoughts became fragmented, his voice softer.
Over those six months, he was in and out of the hospital. On the final day, he was moved to hospice care. He barely spoke that day. In his last hours, he had almost left this world. He lay in bed, his breathing slowing, his voice growing fainter until it was almost inaudible, replaced by an unsettling "death rattle"—a sign that his body could no longer swallow. A body that cannot swallow can no longer eat or drink; in a sense, it has given up the struggle.
My mother and I exchanged glances, both aware of the obvious truth but unwilling to voice it or ask the questions in our hearts. We knew time was running out. At that point, saying or asking anything would not yield useful information; pressing further would only add to the pain.
I had spoken to him privately more than once. I held his hand, trying to say goodbye. My mother returned to the room, and the three of us held hands. Finally, a machine emitted a long beep, signaling that he had crossed an invisible threshold. In the late afternoon of December 26, 2014, my father died.
Eleven years and a few days later, on December 30, 2025, my son was born. I have witnessed death, and I have witnessed birth. What I learned is that neither is a momentary event; each is an unfolding process. Birth is a series of awakenings; death is a series of slumbers. My son will take years to truly be "born," just as my father took six months to truly "depart." Some people even take decades to slowly die.
At some point in my life, though I cannot pinpoint exactly when, the American Republic as we knew it began to decline. Like most natural deaths, its causes were complex and intertwined. No single event, crisis, attack, president, political party, law, idea, individual, corporation, technology, mistake, betrayal, failure, misjudgment, or foreign adversary "alone" caused the beginning of its end, though all played a role. I do not know how far along we are in this process, but I know we are in the "hospice room." I have known this for a while, though, like all mourners, I sometimes deny it. I hesitate to speak of it because doing so often brings pain.
However, without acknowledging that we are sitting by the bedside, I cannot write with the analytical rigor you expect today. To honestly discuss the development of frontier AI and the future we ought to build, we cannot avoid the fact that the Republic we knew is in its final moments. Only here, there is no machine to sound the final beep. We can only watch quietly.
In American history, our Republic has "died" and "been reborn" multiple times. The United States has experienced more than one "founding." Perhaps we are on the threshold of another rebirth, turning the page to a new chapter of national self-reinvention. I hope so. But it is also possible that we no longer possess enough virtue and wisdom to support a new founding, and a more realistic understanding is that we are slowly transitioning into a "post-republican" era of American governance. I do not claim to know the answer.
What I am about to describe is a clash between an AI company and the U.S. government. I do not want to exaggerate it. The kind of "death" I am describing has been ongoing for most of my life. The events I will describe happened last week and may even be resolved, to some extent, within days.
I am not saying this incident "caused" the death of the Republic, nor that it "ushered in a new era." If it has any significance, it is only that it made the ongoing decline more apparent to me personally, harder to deny. I see last week's events as the "death rattle" of the old Republic, a sound emitted by a body that has given up the struggle.
As far as I know, this is what happened: During the Biden administration, the AI company Anthropic reached an agreement with the Department of Defense (now called the "War Department," hereafter DoW) allowing its AI system Claude to be used in classified environments. This agreement was expanded by the Trump administration in July 2025 (full disclosure: I served in the Trump administration at the time but was not involved in this transaction). Other language models could be used in non-classified scenarios, but until recently, classified work—involving intelligence collection, combat operations, etc.—could only use Claude.
The initial agreement negotiated by the Biden team with Anthropic—notably, several key architects of the Biden administration's AI policy joined Anthropic immediately after their terms ended—included two usage restrictions. First, Claude could not be used for mass surveillance of Americans. Second, it could not be used to control lethal autonomous weapons, i.e., weapons capable of operating through the entire identification, tracking, and engagement process without human involvement. The Trump administration had the opportunity to review these terms when expanding the agreement and ultimately accepted them.
Trump officials claimed that their change of heart was not due to an eagerness to conduct mass surveillance or deploy lethal autonomous weapons but rather opposition to the idea of private enterprises imposing restrictions on military technology use. This shift in government attitude led to policy measures intended to harm or even destroy Anthropic—one of the fastest-growing companies in capitalist history and a current leader in the global AI field, which the government claims is crucial to the nation's future. But more on that later.
The Trump administration's argument is not entirely without merit: the idea of private enterprises setting restrictions on military technology use does sound somewhat wrong. However, in reality, thousands of private companies do exactly that. Every technology transaction between the military and private companies exists in the form of contracts (hence the term "defense contractors"), and these contracts typically include operational restrictions (e.g., "System X shall not be used in Country Y," similar to common clauses in Musk's Starlink), technical restrictions (e.g., "a certain fighter jet is certified for use under specific conditions"), and intellectual property restrictions ("the contractor owns and may reuse the relevant technology intellectual property").
In some ways, Anthropic's terms resemble these traditional restrictions. For example, the company is not opposed to lethal autonomous weapons per se but believes that existing frontier AI systems are not yet capable of autonomously deciding human life and death. This is quite similar to "fighter jet certification restrictions."
But the key difference is that the restrictions Anthropic imposed through contract are more like policy restrictions than technical restrictions. For instance, the difference between "this fighter jet is not certified to fly at a certain altitude" and "you shall not fly at a certain altitude." The military perhaps should not have accepted such terms, and private enterprises perhaps should not have set them. But the Biden administration accepted them, and the Trump administration initially accepted them, until later reversing course.
This itself indicates that such terms are not absurd violations. There is no law stating that contracts may contain only technical restrictions and not policy restrictions. The contract is not illegal; it may simply seem unwise in hindsight. Even if you support the stance against mass surveillance and lethal autonomous weapons, you might think that defense contracts are not the best tool for achieving policy goals. Under the ordinary rules of the Republic, the way to achieve new policy is through legislation.
However, "through legislation" increasingly sounds like a joke in the contemporary United States. If you genuinely want to achieve a certain outcome, legislation is no longer the preferred path. Governance is becoming more informal and improvised, executive power is swelling, and policy tools are increasingly mismatched with their goals.
The Trump administration claimed that its change of heart was driven by two concerns: first, that Anthropic might withdraw its services at a critical moment; second, that as a subcontractor, Anthropic's terms could bind other military contractors. Coupled with the government's view of Anthropic as a political opponent (a judgment that may well be correct), the military suddenly realized it was dependent on a company it did not trust.
The rational approach would have been to cancel the contract and publicly explain the reasons, while putting regulatory provisions in place to prevent similar situations in the future. Instead, the War Department insisted that the contract must allow "all lawful uses" and threatened to designate Anthropic a "supply chain risk." That designation is typically reserved for companies controlled by foreign adversaries, such as Huawei. The War Secretary went further, vowing to block all military contractors from having "any commercial relationship" with Anthropic.
This is almost equivalent to declaring "corporate murder" against a company. Even if the bullet may not be fatal, it sends a clear signal: do business on our terms, or your business ends.
This touches on a core principle of the American Republic: private property. If the military told Google, "sell global personalized search data, or be designated a risk," it would be no different in principle from the current actions. So-called private property is merely a resource that can be requisitioned in the name of national security.
This move will increase the capital costs of the entire AI industry, weaken the international credibility of U.S. AI, and potentially even damage the profitability prospects of the AI industry itself.
With each presidential transition, U.S. policymaking becomes more unpredictable, heavy-handed, and arbitrary. It is hard to pinpoint the moment when ordered liberty evaporates.
Even if the War Secretary retracts the threat, the damage is already done. The government has shown: if you refuse to submit, you may be treated as an enemy. This constitutes a deeper erosion of American political culture.
More importantly, this is the first public dispute truly centered on where control over frontier AI should reside. Our public institutions appear disordered, malicious, and lacking in strategic clarity. The failure of political elites is not new; it is a theme that has only intensified over the past two decades: "the same as before, but noticeably worse."
Perhaps the next phase of rebuilding will be closely tied to advanced AI. In the construction of future institutions, please do not equate "democratic control" with "government control." The gap between the two has never been more apparent than today.
Whatever the future holds, we must ensure that mass surveillance and autonomous weapons do not erode freedom. I applaud AI labs for holding the line. In the coming decades, our freedoms may be more fragile than we imagine.
Everyone must choose the future they are willing to fight for or defend. When making that choice, please ignore the noise of that "death rattle" and think independently. You are entering a new era of institutional construction.
But before that, take a moment to mourn the Republic that once was.