February 23, 2026, a Monday that should have been quiet, saw IBM's stock price suffer its most brutal single-day plunge since October 2000. The decline settled at 13.2% at the close, with approximately $40 billion in market value evaporating within hours. The trigger was neither an earnings miss nor a regulatory hammer but a product announcement: AI startup Anthropic declared that its Claude Code tool could modernize the COBOL programs running on IBM systems, and COBOL maintenance is precisely IBM's highly profitable "moat" business.
Three days later, a similar script played out in exactly the opposite way. On February 26, Jack Dorsey's fintech company Block announced layoffs of about 4,000 people, nearly 50% of its workforce, citing AI-driven efficiency gains. Yet the market's reaction was completely different: Block's stock price jumped over 24% in after-hours trading. Dorsey stated frankly in his letter to shareholders: "I believe that within the next year, most companies will reach the same conclusion and make similar structural adjustments."
Two events, the same driving factor (AI); two completely different market reactions: one plunge, one surge. What exactly happened here? The answer perhaps points to a deeper proposition: AI is redefining what constitutes a valuable asset. For executives of listed companies, investors, and traditional enterprise decision-makers, understanding this revaluation logic is no longer a forward-looking strategic consideration but an urgent matter of survival.
I. The Same AI, Different Market Verdicts
To understand the contrast between these two events, one must first see their respective asset structures.
IBM's plunge was, on the surface, a technical threat from the Claude Code tool; in substance, it was the market repricing IBM's core asset model. COBOL, a programming language born in the late 1950s, still supports about 95% of global ATM transactions and core systems in critical areas such as finance, aviation, and government. Anthropic wrote in its blog: "Trillions of lines of COBOL code run in production environments every day, powering critical systems. Despite this, the number of people who understand COBOL is decreasing year by year."
For a long time, modernizing COBOL systems has been a complex and costly engineering undertaking, which made it a lucrative moat business for IBM. But Anthropic claimed: "With the power of AI, teams don't need to spend years; they can modernize COBOL codebases within quarters." The subtext the market heard was: IBM's revenue from labor-intensive system maintenance and services around mainframes is being eroded by AI.
However, it is worth noting that IBM's stock rebounded 2.68% the next day. Wall Street research firms such as Wedbush and Evercore ISI quickly came to its defense, calling the plunge an "unfounded overreaction." Their reasoning cut to the heart of the matter: enterprise clients are unlikely to abandon their mainframe systems simply because a new AI tool can translate legacy code. There is a huge gap between translating code syntax and system modernization involving deep hardware-software integration.
IBM itself issued a response the same day, making a key argument: the challenge of modernization is not a COBOL language problem but an IBM Z platform problem. Translating code hardly captures the actual complexity; the platform's value comes from decades of software and hardware integration, which code translation cannot migrate.
Now look at the Block event. The same large-scale layoffs, likewise driven by AI, yet the market's verdict was a 24% rise. The key lies in Block's changing asset structure. Since 2024, Block has been restructuring its business model and staffing while heavily investing in AI tools to improve operational efficiency, including developing its own tool called Goose.
Block's CFO Amrita Ahuja emphasized when explaining the layoffs: "We are taking bold and decisive action, but we are building from a position of strength." This "position of strength" has data support: Full-year 2025 gross profit reached $10.36 billion, a 17% year-on-year increase. Strong financial performance provided a buffer for the company to push forward with large-scale restructuring at this time.
The market's interpretation was clear: Block is not passively shrinking under AI's impact but actively optimizing its asset structure, exchanging fewer "human assets" for higher output from "technology assets." Laying off 50% of the workforce while raising full-year guidance means the value of output per employee is being amplified by AI.
II. The AI Era: Four Types of Assets Being Repriced
These two cases reveal a trend already underway: AI is becoming the "repricer" of asset value. Different types of assets show completely different value curves under AI's evaluation framework.
The first category is human capital-intensive assets. The value of IBM's COBOL maintenance teams, traditional analysts, programmers, and other "information processors" is being diluted by AI. Anthropic mentioned when introducing Claude Code that the tool can identify "risks that would take human analysts months to discover." This is not to say humans are no longer important, but that the value of work relying on information asymmetry and procedural knowledge is being compressed by technology.
However, it must be cautiously noted that AI replaces "information processing," not "value creation." Futurum Group analyst Mitch Ashley pointed out in a research report that successful COBOL modernization projects require multiple dimensions like business scope definition, technical assessment, data migration planning, behavioral equivalence verification, observability, and organizational change management; code translation is just one part. Those human capabilities that can navigate complex systems, understand business essence, and make strategic judgments remain scarce.
The second category is data assets, which are becoming the high ground of value in the AI era. With the rapid development of generative AI, the value attributes of data are being reshaped. Research published by Tang et al. in PLOS One pointed out that generative AI has changed the way data is acquired, processed, and utilized. The value of data assets depends not only on their intrinsic quality and relevance but also on their application scenarios, transformation capabilities, and market demand within the generative AI framework.
This means the uniqueness, continuity, and governability of data are becoming core value dimensions. A dataset might be extremely valuable in one scenario and useless in another. Enterprises that can provide exclusive, continuous, high-quality data for AI model training are gaining new pricing power.
The third category is algorithm and model assets. The fact that OpenAI and Paradigm collaborated to launch EVMbench, used to evaluate AI's ability to detect, patch, and exploit smart contract vulnerabilities, itself indicates that algorithms are becoming quantifiable assets. Model weights, algorithm frameworks, and training methodologies are becoming identifiable, controllable, and monetizable intangible assets.
The fourth category is traditional tangible assets, which are undergoing differentiation. Those physical assets reliant on "information asymmetry" and "human intermediation" face devaluation pressure, while physical assets possessing "AI-resistant" attributes—such as energy facilities, scarce resources, core infrastructure—have relatively stable value. The reason is simple: AI can analyze and optimize the operation of these assets but cannot replace their physical existence and value-bearing function.
III. From "Asset Revaluation" to "AI Immunity"
Based on the above analysis, enterprises need a systematic framework to judge whether their assets will appreciate or depreciate in the AI era. The RWA Research Institute proposes an "AI Immunity" asset identification framework containing three core characteristics.
The first characteristic is non-codifiability. This refers to value elements that AI finds difficult to fully learn or replicate. The COBOL code itself can be translated by AI, but the transaction processing capability built from the chip level up in the Z-series mainframes that run COBOL systems, their quantum-safe encryption, and their eight-nines reliability are things AI tools cannot replicate. As Futurum Group's research noted, "Code translation cannot capture the actual complexity; platform value comes from decades of software and hardware integration." Similarly, offline scenario control, tacit industry knowledge, and complex relationship networks, all elements that are difficult to "codify," constitute the first line of an asset's immune defense.
The second characteristic is the data moat. Does the enterprise possess exclusive, continuous, and governable data assets? Does it merely use public data, or can it generate data others cannot access? China CITIC Bank has begun exploring using large models to evaluate data asset value, attempting "data asset capitalization." The logic behind this is: In the AI era, data is not only raw material for production but also an asset itself. But not all data has a moat—public web data will soon be "digested" by AI models, and only enterprises with exclusive data sources can obtain a premium under the AI valuation framework.
The third characteristic is AI empowerment elasticity. Can the asset itself be enhanced by AI rather than replaced? This is the key to distinguishing between an IBM-style shock and a Block-style transformation. IBM's core business—maintaining legacy COBOL systems—is the target of AI "replacement"; whereas Block's business model—payments, financial services—can be "empowered" by AI. In fact, IBM itself has developed watsonx Code Assistant for Z, a dedicated tool that allows customers to securely refactor and modernize legacy code directly on the platform while retaining enterprise-grade security. When assets can form synergy with AI rather than antagonism, their value increases.
Conversely, AI-fragile assets also exhibit three characteristics: reliance on "information processing" as core value, susceptibility to replacement by standardized processes, and lack of data generation and accumulation capabilities. Comparing against these three characteristics, enterprises can conduct a "stress test" on their asset portfolio.
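The "stress test" above can be sketched as a simple scoring exercise. The sketch below is purely illustrative: the 0–5 scale, the weighting, and the classification thresholds are hypothetical assumptions of this article, not part of any published methodology.

```python
from dataclasses import dataclass

# Illustrative only: scales and thresholds below are hypothetical,
# not a published "AI Immunity" scoring standard.

@dataclass
class Asset:
    name: str
    non_codifiability: int  # 0-5: value AI cannot learn or replicate
    data_moat: int          # 0-5: exclusive, continuous, governable data
    ai_empowerment: int     # 0-5: enhanced by AI rather than replaced

def immunity_score(a: Asset) -> float:
    """Average the three characteristics into a 0-5 immunity score."""
    return (a.non_codifiability + a.data_moat + a.ai_empowerment) / 3

def classify(a: Asset) -> str:
    """Map a score to one of the article's three rough verdicts."""
    s = immunity_score(a)
    if s >= 3.5:
        return "AI-immune: candidate for increased allocation"
    if s >= 2.0:
        return "mixed: monitor, invest in AI empowerment"
    return "AI-fragile: plan transformation or divestiture"

portfolio = [
    Asset("legacy COBOL maintenance services", 2, 1, 1),
    Asset("payments platform with proprietary data", 3, 4, 5),
]

for asset in portfolio:
    print(f"{asset.name}: {classify(asset)}")
```

The point of the exercise is not the numbers themselves but forcing each business unit through the same three questions, so that "which businesses shrink under AI and which gain amplification" becomes an explicit, comparable judgment rather than an intuition.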
IV. The New Opportunity in RWA: What Assets Are Worth Tokenizing?
Extending the above framework to the RWA (Real World Asset tokenization) field leads to a clear conclusion: RWA is not about "any asset can be put on-chain," but about screening, amid the great tide of AI revaluation, for the hard assets that can weather the AI cycle.
In March 2026, the total value of on-chain RWA surpassed $25 billion, nearly quadrupling from a year earlier. But the Hong Kong Web3.0 Standardization Association stated clearly in the RWA industry white paper it released in August 2025: "The notion that 'everything can be RWA' is a false proposition." Assets that have achieved large-scale implementation need to meet three thresholds: value stability, clarity of legal ownership, and verifiability of off-chain data.
Combining this with the "AI Immunity" framework, we can refine the conclusion further: assets worth tokenizing are, first and foremost, those whose value remains stable, or even appreciates, amid the AI revaluation.
The first category is physical assets with "AI immune" characteristics, including energy assets, infrastructure, and scarce resources. The value of such assets does not rely on information processing but stems from physical existence and actual utility. The white paper notes that new-energy RWA (such as EV charging stations and photovoltaic assets) and GPU computing power assets fall into this category. Among them, GPU computing power assets, backed by the "rigid demand" of the AI industry and credible "digital genes," are becoming ideal anchor assets for RWA.
The second category is programmable data assets. Assets that possess exclusive data sources and can be automatically monetized through smart contracts combine both a "data moat" and "AI empowerment elasticity." The white paper categorizes data along with intellectual property and carbon credits as intangible assets. But caution is needed: not all data can become assets—only data that can be continuously generated, clearly owned, and verified has the foundation for tokenization.
The third category is hybrid assets, combining "non-codifiable" physical control rights with "programmable" digital equity. For example, the property rights of commercial real estate can be tokenized while the actual operation, maintenance, and leasing of the property (the offline scenario control rights) remain in the hands of professional institutions. This "physical + digital" dual-layer structure leverages blockchain's liquidity advantages while retaining the "AI immune" offline value anchor.
Conversely, two types of assets need to be treated cautiously for tokenization in the AI era. One is financial assets highly dependent on human intermediation, whose value is easily compressed by AI; the other is standardized assets without a data moat, which lack bargaining power under the AI valuation framework.
V. Action Guide: From Cognition to Decision
IBM's $40 billion evaporation is a signal of an era—assets reliant on information asymmetry and human labor are being repriced by AI. Block's counter-trend rise is the clarion call of another era—enterprises that can embrace AI and optimize their asset structure are being repriced by the market.
For decision-makers of listed companies and traditional enterprises, this is not just technological anxiety but a fundamental restructuring of the asset value system. CEOs need to answer an unavoidable question: How much is my asset portfolio worth in the eyes of AI?
Based on the analysis in this article, three actionable suggestions can be proposed.
First, immediately initiate an "AI stress test" of assets. Evaluate the core business units of the enterprise one by one against the three characteristics of the "AI Immunity" framework—non-codifiability, data moat, and AI empowerment elasticity. Identify which businesses are most likely to shrink in value under AI impact and which businesses might gain amplification effects from AI.
Second, establish a dynamic asset portfolio management mechanism. In the context of AI revaluation, asset allocation is no longer a "buy and hold" static strategy. Enterprises need to consciously increase the proportion of "AI immune" assets while formulating transformation or divestiture plans for those AI-fragile assets. This is not just the responsibility of the finance department but requires coordination between strategy, technology, and business departments.
Third, re-examine RWA strategy. Before considering asset tokenization, first use the "AI Immunity" framework to screen the underlying assets. The core value of RWA is not "being on-chain" itself, but obtaining better liquidity and pricing efficiency for quality assets through tokenization. If the underlying asset itself is depreciating in the AI era, then tokenization only accelerates the loss of value.
Finally, a special note: according to Document No. 42 jointly issued by eight Chinese ministries, any form of token issuance and tokenized trading is strictly prohibited within mainland China. The RWA tokenization discussed in this article refers only to asset digitization practices under compliant overseas frameworks. When exploring related businesses, enterprises must strictly adhere to the regulatory red line of "strict prohibition domestically, filing required overseas" (境内严禁、境外备案).
When AI starts pricing assets, the only sense of security comes from those things AI cannot price—not code, not data, but humanity's own judgment of value.
(This article is written based on publicly available materials and data. Data sources include authoritative media and research institutions such as Nasdaq, Tencent News (腾讯新闻), Futurum Group, PLOS One, 21 Finance & Economics (21财经), and Industrial and Commercial Times (工商时报). The views in this article do not constitute investment advice.)