Stablecoins hit $300B market cap: Why Tether’s $10B profit is just the beginning

ambcrypto · Published on 2026-01-26 · Last updated on 2026-01-26

Abstract

Stablecoins have become the dominant revenue engine in crypto, with the market cap reaching $300 billion. Tether exemplifies this by generating over $10 billion in profit in 2025. Ethereum serves as the primary settlement layer, with stablecoin supply on it growing to over $160 billion and generating roughly $5 billion in revenue that year. Concurrently, Ondo Finance has emerged as a leading real-world asset (RWA) platform, with its Total Value Locked (TVL) surging to $2.5 billion, driven by tokenized U.S. Treasuries and equities. The growth in RWA TVL is closely linked to the expansion of the stablecoin market, as stablecoins provide the essential liquidity and settlement rails. This trend highlights a shift towards structural, yield-based growth rather than speculative narratives.

While L1 ecosystems continue to chase hype through narrative cycles and speculative throughput claims, crypto’s most reliable profits have accrued elsewhere.

Stablecoins have quietly evolved into the sector’s dominant revenue engine, driven by their scale, ubiquity, and control over on-chain settlement.

As a result, issuers have been able to convert this structural advantage into sustained cash flows, exemplified by Tether [USDT] generating over $10 billion in profit in 2025.

Stablecoin issuers have evolved into large-scale revenue generators, with Ethereum serving as the dominant settlement layer, anchoring that growth.

In 2025 alone, issuers generated roughly $5 billion in revenue tied to Ethereum-based supply. Quarterly revenue expanded from near $1.2 billion early in the year to about $1.4 billion by Q4.

At the same time, stablecoin supply on Ethereum [ETH] grew by nearly $50 billion, surpassing $160 billion.

As reserves expanded, yield-based income scaled predictably. This dynamic reinforces Ethereum’s financial gravity, deepens liquidity, and strengthens its role as core on-chain monetary infrastructure.
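The figures above imply a simple back-of-envelope relationship: issuer revenue is roughly stablecoin supply multiplied by the yield earned on reserves (largely short-term Treasuries). A minimal sketch of that arithmetic, where the supply figure comes from the article but the blended yield is an illustrative assumption, not a reported number:

```python
# Back-of-envelope: annual issuer revenue ~ reserve supply x reserve yield.
# Supply is from the article; the yield is an illustrative assumption.
eth_stablecoin_supply = 160e9   # ~$160B stablecoin supply on Ethereum
assumed_reserve_yield = 0.031   # ~3.1% blended reserve yield (assumption)

implied_annual_revenue = eth_stablecoin_supply * assumed_reserve_yield
print(f"Implied annual revenue: ${implied_annual_revenue / 1e9:.1f}B")
# Lands near the ~$5B in Ethereum-tied issuer revenue cited above.
```

Because revenue is a near-linear function of supply at a given rate, this is why income "scaled predictably" as reserves grew: each incremental $10 billion of supply adds roughly $300 million of annual yield at the assumed rate.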

Ondo emerges as a core liquidity hub for tokenized RWAs

Ondo Finance [ONDO] is rapidly consolidating its position as a leading real-world asset platform, pushing total value locked (TVL) to roughly $2.5 billion by January 2026.

Earlier in 2025, TVL hovered just above $1 billion. Since then, capital has accelerated sharply, driven by tokenized yield products.

Tokenized U.S. Treasuries account for nearly $2 billion, led by OUSG and Ondo US Dollar Yield [USDY]. Meanwhile, tokenized stocks and ETFs exceed $500 million across more than 200 assets.

As Ondo expands across multiple chains, its scale signals growing institutional confidence in on-chain RWAs.

Is RWA TVL growth following stablecoin liquidity cycles?

RWA TVL growth increasingly tracks stablecoin supply expansion, revealing a clear liquidity-driven relationship.

As the stablecoin market cap climbed toward $280–300 billion by late 2025, RWA TVL simultaneously expanded to roughly $16–19 billion.

This side-by-side growth reflects function, not coincidence. Stablecoins act as settlement rails and yield-bearing inputs for tokenized treasuries and equities.

Consequently, platforms like Ondo more than doubled their TVL to over $2.5 billion as stablecoin-backed demand intensified.

Therefore, the trend signals structural conviction, though short-term stalls in stablecoin issuance can temporarily cap RWA momentum.
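As a rough sanity check on the liquidity relationship described above, the cited figures imply that RWA TVL sits at a single-digit fraction of stablecoin market cap. A minimal sketch using midpoints of the article's late-2025 ranges (the midpoints are chosen for illustration only):

```python
# Ratio of RWA TVL to stablecoin market cap, using midpoints of the
# late-2025 ranges cited in the article.
stablecoin_mcap = (280e9 + 300e9) / 2   # ~$290B midpoint
rwa_tvl = (16e9 + 19e9) / 2             # ~$17.5B midpoint

ratio = rwa_tvl / stablecoin_mcap
print(f"RWA TVL is ~{ratio:.1%} of stablecoin market cap")
```

If that fraction holds roughly constant, RWA TVL would be expected to stall whenever stablecoin issuance stalls, which is consistent with the short-term caps on momentum noted above.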


Final Thoughts

  • Stablecoins have become crypto’s most reliable profit engine, translating settlement scale into recurring cash flows while Ethereum reinforces its role as the dominant on-chain monetary layer.
  • Concurrently, RWA growth remains liquidity-driven, with TVL expansion closely following stablecoin supply, positioning platforms like Ondo as beneficiaries of stablecoin-backed demand rather than speculative cycles.

Related Questions

Q: What was Tether's profit in 2025 according to the article?

A: Tether generated over $10 billion in profit in 2025.

Q: What is the total value locked (TVL) in Ondo Finance by January 2026?

A: The total value locked (TVL) in Ondo Finance reached roughly $2.5 billion by January 2026.

Q: How much revenue did stablecoin issuers generate from Ethereum-based supply in 2025?

A: Stablecoin issuers generated roughly $5 billion in revenue tied to Ethereum-based supply in 2025.

Q: What is the relationship between RWA TVL growth and stablecoin supply, as described in the article?

A: RWA TVL growth increasingly tracks stablecoin supply expansion, revealing a clear liquidity-driven relationship where stablecoins act as settlement rails and yield-bearing inputs.

Q: What role does Ethereum play in the stablecoin ecosystem, according to the final thoughts?

A: Ethereum reinforces its role as the dominant on-chain monetary layer for stablecoins.

