Millions Of RLUSD Are Gone Forever After This Major Ripple Burn

bitcoinist · Published on 2026-03-14 · Last updated on 2026-03-14

Abstract

Ripple's stablecoin RLUSD has undergone significant supply reductions through a series of major burn transactions. A total of 25 million tokens were recently burned in a single transaction, following earlier burns of 8 million and 3 million tokens, two separate 15 million token burns, and a 10 million token removal across the Ethereum and XRP Ledger blockchains. These burns permanently reduce circulation by sending tokens to inaccessible addresses. However, this activity is part of RLUSD's reserve-backed model, in which tokens are burned upon redemption so that the circulating supply never exceeds the dollar reserves backing it. Despite the burns, larger minting events—including recent issuances of 29 million, 14.9 million, 6 million, and 3 million RLUSD—have supported the stablecoin's growth. RLUSD's market cap now exceeds $1.56 billion, reflecting its expanding adoption since launch.

Ripple’s dollar-pegged stablecoin RLUSD is seeing a period of supply reductions, with millions of tokens permanently removed from circulation in a series of burn transactions tied to Ripple’s treasury activity. Blockchain trackers monitoring RLUSD activity show that multiple large burns have taken place recently, eliminating tens of millions of tokens from supply. The most recent burn alone accounted for 25 million tokens in one move, but that figure only tells part of the story.

Latest Burn Eliminates 25 Million RLUSD

The most recent transaction flagged by the Ripple Stablecoin Tracker on X saw 25 million RLUSD burned at the RLUSD treasury, the headline figure in what has been a multi-step reduction of the stablecoin’s circulating supply in recent days. Stablecoin burns permanently remove tokens from circulation by sending them to an inaccessible address, making them impossible to recover or spend again. In the case of RLUSD, the transaction effectively wiped out 25 million tokens from the total supply. That alone would have been notable, but multiple additional burns preceded it.
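The mechanics described above can be sketched in a few lines of code. The model below is purely illustrative and is not Ripple's or Ethereum's actual implementation; the burn address is the conventional example address often used on Ethereum, and the `TokenLedger` class is a hypothetical simplification. The point it demonstrates is that once tokens sit at an address with no known private key, they drop out of the circulating supply permanently.

```python
# Illustrative sketch only: a token "burn" as a transfer to an address
# with no known private key. Not Ripple's actual implementation.

BURN_ADDRESS = "0x000000000000000000000000000000000000dEaD"  # conventional example

class TokenLedger:
    """A hypothetical, minimal token ledger for demonstration."""

    def __init__(self):
        self.balances = {}

    def mint(self, account, amount):
        self.balances[account] = self.balances.get(account, 0) + amount

    def transfer(self, src, dst, amount):
        if self.balances.get(src, 0) < amount:
            raise ValueError("insufficient balance")
        self.balances[src] -= amount
        self.balances[dst] = self.balances.get(dst, 0) + amount

    def circulating_supply(self):
        # Tokens parked at the burn address no longer count as circulating.
        return sum(bal for acct, bal in self.balances.items()
                   if acct != BURN_ADDRESS)

ledger = TokenLedger()
ledger.mint("treasury", 100_000_000)
ledger.transfer("treasury", BURN_ADDRESS, 25_000_000)  # a 25M burn
print(ledger.circulating_supply())  # 75000000
```

Because no private key controls the burn address, no future transaction can move those 25 million tokens again, which is what makes a burn irreversible.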

Before the latest 25 million token burn, Ripple had already destroyed several million RLUSD in separate transactions. These burns were carried out on both the Ethereum blockchain and the XRP Ledger, which are the two blockchains that RLUSD runs on.

Ripple Stablecoin Tracker on X recorded a transaction in which 8 million RLUSD were permanently removed from circulation. That burn did not occur in isolation. It followed another earlier transaction that destroyed 3 million RLUSD, continuing the pattern of supply reductions tied to Ripple’s treasury activity.

Looking further back, the sequence becomes even more notable. Prior to those two burns, the tracker had already flagged a 15 million RLUSD burn, followed by another 15 million RLUSD removal on the Ethereum blockchain. Before that, a separate transaction had eliminated 10 million RLUSD from circulation on the XRP Ledger.

Why These Burns Keep Happening

The volume of burns in recent days is not a red flag but a feature. RLUSD operates under a reserve-backed model in which every token in circulation corresponds to a dollar held in reserve. When holders redeem their RLUSD, Ripple burns the returned tokens, guaranteeing that the circulating supply never exceeds the reserves backing it.
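That mint-on-issuance, burn-on-redemption cycle can be expressed as a simple invariant: circulating supply must never exceed dollar reserves. The sketch below is a hedged toy model under that assumption; the `ReserveBackedTreasury` class and its method names are illustrative, not Ripple's actual system.

```python
# Toy model of a reserve-backed stablecoin treasury (illustrative only).
# Invariant: circulating supply never exceeds the dollar reserve.

class ReserveBackedTreasury:
    def __init__(self):
        self.reserve_usd = 0.0
        self.supply = 0.0

    def issue(self, usd_in):
        # Dollars deposited into the reserve -> equal tokens minted.
        self.reserve_usd += usd_in
        self.supply += usd_in
        self._check_invariant()

    def redeem(self, tokens):
        # Holder returns tokens -> dollars paid out, tokens burned.
        if tokens > self.supply:
            raise ValueError("cannot redeem more than circulating supply")
        self.reserve_usd -= tokens
        self.supply -= tokens  # the burn
        self._check_invariant()

    def _check_invariant(self):
        assert self.supply <= self.reserve_usd + 1e-9, "supply exceeds reserves"

t = ReserveBackedTreasury()
t.issue(29_000_000)
t.issue(14_900_000)
t.redeem(25_000_000)   # a 25M redemption triggers a 25M burn
print(t.supply)  # 18900000.0
```

In this model a burn is simply the supply-side half of a redemption: the dollars leave the reserve and the matching tokens are destroyed, so the one-to-one backing is preserved on both sides.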

Burns of this scale would only become a concern if they consistently outweighed the number of tokens being created. That does not appear to be the case with RLUSD. Updates from the Ripple Stablecoin Tracker account show that the recent burns have been accompanied by even larger minting activity. In the past few days alone, the RLUSD treasury minted 3 million RLUSD, 6 million RLUSD, 29 million RLUSD, and 14.9 million RLUSD, all of which entered circulation on the Ethereum network.

RLUSD itself has continued growing since its launch and has steadily climbed in size, with the stablecoin now holding a market capitalization of more than $1.56 billion.

Price recovers again | Source: XRPUSDT on Tradingview.com

Related Questions

Q: What is the total amount of RLUSD tokens burned in the most recent transaction mentioned in the article?

A: The most recent transaction burned 25 million RLUSD tokens.

Q: On which two blockchains does the RLUSD stablecoin operate?

A: RLUSD operates on both the Ethereum blockchain and the XRP Ledger.

Q: According to the article, why are these large-scale burns of RLUSD not a cause for concern?

A: The burns are not a concern because they are a feature of the reserve-backed model, ensuring the circulating supply is always backed by dollars in reserve, and recent minting activity has been even larger than the burns.

Q: What is the current market capitalization of the RLUSD stablecoin as stated in the article?

A: The RLUSD stablecoin currently has a market capitalization of more than $1.56 billion.

Q: What mechanism is used to permanently remove RLUSD tokens from circulation?

A: Tokens are permanently removed from circulation by sending them to an inaccessible address in a process called burning, making them impossible to recover or spend again.

Related Reads

OpenAI Post-Training Engineer Weng Jiayi Proposes a New Paradigm Hypothesis for Agentic AI

OpenAI engineer Weng Jiayi's "Heuristic Learning" experiments propose a new paradigm for Agentic AI, suggesting that intelligent agents can improve not just by training neural networks, but also by autonomously writing and refining code based on environmental feedback. In the experiment, a coding agent (powered by Codex) was tasked with developing and maintaining a programmatic strategy for the Atari game Breakout. Starting from a basic prompt, the agent iteratively wrote code, ran the game, analyzed logs and video replays to identify failures, and then modified the code. Through this engineering loop of "code-run-debug-update," it evolved a pure Python heuristic strategy that achieved a perfect score of 864 in Breakout and performed competitively with deep reinforcement learning (RL) algorithms in MuJoCo control tasks like Ant and HalfCheetah. This approach, termed Heuristic Learning (HL), contrasts with Deep RL. In HL, experience is captured in readable, modifiable code, tests, logs, and configurations—a software system—rather than being encoded solely into opaque neural network weights. This offers potential advantages in explainability, auditability for safety-critical applications, easier integration of regression tests to combat catastrophic forgetting, and more efficient sample use in early learning stages, as demonstrated in broader tests on 57 Atari games. However, the blog acknowledges clear limitations. Programmatic strategies struggle with tasks requiring long-horizon planning or complex perception (e.g., Montezuma's Revenge), areas where neural networks excel. The future vision is a hybrid architecture: specialized neural networks for fast perception (System 1), HL systems for rules, safety, and local recovery (also System 1), and LLM agents providing high-level feedback and learning from the HL system's data (System 2). 
The core proposition is that in the era of capable coding agents, a significant portion of an AI's learned experience could be maintained as an auditable, evolving software system.

marsbit · 47m ago


Your Claude Will Dream Tonight, Don't Disturb It

This article explores the recent phenomenon of AI companies increasingly using anthropomorphic language—like "thinking," "memory," "hallucination," and now "dreaming"—to describe machine learning processes. Focusing on Anthropic's newly announced "Dreaming" feature for its Claude Agent platform, the piece explains that this function is essentially an automated, offline batch processing of an agent's operational logs. It analyzes past task sessions to identify patterns, optimize future actions, and consolidate learnings into a persistent memory system, akin to a form of reinforcement learning and self-correction. The article draws parallels to similar features in other AI agent systems like Hermes Agent and OpenClaw, which also implement mechanisms for reviewing historical data, extracting reusable "skills," and strengthening long-term memory. It notes a key difference from human dreaming: these AI "dreams" still consume computational resources and user tokens. Further context is provided by discussing the technical challenges of managing AI "memory" or context, highlighting the computational expense of large context windows and innovations like Subquadratic's new model claiming drastically longer contexts. The core critique argues that this strategic use of human-centric vocabulary does more than market products; it subtly reshapes user perception. By framing algorithms with terms associated with consciousness, companies blur the line between tool and autonomous entity. This linguistic shift can influence user expectations, tolerance for errors, and even perceptions of responsibility when systems fail, potentially diverting scrutiny from the companies and engineers behind the technology. The article concludes by speculating that terms like "daydreaming" for predictive task simulation might be next, continuing this trend of embedding the idea of an "inner life" into computational processes.

marsbit · 49m ago

