Ripple takes RLUSD multichain: Stablecoin expands to L2s with Wormhole

ambcrypto · Published 2025-12-15 · Updated 2025-12-15

Introduction

Ripple has expanded its U.S.-regulated stablecoin, RLUSD, to Layer 2 networks including Optimism, Base, Ink, and Unichain using Wormhole’s Native Token Transfers (NTT) standard. This enables native multichain issuance, reducing bridge risks and ensuring regulatory compliance. RLUSD is the first U.S. Trust-regulated stablecoin on these L2s. Backed by a NYDFS Trust Charter and with a pending OCC application, it aims to be the first dual-regulated stablecoin. The expansion enhances utility for both XRP and RLUSD, strengthening Ripple’s role in institutional DeFi and cross-chain liquidity.

Ripple has expanded its U.S.-regulated stablecoin, Ripple USD [RLUSD], to Layer 2 networks for the first time. This marks a major step toward a multichain strategy ahead of its full launch next year.

The testing rollout begins on Optimism, Base, Ink, and Unichain, and is powered by Wormhole’s Native Token Transfers [NTT] standard — a system designed to move assets across chains without relying on wrapped tokens or traditional bridge architectures.

The move positions RLUSD as the first U.S. Trust-regulated stablecoin to deploy natively on these L2 ecosystems.

RLUSD goes multichain with native issuance on L2s

Unlike bridged assets, Wormhole’s NTT framework enables RLUSD to maintain native issuance and control across every supported chain.

This approach reduces bridge risk, preserves liquidity integrity, and creates a regulatory-compliant pathway for institutional DeFi expansion.

According to the announcement, Optimism serves as the entry point, with Base, Ink, and Unichain interconnected through the same NTT infrastructure — allowing Ripple to scale RLUSD across multiple environments without fragmentation.
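Conceptually, the burn-and-mint model behind native multichain issuance can be sketched as follows. This is an illustrative simulation of the general technique, not the actual Wormhole NTT contract interface; all class and function names here are hypothetical. The key property it demonstrates is that a transfer burns tokens on the source chain and mints native (not wrapped) tokens on the destination, so total supply stays constant and no IOU token is created.

```python
# Illustrative sketch of burn-and-mint native token transfers.
# Not the Wormhole NTT API; names are hypothetical.

class NativeToken:
    """A token natively issued on a single chain."""
    def __init__(self, chain: str):
        self.chain = chain
        self.supply = 0

    def mint(self, amount: int) -> None:
        self.supply += amount

    def burn(self, amount: int) -> None:
        if amount > self.supply:
            raise ValueError("insufficient native supply to burn")
        self.supply -= amount

def ntt_transfer(source: NativeToken, dest: NativeToken, amount: int) -> None:
    """Burn on the source chain, then mint natively on the destination.

    Unlike a lock-and-wrap bridge, the destination receives the native
    asset, so liquidity is not fragmented into wrapped variants.
    """
    source.burn(amount)  # tokens leave circulation on the source chain
    dest.mint(amount)    # native tokens enter circulation on the destination

ethereum = NativeToken("Ethereum")
optimism = NativeToken("Optimism")

ethereum.mint(1_000)                    # issuer mints on the hub chain
ntt_transfer(ethereum, optimism, 250)   # move 250 tokens cross-chain

total = ethereum.supply + optimism.supply
print(ethereum.supply, optimism.supply, total)  # 750 250 1000
```

Because every chain holds native supply, the issuer retains mint/burn control everywhere, which is what makes the model compatible with regulatory oversight of total circulating supply.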

Ripple’s SVP of Stablecoins, Jack McDonald, said the expansion reflects rising institutional demand for a fully compliant stablecoin that can move across chains under predictable oversight.

“Stablecoins are the gateway to DeFi and institutional adoption,” he said. “By launching RLUSD on these L2 networks, we are setting the standard where compliance and on-chain efficiency converge.”

Why regulation matters more than ever

RLUSD launched under a New York Department of Financial Services [NYDFS] Trust Charter, one of the most stringent regulatory frameworks in crypto.

Last week, AMBCrypto also reported that Ripple has applied for an OCC charter. If approved, this would make RLUSD the first stablecoin simultaneously overseen at both the state and federal levels.

No existing major stablecoin, including USDC or USDT, operates under this dual structure.

Ripple now holds more than 75 licenses globally, with recent approvals in Dubai and Abu Dhabi further strengthening RLUSD’s international reach.

Utility boost for both XRP and RLUSD

The multichain expansion is designed to strengthen XRP’s role in cross-chain liquidity.

Hex Trust recently issued wrapped XRP [wXRP] to support interoperability, enabling XRP holders to pair wXRP with RLUSD on supported chains for swaps, payments, lending, or yield-generating applications.

Data from DefiLlama shows that Ethereum currently holds the largest share of RLUSD supply [79.2%], worth over $1 billion, with the remaining 20.8% on the XRP Ledger [XRPL].


Final Thoughts

  • RLUSD’s L2 expansion sets a new regulatory benchmark for multichain stablecoins and positions Ripple as a direct challenger to USDC’s dominance.
  • The move enhances utility for both XRP and RLUSD, creating a deeper role for Ripple in institutional DeFi and cross-chain liquidity.
