World Liberty proposes using 5% of treasury to boost its stablecoin

Cointelegraph | Published on 2025-12-18 | Last updated on 2025-12-18

Abstract

World Liberty Financial, backed by the Trump family, has proposed allocating 5% of its WLFI token treasury—worth approximately $120 million—to expand the supply of its USD1 stablecoin. The goal is to increase adoption and competitiveness in the stablecoin market by forming new CeFi and DeFi partnerships. The team argues that a larger USD1 circulation would drive demand for WLFI-governed services and strengthen the ecosystem. The proposal is currently under community vote, with initial responses showing slight opposition. USD1, launched in March, is the seventh-largest USD-pegged stablecoin with a $2.74 billion market cap, but still trails significantly behind competitors like PayPal’s PYUSD.

Trump family-backed World Liberty Financial has proposed using 5% of the project’s WLFI token treasury to grow the supply of its stablecoin USD1.

The proposal was posted to the World Liberty Financial governance forum on Wednesday, with the team highlighting the importance of increasing USD1 supply to keep up with “an increasingly competitive stablecoin landscape.”

The proposal outlines that the additional supply would be used to spread “USD1 use cases across select high-profile CeFi & DeFi partnerships,” with increased adoption creating more “value capture” opportunities in the WLFI ecosystem.

“As USD1 grows, more users, platforms, institutions, and chains integrate with World Liberty Financial infrastructure. This increases the scale and influence of the network governed by WLFI holders,” the team said.

“More USD1 in circulation leads to more demand for WLFI-governed services, integrations, liquidity incentives, and ecosystem programs,” it added.

Source: World Liberty Financial

World Liberty Financial’s WLFI token started trading on exchanges in September. Leading up to the launch, the project indicated that 19.96 billion WLFI from the total supply would be allocated to the treasury. At current prices, that allocation is worth almost $2.4 billion, with a 5% unlock equating to around $120 million.
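As a rough check of those figures (an inference from the numbers above rather than anything stated in the proposal itself): $2.4 billion spread across 19.96 billion WLFI implies a price of roughly $0.12 per token, and a 5% unlock of about 998 million WLFI works out to approximately $120 million at that price.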

The team outlined three voting options in the proposal: for, against or abstain. The vote is now live, though it is not clear how the voting is being conducted.

Related: Binance mulls new US strategy, CZ potentially reducing stake: Report

Reaction to the proposal is currently mixed, with “against” responses slightly outnumbering those in favor.

Community responses to the proposal. Source: World Liberty Financial

The project’s stablecoin launched in March and has a market cap of $2.74 billion according to CoinGecko data, making it the seventh-largest USD-pegged stablecoin on the market.

The 5% treasury unlock may help spur USD1’s growth; however, the stablecoin has a lot of catching up to do if it is to displace competitors, with sixth-placed PYUSD from PayPal holding a market cap $1.1 billion larger than USD1’s (roughly $3.8 billion).

Magazine: Big questions: Would Bitcoin survive a 10-year power outage?
