Global Banking Regulator Proposes Changes to Criteria That Give Stablecoins Preferential Risk Treatment

CoinDesk | Policy | Published 2023-12-13, updated 2023-12-14


The Basel Committee on Banking Supervision wants to tighten the requirements that allow stablecoins to qualify as less risky than unbacked cryptocurrencies like bitcoin.

The Basel Committee on Banking Supervision (BCBS) proposed revisions to the criteria for treating stablecoins as less risky than unbacked cryptocurrencies such as bitcoin (BTC) in a consultative document published Thursday.

CoinDesk reported last week that the global banking regulator was looking to revise its classification criteria for stablecoins, which are cryptocurrencies designed to hold their value on par with reserve assets like the U.S. dollar. The consultation, released Thursday, lays out the proposed revisions in detail.

The standard-setter has so far taken a tough stance on crypto, recommending the maximum possible risk weight of 1,250% for free-floating digital assets like bitcoin, which means banks have to hold capital equal to their full exposure. Banks are also not allowed to allocate more than 2% of their core capital to these riskier assets. The BCBS will not be making any changes to these standards, it said in a statement.
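To see why a 1,250% risk weight amounts to dollar-for-dollar capital, here is a minimal arithmetic sketch. It assumes the standard Basel minimum capital ratio of 8% (1 / 0.08 = 12.5, i.e. 1,250%); the function name and figures are illustrative, not from the BCBS document.

```python
def required_capital(exposure: float, risk_weight: float, min_ratio: float = 0.08) -> float:
    """Capital a bank must hold against an exposure: exposure x risk weight x minimum ratio."""
    return exposure * risk_weight * min_ratio

# A $100 bitcoin exposure at the maximum 1,250% risk weight (expressed as 12.5):
capital = required_capital(100.0, 12.5)
print(capital)  # 100.0 -- capital matching the entire exposure
```

By contrast, an asset qualifying for preferential treatment would be weighted according to its underlying reserve assets, yielding a far smaller capital charge.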


However, cryptos with "effective stabilization mechanisms" qualify for "preferential Group 1b regulatory treatment." This means stablecoins can be subject to "capital requirements based on the risk weights of underlying exposures as set out in the existing Basel Framework," instead of the tougher requirements set for bitcoin and the like.

Right now, stablecoins must be "redeemable at all times" to qualify for this preferential regulatory treatment. This ensures "only stablecoins issued by supervised and regulated entities that have robust redemption rights and governance are eligible for inclusion," the BCBS has said.

This story will be updated.

Edited by Sheldon Reback.
