U.S. Banks Push Congress to Restrict Stablecoins and Crypto Data Access

TheNewsCrypto · Published 2026-01-22 · Updated 2026-01-22

Summary

U.S. banks, led by the American Bankers Association (ABA), are urging Congress to impose restrictions on stablecoins and financial data access. They advocate a ban on yield-bearing stablecoins, warning that such products could draw trillions of dollars out of bank deposits, reducing lending capacity and creating financial stability risks. Banks also seek changes to Section 1033 banking rules to impose stronger liability requirements and potential fees on data sharing, which currently allows users to connect bank accounts to crypto platforms. Crypto and fintech groups argue these efforts are anti-competitive, designed to shield banks from innovation, and could effectively kill open banking by blocking connections or charging fees. The dispute has delayed a key crypto market structure bill in the U.S. Senate.

Traditional U.S. banks are pushing lawmakers to change crypto rules in ways that would limit stablecoins and financial data sharing. The push is led by the American Bankers Association (ABA), the major trade group for U.S. banks.

Why Banks Want Stablecoin Yields Banned

The ABA is demanding a ban on stablecoin yield. It argues that yield-bearing stablecoins could pull money out of bank deposits and reduce banks' ability to lend, creating financial stability risks. Brian Moynihan, CEO of Bank of America, warns that trillions of dollars would move from banks into stablecoins if yield is allowed.

In response, crypto and fintech groups argue that such a ban would shield banks from competition, make stablecoins less useful, and lock innovation behind bank-controlled products.

The ABA is also pushing to change the Section 1033 banking rules, which give users the right to share their financial data with the apps they choose. Under the existing rules, users can connect their bank accounts to crypto wallets, exchanges, stablecoin apps, and fintech tools. Banks oppose the current rules and want stronger liability requirements and potential fees or restrictions on data sharing.

Crypto and fintech groups warn that banks could use these rule changes to their advantage by charging fees for data access, blocking connections, and slowly killing open banking without banning it outright.

Stablecoin Yield Dispute Delays Key U.S. Crypto Bill

These disagreements have slowed progress on a major crypto market structure bill in the U.S. Senate, which addresses who regulates crypto, how stablecoins work, and how crypto fits into traditional finance. The debate over stablecoin yield and financial data sharing has delayed a vote in the Senate Banking Committee, and Coinbase has withdrawn its support for the bill.

Overall, banks want crypto to grow within the banking system, while crypto firms favor decentralized control over digital assets, user access, and financial data.

