US Fed Moves to End ‘Reputation Risk’ Rule Amid Crypto Debanking Concerns

TheNewsCrypto · Published 2026-02-24 · Updated 2026-02-24

Summary

The US Federal Reserve is moving to codify a rule that would eliminate "reputation risk" as a factor in banking supervision, a practice blamed for widespread crypto debanking. Announced on February 23, the proposal seeks public feedback for two months. Fed Vice Chair Michelle Bowman stated that supervisors have been improperly pressuring banks to close accounts based on customers' political views, religious beliefs, or involvement in lawful but disfavored businesses like crypto, calling such discrimination unlawful. Senator Lummis and industry figures praised the move, viewing it as ending "Operation Chokepoint 2.0," a term describing alleged government efforts to cut off crypto firms from banking services.

The US Federal Reserve is moving to codify a rule eliminating “reputation risk” from banking supervision, a practice some have blamed for a wave of crypto debanking in recent years.

The Fed first made changes in June 2025, announcing that it had directed its supervisors to stop pressuring banks to close client accounts over reputation risk and stating that banks may base decisions about clients only on financial risk management.

On February 23, the Fed announced in a press release that it is seeking feedback on a proposal to codify this policy, with a two-month window for submitting comments.

Michelle Bowman, the Fed’s vice chair for supervision, said the central bank has heard troubling accounts of debanking, in which supervisors cite reputation-risk concerns to pressure financial institutions into closing customers’ accounts over their political views, religious beliefs, or participation in disfavored but lawful businesses.

She added that discrimination by financial institutions on these grounds is unlawful and has no place in the Federal Reserve’s supervisory framework. The same day, Senator Cynthia Lummis posted on X praising the move, adding that it is not the Fed’s role to play judge and jury over banking digital asset firms.

She wrote, “Happy to see this significant step to permanently eliminate ‘reputation risk’ from Fed policy and put Operation Chokepoint 2.0 to rest so America can be the digital asset capital of the world.”

Alex Thorn, head of firmwide research at Galaxy Digital, also applauded the move, writing on X on February 23 that the “chokepoint 2.0 rollback carries on.”

The term ‘Operation Chokepoint 2.0’ is used by many in the crypto industry to describe what they viewed as a coordinated effort by the Biden administration and the banking sector to cut crypto companies off from traditional banking services.

Highlighted Crypto News Today:

Crypto.com Secures Conditional OCC Approval to Launch National Trust Bank

Tags: Crypto, Fed, USA

Related Questions

Q: What is the US Federal Reserve proposing to eliminate from banking supervision?

A: The US Federal Reserve is proposing to eliminate 'reputation risk' from banking supervision.

Q: Why has the 'reputation risk' rule been criticized in recent years?

A: It has been criticized for causing a wave of crypto debanking, where banks close accounts based on perceived reputation risk rather than financial risk.

Q: What did Vice Chair Michelle Bowman say about debanking practices?

A: She stated that supervisors have pressured financial institutions to debank customers due to political views, religious beliefs, or participation in disfavored but lawful businesses, calling such discrimination unlawful.

Q: What is 'Operation Chokepoint 2.0' as referred to by the crypto industry?

A: It is a term used by the crypto industry to describe a perceived coordinated effort by the US government and banking sector to prevent crypto companies from accessing traditional banking services.

Q: How did Senator Lummis and Galaxy Digital's Alex Thorn react to the Fed's proposal?

A: Both praised the move, with Lummis calling it a step to make America the digital asset capital of the world, and Thorn noting the 'chokepoint 2.0 rollback' continues.

Related Reading

OpenAI Post-Training Engineer Weng Jiayi Proposes a New Paradigm Hypothesis for Agentic AI

OpenAI engineer Weng Jiayi's "Heuristic Learning" experiments propose a new paradigm for Agentic AI, suggesting that intelligent agents can improve not just by training neural networks, but also by autonomously writing and refining code based on environmental feedback. In the experiment, a coding agent (powered by Codex) was tasked with developing and maintaining a programmatic strategy for the Atari game Breakout. Starting from a basic prompt, the agent iteratively wrote code, ran the game, analyzed logs and video replays to identify failures, and then modified the code. Through this engineering loop of "code-run-debug-update," it evolved a pure Python heuristic strategy that achieved a perfect score of 864 in Breakout and performed competitively with deep reinforcement learning (RL) algorithms in MuJoCo control tasks like Ant and HalfCheetah.

This approach, termed Heuristic Learning (HL), contrasts with Deep RL. In HL, experience is captured in readable, modifiable code, tests, logs, and configurations—a software system—rather than being encoded solely into opaque neural network weights. This offers potential advantages in explainability, auditability for safety-critical applications, easier integration of regression tests to combat catastrophic forgetting, and more efficient sample use in early learning stages, as demonstrated in broader tests on 57 Atari games.

However, the blog acknowledges clear limitations. Programmatic strategies struggle with tasks requiring long-horizon planning or complex perception (e.g., Montezuma's Revenge), areas where neural networks excel. The future vision is a hybrid architecture: specialized neural networks for fast perception (System 1), HL systems for rules, safety, and local recovery (also System 1), and LLM agents providing high-level feedback and learning from the HL system's data (System 2). The core proposition is that in the era of capable coding agents, a significant portion of an AI's learned experience could be maintained as an auditable, evolving software system.

marsbit · 29 minutes ago

Your Claude Will Dream Tonight, Don't Disturb It

This article explores the recent phenomenon of AI companies increasingly using anthropomorphic language—like "thinking," "memory," "hallucination," and now "dreaming"—to describe machine learning processes. Focusing on Anthropic's newly announced "Dreaming" feature for its Claude Agent platform, the piece explains that this function is essentially an automated, offline batch processing of an agent's operational logs. It analyzes past task sessions to identify patterns, optimize future actions, and consolidate learnings into a persistent memory system, akin to a form of reinforcement learning and self-correction.

The article draws parallels to similar features in other AI agent systems like Hermes Agent and OpenClaw, which also implement mechanisms for reviewing historical data, extracting reusable "skills," and strengthening long-term memory. It notes a key difference from human dreaming: these AI "dreams" still consume computational resources and user tokens. Further context is provided by discussing the technical challenges of managing AI "memory" or context, highlighting the computational expense of large context windows and innovations like Subquadratic's new model claiming drastically longer contexts.

The core critique argues that this strategic use of human-centric vocabulary does more than market products; it subtly reshapes user perception. By framing algorithms with terms associated with consciousness, companies blur the line between tool and autonomous entity. This linguistic shift can influence user expectations, tolerance for errors, and even perceptions of responsibility when systems fail, potentially diverting scrutiny from the companies and engineers behind the technology. The article concludes by speculating that terms like "daydreaming" for predictive task simulation might be next, continuing this trend of embedding the idea of an "inner life" into computational processes.

marsbit · 31 minutes ago
