Robinhood CEO: Five Years After the GameStop Incident, We Are Committed to Unlocking Real-Time Settlement for Retail Traders Through Tokenization

marsbit · Published 2026-01-29 · Updated 2026-01-29

Summary

Five years after the GameStop trading halt, Robinhood CEO Vlad Tenev reflects on the event and outlines the company's push for real-time settlement through tokenization. The 2021 trading restrictions stemmed from legacy clearinghouse risk rules tied to the slow T+2 settlement cycle, which required brokers to post massive capital deposits during periods of high volatility. Robinhood's initial response was to advocate for, and help achieve, the move to T+1 settlement, but Tenev argues that this remains insufficient in today's 24/7 markets. The solution he proposes is tokenization: converting stocks into blockchain-based tokens. This enables real-time settlement, drastically reducing systemic risk, lowering costs, and allowing features like 24/7 trading, fractional ownership, and DeFi integration. Robinhood has already launched over 2,000 tokenized US stocks in Europe, with plans to enable round-the-clock trading and self-custody. Tenev emphasizes that US adoption requires a clear regulatory framework, noting the SEC's recent openness to innovation and the importance of the CLARITY Act in establishing modern rules for stock tokenization. The goal is to prevent future trading restrictions and fully unlock real-time settlement for retail investors.

Author: Vlad Tenev, Co-founder and CEO of Robinhood

Compiled by: Hu Tao, ChainCatcher

What exactly happened? And how can we ensure such an event never happens again?

Five years ago today, Robinhood and other brokerages were forced to halt purchases of several "meme stocks," most notably GameStop. It was one of the most bizarre and dramatic episodes in recent stock market memory.

The root cause of this trading halt lay in a complex set of clearinghouse risk management rules designed to mitigate the risks stemming from the then two-day settlement cycle for U.S. stock trades. These rules required brokerages to deposit massive amounts of capital to cover the risk between the trade and settlement of these highly volatile "meme stocks." What happens when slow, antiquated financial infrastructure collides with unprecedented trading volume and volatility in a handful of stocks? Massive deposit requirements, trading restrictions, and millions of angry customers.
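To see why those deposit requirements ballooned, consider a toy calculation. This is my own illustration, not the clearinghouse's actual margin methodology: the capital a broker must post grows with the notional value of customer trades still awaiting settlement and with how far prices could move before settlement completes, so a longer settlement window plus extreme volatility compounds quickly, while real-time settlement collapses the exposure.

```python
# Toy illustration only (not the actual NSCC/DTCC margin formula): deposit
# requirements scale with unsettled notional and potential price movement
# over the remaining settlement window. Square-root-of-time scaling is an
# assumption made for this sketch.

def unsettled_exposure(daily_notional_usd: float, settlement_days: int) -> float:
    """Notional value of trades that have executed but not yet settled."""
    return daily_notional_usd * settlement_days

def rough_deposit(daily_notional_usd: float, settlement_days: int,
                  daily_volatility: float) -> float:
    """Crude risk-based deposit: exposure scaled by how far prices might
    move before the trades settle."""
    exposure = unsettled_exposure(daily_notional_usd, settlement_days)
    return exposure * daily_volatility * settlement_days ** 0.5

# Ordinary day: $100M of buys in a stock moving ~2% a day, T+2 settlement.
print(f"calm:     ${rough_deposit(100e6, 2, 0.02):,.0f}")
# Meme-stock frenzy: 10x the volume, ~30% daily swings, same T+2 window.
print(f"frenzied: ${rough_deposit(1_000e6, 2, 0.30):,.0f}")
# Same frenzy with real-time (T+0) settlement: the exposure collapses to zero.
print(f"T+0:      ${rough_deposit(1_000e6, 0, 0.30):,.0f}")
```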

Retail investors trying to buy GameStop stock were understandably furious. In their eyes, Robinhood went from hero to villain. I was just one month into my role as CEO of Robinhood, facing my first major crisis. After many team members worked around the clock for 72 hours to put out the immediate fire and raise over $3 billion to shore up our capital reserves, we finally had a moment to step back and assess the situation. I vowed to do everything in my power not only to improve Robinhood's resilience in a similar scenario but also to advocate for systemic improvements to ensure such an event never happens again.

We strongly advocated for real-time settlement of U.S. stock trades, ultimately helping to shorten the settlement cycle from 2 days (T+2) to T+1—arguably the most significant achievement during Gensler's tenure as Chair of the U.S. Securities and Exchange Commission (SEC), despite the otherwise regrettable aspects of that period.

But in an era of 24-hour news cycles and real-time market reactions, a T+1 settlement cycle is still too long, especially when it effectively becomes T+3 on Fridays and T+4 over long weekends. Our pursuit of real-time settlement continues, but within the traditional stock market, achieving it has been elusive due to the need to manage numerous legacy stakeholders. Clearly, a new approach was needed.

Enter tokenization. Tokenization is the process of converting an asset, like a stock, into a token that exists on a blockchain. Among its many advantages—such as lower costs, native fractionalization, and 24/7 trading—putting stocks on-chain as tokens allows them to benefit from the real-time settlement properties of blockchain technology. The elimination of long settlement cycles means significantly reduced systemic risk and less strain on clearinghouses and brokers, allowing customers to trade freely, anytime, anywhere.
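The settlement benefit is easiest to see concretely. Below is a minimal sketch, using hypothetical data structures of my own, not Robinhood's implementation or any particular chain's API, of delivery-versus-payment settlement on a shared ledger: the stock token and the cash leg move in one atomic step, so the trade is settled the moment it executes and there is no multi-day window of exposure for a clearinghouse to collateralize.

```python
# Minimal sketch of atomic (delivery-versus-payment) settlement for a
# tokenized stock. Hypothetical names and structures; not Robinhood's
# implementation or a specific blockchain's API.

from dataclasses import dataclass, field

@dataclass
class Ledger:
    cash: dict = field(default_factory=dict)    # account -> cash-token balance
    shares: dict = field(default_factory=dict)  # account -> stock-token balance

    def settle_trade(self, buyer: str, seller: str, qty: int, price: float) -> None:
        """Atomically swap qty stock tokens for qty * price cash tokens."""
        cost = qty * price
        if self.cash.get(buyer, 0) < cost or self.shares.get(seller, 0) < qty:
            raise ValueError("trade rejected: insufficient balance on one leg")
        # Both legs update together; a failure above leaves the ledger untouched,
        # so there is never a half-settled trade to margin.
        self.cash[buyer] -= cost
        self.cash[seller] = self.cash.get(seller, 0) + cost
        self.shares[seller] -= qty
        self.shares[buyer] = self.shares.get(buyer, 0) + qty

ledger = Ledger(cash={"alice": 10_000.0}, shares={"bob": 50})
ledger.settle_trade(buyer="alice", seller="bob", qty=10, price=250.0)
print(ledger.cash, ledger.shares)   # settled the instant the trade executes
```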

We've already seen this approach work. In Europe, Robinhood has launched tokens representing over 2,000 U.S.-listed stocks. These tokens allow European traders to invest in U.S. equities and receive dividend payments. In the coming months, we plan to enable 24/7 trading and decentralized finance (DeFi) services, where investors can self-custody their stock tokens and engage in activities like lending and staking.

As the advantages become increasingly clear, I believe it's imperative for the U.S. to embrace this technology. We are already seeing some progress: major U.S. exchanges and clearinghouses recently announced plans for stock tokenization.

But without a clear regulatory framework, these efforts will be in vain. Fortunately, we now have a prime opportunity. The current leadership at the U.S. Securities and Exchange Commission (SEC) is actively embracing innovation and encouraging experimentation with tokenization. Furthermore, Congress is actively considering the CLARITY Act, significant crypto legislation that would require the SEC to continue advancing this technology and establish modern rules for stock tokenization. This act would ensure that subsequent SEC leadership cannot abandon or reverse the progress made by the current commission.

By working with the U.S. Securities and Exchange Commission (SEC) and advocating for sound U.S. stock tokenization guidelines through CLARITY, we can collectively ensure that trading restrictions like those in 2021 never happen again. Let's seize the moment and finally unlock real-time settlement for retail traders.

Related Questions

Q: What was the root cause that forced Robinhood and other brokers to halt purchases of meme stocks like GameStop five years ago?

A: The root cause was a complex set of clearinghouse risk management rules designed to mitigate the risk from the two-day settlement cycle (T+2) for US stock trading at the time. These rules required brokers to deposit huge amounts of capital to cover the risk between the trade and settlement of the highly volatile meme stocks.

Q: What specific improvement did Robinhood advocate for and help achieve in the US stock settlement system after the incident?

A: Robinhood advocated for real-time settlement and helped achieve a reduction of the US stock settlement cycle from two days (T+2) to one day (T+1).

Q: According to the article, what is tokenization and what is its key advantage for stock trading mentioned in relation to settlement?

A: Tokenization is the process of converting assets, like stocks, into tokens that exist on a blockchain. Its key advantage for settlement is that it enables real-time settlement, which eliminates the long settlement cycle, drastically reduces systemic risk, and relieves pressure on clearinghouses and brokers.

Q: What has Robinhood already launched in Europe that demonstrates the feasibility of tokenizing US-listed stocks?

A: In Europe, Robinhood has launched over 2,000 tokens representing US-listed stocks, which enable European traders to invest in US stocks and receive dividend payments.

Q: What US legislative act does the article mention as being crucial for establishing a regulatory framework for stock tokenization?

A: The article mentions the CLARITY Act, a significant piece of crypto legislation being actively considered by Congress, which would require the SEC to advance this technology and create modern rules for stock tokenization.

Related Reading

OpenAI Post-Training Engineer Weng Jiayi Proposes a New Paradigm Hypothesis for Agentic AI

OpenAI engineer Weng Jiayi's "Heuristic Learning" experiments propose a new paradigm for Agentic AI, suggesting that intelligent agents can improve not just by training neural networks, but also by autonomously writing and refining code based on environmental feedback. In the experiment, a coding agent (powered by Codex) was tasked with developing and maintaining a programmatic strategy for the Atari game Breakout. Starting from a basic prompt, the agent iteratively wrote code, ran the game, analyzed logs and video replays to identify failures, and then modified the code. Through this engineering loop of "code-run-debug-update," it evolved a pure Python heuristic strategy that achieved a perfect score of 864 in Breakout and performed competitively with deep reinforcement learning (RL) algorithms in MuJoCo control tasks like Ant and HalfCheetah. This approach, termed Heuristic Learning (HL), contrasts with Deep RL. In HL, experience is captured in readable, modifiable code, tests, logs, and configurations—a software system—rather than being encoded solely into opaque neural network weights. This offers potential advantages in explainability, auditability for safety-critical applications, easier integration of regression tests to combat catastrophic forgetting, and more efficient sample use in early learning stages, as demonstrated in broader tests on 57 Atari games. However, the blog acknowledges clear limitations. Programmatic strategies struggle with tasks requiring long-horizon planning or complex perception (e.g., Montezuma's Revenge), areas where neural networks excel. The future vision is a hybrid architecture: specialized neural networks for fast perception (System 1), HL systems for rules, safety, and local recovery (also System 1), and LLM agents providing high-level feedback and learning from the HL system's data (System 2). The core proposition is that in the era of capable coding agents, a significant portion of an AI's learned experience could be maintained as an auditable, evolving software system.


Your Claude Will Dream Tonight, Don't Disturb It

This article explores the recent phenomenon of AI companies increasingly using anthropomorphic language—like "thinking," "memory," "hallucination," and now "dreaming"—to describe machine learning processes. Focusing on Anthropic's newly announced "Dreaming" feature for its Claude Agent platform, the piece explains that this function is essentially an automated, offline batch processing of an agent's operational logs. It analyzes past task sessions to identify patterns, optimize future actions, and consolidate learnings into a persistent memory system, akin to a form of reinforcement learning and self-correction. The article draws parallels to similar features in other AI agent systems like Hermes Agent and OpenClaw, which also implement mechanisms for reviewing historical data, extracting reusable "skills," and strengthening long-term memory. It notes a key difference from human dreaming: these AI "dreams" still consume computational resources and user tokens. Further context is provided by discussing the technical challenges of managing AI "memory" or context, highlighting the computational expense of large context windows and innovations like Subquadratic's new model claiming drastically longer contexts. The core critique argues that this strategic use of human-centric vocabulary does more than market products; it subtly reshapes user perception. By framing algorithms with terms associated with consciousness, companies blur the line between tool and autonomous entity. This linguistic shift can influence user expectations, tolerance for errors, and even perceptions of responsibility when systems fail, potentially diverting scrutiny from the companies and engineers behind the technology. The article concludes by speculating that terms like "daydreaming" for predictive task simulation might be next, continuing this trend of embedding the idea of an "inner life" into computational processes.
