a16z: The 'Super Bowl Moment' of Prediction Markets

marsbit · Published 2026-02-09 · Last updated 2026-02-09


On February 8th US time (7:30 AM Beijing Time on February 9th), hundreds of millions of NFL fans gathered in front of their screens to watch the Super Bowl, with many also keeping an eye on another screen—closely monitoring the trading dynamics of prediction markets, where betting categories encompass everything from championship outcomes and final scores to the passing yards of each team's quarterback.

Over the past year, the trading volume of US prediction markets reached at least $27.9 billion, covering a vast array of subjects, from sports event results and economic policy decisions to new product launches. However, the nature of these markets has always been controversial: Are they a form of trading or gambling? A tool for aggregating collective wisdom for news, or a means of scientific validation? And is the current development model already the optimal solution?

As an economist who has long studied markets and incentive mechanisms, my answer begins with a simple premise: prediction markets are, in essence, markets, and markets are core tools for allocating resources and aggregating information. The operating logic of a prediction market is to issue assets tied to specific events: when the event occurs, traders holding the asset receive a payout. People then trade based on their own judgment of the event's outcome, and it is this trading that realizes the market's core value of aggregating information.

From a market-design perspective, the information in prediction markets is far more valuable than the opinion of a single sports commentator, or even the betting odds from Las Vegas. The primary goal of a traditional sportsbook is not to predict the outcome of games but to 'balance the book': it adjusts odds to attract money to whichever side currently has less betting volume. Las Vegas odds are set to steer bettors toward the under-bet side, whereas prediction markets let people trade on their genuine judgment of the outcome.

Prediction markets also make it easier to extract effective signals from vast amounts of information. For example, if you want to gauge the likelihood of new tariffs being imposed, deriving this from soybean futures prices would be an indirect process—as futures prices are influenced by multiple factors. But if you ask this question directly in a prediction market, you can get a more straightforward answer.

The prototype of this model can be traced back to 16th-century Europe, where people even placed bets on 'the next Pope.' The development of modern prediction markets is rooted in contemporary theories of economics, statistics, mechanism design, and computer science. In the 1980s, Charles Plott of Caltech and Shyam Sunder of Yale University established its formal academic framework, and soon after, the first modern prediction market—the Iowa Electronic Markets—was launched.

The mechanism of prediction markets is actually quite simple. Take the bet 'Will Seattle Seahawks quarterback Sam Darnold pass the ball within the opponent's one-yard line?' as an example. The market issues corresponding trading contracts; if the event occurs, each contract pays the holder $1. As traders continuously buy and sell this contract, the market price of the contract can be interpreted as the probability of the event occurring, representing the collective judgment of the traders. For instance, a contract priced at $0.50 implies the market believes there is a 50% chance the event will happen.
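The arithmetic behind this pricing can be sketched in a few lines of Python. This is an illustrative toy, not any real exchange's API; the function names are my own. A binary contract pays $1 on "yes," so its dollar price doubles as the market's implied probability:

```python
def implied_probability(price: float) -> float:
    """A binary contract pays $1 if the event occurs, so its price
    (in dollars) can be read directly as the market's implied
    probability of the event."""
    if not 0.0 <= price <= 1.0:
        raise ValueError("price must lie between $0 and $1")
    return price

def expected_profit(price: float, your_probability: float) -> float:
    """Expected gross profit per contract for a buyer who assigns
    `your_probability` to the event: the $1 payout weighted by that
    probability, minus the purchase price."""
    return your_probability * 1.0 - price

# A contract trading at $0.50 implies a 50% market probability.
print(implied_probability(0.50))                  # 0.5
# A trader who thinks the true probability is 67% sees positive
# expected value in buying at $0.50 -- so they buy.
print(round(expected_profit(0.50, 0.67), 2))      # 0.17
```

If the trader's probability estimate were below the price, the same calculation would come out negative, and selling rather than buying would be the rational trade.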

If you judge the probability of the event to be higher than 50% (say, 67%), you can buy this contract. If the event ultimately occurs, the contract you purchased for $0.50 pays out $1, for a gross profit of $0.50. Your buying also pushes up the market price of the contract, raising the corresponding probability estimate and sending a signal to the market: someone believes the current price underestimates the likelihood of the event. Conversely, someone who believes the market overestimates the probability can sell, driving down both the price and the probability estimate.
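One standard way to implement a market where buying mechanically raises the implied probability is Robin Hanson's logarithmic market scoring rule (LMSR), an automated market maker widely used in prediction-market research. The sketch below is a minimal two-outcome version and an assumption on my part; the article does not say which mechanism any particular platform uses, and real venues often use order books instead:

```python
import math

class LMSRMarket:
    """Minimal two-outcome LMSR automated market maker.

    Cost function: C(q) = b * log(exp(q_yes/b) + exp(q_no/b)).
    The 'yes' price is dC/dq_yes, which always lies in (0, 1) and
    can be read as the market's current probability estimate."""

    def __init__(self, b: float = 100.0):
        self.b = b          # liquidity parameter: larger b = deeper market
        self.q_yes = 0.0    # outstanding 'yes' shares
        self.q_no = 0.0     # outstanding 'no' shares

    def _cost(self) -> float:
        return self.b * math.log(math.exp(self.q_yes / self.b) +
                                 math.exp(self.q_no / self.b))

    def price_yes(self) -> float:
        e_yes = math.exp(self.q_yes / self.b)
        e_no = math.exp(self.q_no / self.b)
        return e_yes / (e_yes + e_no)

    def buy_yes(self, shares: float) -> float:
        """Buy `shares` 'yes' contracts; returns the dollar cost,
        i.e. the change in the cost function."""
        before = self._cost()
        self.q_yes += shares
        return self._cost() - before

market = LMSRMarket(b=100.0)
print(market.price_yes())             # 0.5 -- no trades, no information yet
cost = market.buy_yes(50)             # a confident trader buys 50 'yes' shares
print(round(market.price_yes(), 3))   # 0.622 -- the implied probability rises
```

The liquidity parameter `b` controls how much a given trade moves the price: with a small `b`, one trader can swing the implied probability sharply; with a large `b`, prices move only under sustained buying pressure.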

When prediction markets function well, they demonstrate significant advantages over other forecasting methods. Opinion polls and surveys can only yield the proportion of views; converting these into probability estimates requires statistical methods to analyze the relationship between the survey sample and the overall population. Moreover, such survey results are often static data at a specific moment, whereas information in prediction markets continuously updates with the arrival of new participants and new information.

More crucially, prediction markets have clear incentive mechanisms: traders have real 'skin in the game.' They must carefully sift through the information they possess and only commit funds and take risks in the areas they understand best. Because prediction markets let people convert their information and expertise into profit, they also incentivize participants to dig deeper into relevant information.

Finally, the coverage scope of prediction markets far surpasses that of other tools. For instance, someone with information affecting oil demand can profit by going long or short on crude oil futures. But in reality, many outcomes we wish to predict cannot be realized through commodity or stock markets. For example, specialized prediction markets have recently emerged attempting to aggregate various judgments to predict the solution time for specific mathematical problems—information crucial for scientific development and an important benchmark for measuring the progress of artificial intelligence.

Despite their significant advantages, prediction markets still need to resolve many issues to truly realize their value. First, at the market infrastructure level, there are persistent questions that need clarification: How to verify whether a specific event has truly occurred and achieve market consensus? How to ensure the transparency and auditability of market operations?

Next are the challenges in market design. For instance, there must be participants with relevant information entering to trade—if all participants are uninformed, the market price cannot convey any effective signal. Conversely, various participants holding different relevant information need to be willing to trade; otherwise, the valuation in prediction markets will be biased. The prediction market before the Brexit referendum is a typical counterexample.

Furthermore, if participants with absolute insider information enter the market, new problems arise. For example, the Seahawks' offensive coordinator knows exactly whether Sam Darnold will pass within the one-yard line and can even directly influence this outcome. If such individuals participate in trading, market fairness would be severely compromised. If potential participants believe there are insider traders in the market, they might rationally choose to stay away, ultimately leading to a market collapse.

Additionally, prediction markets also face the risk of manipulation: someone might turn this tool, originally intended for aggregating collective judgment, into a means of manipulating public opinion. For instance, a candidate's campaign team might use campaign funds to influence the valuation in prediction markets to create an atmosphere of 'impending victory.' Fortunately, prediction markets have some self-correcting ability in this regard—if the probability estimate of a contract deviates from a reasonable range, there will always be traders choosing to take the opposite position, bringing the market back to rationality.

Given the various risks mentioned above, prediction market platforms must strive to enhance operational transparency and clearly disclose the rules governing participant management, contract design, market operation, and other aspects. If these issues can be successfully resolved, we can foresee that prediction markets will play an increasingly important role in the future of forecasting.

Related Questions

Q: What is the core premise that defines a prediction market according to the economist's perspective in the article?

A: The core premise is that a prediction market is, in essence, a market. Markets are a core tool for allocating resources and aggregating information.

Q: How does the article differentiate the primary goal of traditional sportsbooks (like those in Las Vegas) from the goal of prediction markets?

A: The primary goal of traditional sportsbooks is to 'balance the betting money' by adjusting odds to attract bets to the less popular side. In contrast, prediction markets allow people to trade based on their genuine judgments.

Q: What key advantage do prediction markets have over tools like polls and surveys?

A: Polls and surveys only capture opinion percentages at a static moment and require statistical methods to convert into probability estimates. Prediction markets are continuously updated with new information and participants, and they have a clear financial mechanism that incentivizes informed trading.

Q: What are two major challenges or risks that prediction markets need to overcome to realize their full potential?

A: Two major challenges are: 1) The potential for manipulation, where entities try to influence market prices to create a false narrative. 2) The problem of insiders with privileged information participating, which can destroy market fairness and deter other participants.

Q: What historical example from the 16th century is given as an early precursor to prediction markets?

A: In the 16th century, people placed bets on outcomes such as 'who would be the next Pope.'

Related Reading

OpenAI Post-Training Engineer Weng Jiayi Proposes a New Paradigm Hypothesis for Agentic AI

OpenAI engineer Weng Jiayi's "Heuristic Learning" experiments propose a new paradigm for Agentic AI, suggesting that intelligent agents can improve not just by training neural networks, but also by autonomously writing and refining code based on environmental feedback. In the experiment, a coding agent (powered by Codex) was tasked with developing and maintaining a programmatic strategy for the Atari game Breakout. Starting from a basic prompt, the agent iteratively wrote code, ran the game, analyzed logs and video replays to identify failures, and then modified the code. Through this engineering loop of "code-run-debug-update," it evolved a pure Python heuristic strategy that achieved a perfect score of 864 in Breakout and performed competitively with deep reinforcement learning (RL) algorithms in MuJoCo control tasks like Ant and HalfCheetah. This approach, termed Heuristic Learning (HL), contrasts with Deep RL. In HL, experience is captured in readable, modifiable code, tests, logs, and configurations—a software system—rather than being encoded solely into opaque neural network weights. This offers potential advantages in explainability, auditability for safety-critical applications, easier integration of regression tests to combat catastrophic forgetting, and more efficient sample use in early learning stages, as demonstrated in broader tests on 57 Atari games. However, the blog acknowledges clear limitations. Programmatic strategies struggle with tasks requiring long-horizon planning or complex perception (e.g., Montezuma's Revenge), areas where neural networks excel. The future vision is a hybrid architecture: specialized neural networks for fast perception (System 1), HL systems for rules, safety, and local recovery (also System 1), and LLM agents providing high-level feedback and learning from the HL system's data (System 2). 
The core proposition is that in the era of capable coding agents, a significant portion of an AI's learned experience could be maintained as an auditable, evolving software system.

marsbit · 41 min ago


Your Claude Will Dream Tonight, Don't Disturb It

This article explores the recent phenomenon of AI companies increasingly using anthropomorphic language—like "thinking," "memory," "hallucination," and now "dreaming"—to describe machine learning processes. Focusing on Anthropic's newly announced "Dreaming" feature for its Claude Agent platform, the piece explains that this function is essentially an automated, offline batch processing of an agent's operational logs. It analyzes past task sessions to identify patterns, optimize future actions, and consolidate learnings into a persistent memory system, akin to a form of reinforcement learning and self-correction. The article draws parallels to similar features in other AI agent systems like Hermes Agent and OpenClaw, which also implement mechanisms for reviewing historical data, extracting reusable "skills," and strengthening long-term memory. It notes a key difference from human dreaming: these AI "dreams" still consume computational resources and user tokens. Further context is provided by discussing the technical challenges of managing AI "memory" or context, highlighting the computational expense of large context windows and innovations like Subquadratic's new model claiming drastically longer contexts. The core critique argues that this strategic use of human-centric vocabulary does more than market products; it subtly reshapes user perception. By framing algorithms with terms associated with consciousness, companies blur the line between tool and autonomous entity. This linguistic shift can influence user expectations, tolerance for errors, and even perceptions of responsibility when systems fail, potentially diverting scrutiny from the companies and engineers behind the technology. The article concludes by speculating that terms like "daydreaming" for predictive task simulation might be next, continuing this trend of embedding the idea of an "inner life" into computational processes.

marsbit · 43 min ago

