Original Author: Matt Liston
Translation: AididiaoJP, Foresight News
In November 2024, prediction markets called the election result before anyone else. While polls showed a close race and experts were hedging their bets, the markets gave Trump a 60% chance of winning. When the results came in, prediction markets beat the entire prediction establishment—polls, models, expert judgment, everything.
This proved that markets can aggregate dispersed information into accurate beliefs; the mechanism of shared risk worked. Since the 1940s, economists have dreamed that speculative markets could outperform expert predictions, and now that dream has been validated on the grandest stage.
But let's take a closer look at the economics.
Bettors on Polymarket and Kalshi provided billions of dollars in liquidity. What did they get in return? They generated a signal the whole world could see instantly and for free. Hedge funds watched it, campaigns absorbed it, journalists built data dashboards around it. No one had to pay for this intelligence; the bettors effectively subsidized a global public good.
This is the deep dilemma of prediction markets: the information they generate is also their most valuable part, and it leaks the moment it is created. Savvy buyers won't pay for public information. Private data providers can charge hedge funds high fees precisely because their data is not seen by competitors. In contrast, public prediction market prices, no matter how accurate, are worthless to these buyers.
Thus, prediction markets can only exist in areas where enough people want to "gamble": elections, sports, internet meme events. The result is an entertainment pastime disguised as information infrastructure. The questions that truly matter to decision-makers—geopolitical risks, supply chain disruptions, regulatory outcomes, technology development timelines—remain unanswered because no one will bet on them for fun.
The economics of prediction markets are inverted. Correcting this is part of a larger transformation. Information itself is the product; betting is merely a mechanism to produce information, and a limited one at that. We need a different paradigm. What follows is a preliminary sketch of "Cognitive Finance": an infrastructure redesigned from first principles, centered around information itself.
Collective Intelligence
Financial markets themselves are a form of collective intelligence. They aggregate dispersed knowledge, beliefs, and intentions into prices, thereby coordinating the actions of millions of participants who never communicate directly. This is remarkable, but also extremely inefficient.
Traditional markets operate slowly due to trading hours, settlement cycles, and institutional friction. They can only express beliefs broadly through the crude tool of price. What they can represent is also very limited: the space of tradable propositions is minuscule compared to the space of questions humans truly care about. Furthermore, participants are heavily restricted: regulatory barriers, capital requirements, and geographical constraints exclude the vast majority of people and all machines.
The emergence of the crypto world has begun to change this: always-on markets, permissionless participation, programmable assets, and modular protocols that compose without central coordination. DeFi (decentralized finance) has shown that financial infrastructure can be rebuilt as open, interoperable building blocks, born from the interplay of autonomous modules rather than the decrees of gatekeepers.
But DeFi largely just replicates traditional finance with better "pipes." Its collective intelligence is still based on price, focused on assets, and slow to absorb new information.
Cognitive Finance is the next step: rebuilding the intelligence system itself from first principles for the AI and crypto era. We need markets that can "think"—that can maintain probabilistic models of the world, absorb information with arbitrary granularity, be queried and updated by AI systems, and allow humans to contribute knowledge without needing to understand the underlying structure.
The components to achieve this are not mysterious: private markets to correct the economic model, combinatorial structures to capture correlations, an ecosystem of agents to process information at scale, and human-machine interfaces to extract signals from the human brain. Each part can be built today, and when combined, they will create something new with qualitative significance.
Private Markets
If prices are not public, the economic constraints dissolve.
A private prediction market only allows the entity subsidizing the liquidity to see the prices. That entity thus gains exclusive access to the signal, a piece of proprietary intelligence, not a public good. Suddenly, markets become viable on any question "someone needs an answer to," regardless of whether anyone is willing to bet on it for entertainment.
I've explored this concept with @_Dave_White_.
Imagine a macro hedge fund that wants continuous probability estimates on Fed decisions, inflation outcomes, and employment data as a decision signal, not a betting opportunity. As long as the intelligence is exclusive, they are willing to pay for it. A defense contractor wants probability distributions over geopolitical scenarios; a pharmaceutical company wants predictions on regulatory approval timelines. Yet today these buyers stay away, because any information the market generates immediately leaks to their competitors.
Privacy is the foundation that makes the economic model work. Once prices are public, information buyers lose their edge, competitors free-ride, and the entire system reverts to relying solely on entertainment demand.
Trusted Execution Environments (TEEs) make this possible: secure hardware enclaves whose internal state and computation are invisible to the outside world, even to the system operator. The market state lives entirely within the TEE. Information buyers receive signals through verified channels. Multiple non-competing entities can subscribe to overlapping markets, and tiered access windows can balance information exclusivity against broader distribution.
TEEs are not perfect; they require trust in the hardware manufacturer. But they provide sufficient privacy guarantees for commercial applications, and the engineering is now quite mature.
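To make the shape of this concrete, here is a minimal sketch, in ordinary Python rather than real enclave code, of how a private market might expose tiered signals: the state lives only with the operator, and each subscriber sees snapshots no fresher than its access window allows. The class and field names are illustrative assumptions, not an existing protocol.

```python
import time
from dataclasses import dataclass, field

@dataclass
class Subscription:
    subscriber_id: str
    delay_seconds: int  # tiered access: 0 = live signal, larger = delayed snapshot

@dataclass
class PrivateMarket:
    # In a real deployment this state would live inside a TEE and never be
    # serialized out; here it is an ordinary in-memory object for illustration.
    question: str
    probability: float = 0.5                            # current market-implied probability
    history: list = field(default_factory=list)         # (timestamp, probability) snapshots
    subscriptions: dict = field(default_factory=dict)   # subscriber_id -> Subscription

    def record_trade(self, new_probability: float) -> None:
        # Trades move the price, but the price itself never leaves the enclave.
        self.probability = new_probability
        self.history.append((time.time(), new_probability))

    def signal_for(self, subscriber_id: str):
        # Each subscriber receives the freshest snapshot its access tier allows.
        sub = self.subscriptions.get(subscriber_id)
        if sub is None:
            return None  # non-subscribers get nothing: there is no public price feed
        cutoff = time.time() - sub.delay_seconds
        eligible = [p for (t, p) in self.history if t <= cutoff]
        return eligible[-1] if eligible else None
```

A sponsoring hedge fund would hold a zero-delay subscription; a second, non-competing buyer might see the same market on a one-hour delay, trading some exclusivity for extra revenue.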
Combinatorial Markets
Current prediction markets treat events as isolated. "Will the Fed cut rates in March?" sits in one market, "Will Q2 inflation exceed 3%?" in another. A trader who understands how these events relate (for example, that high inflation makes a rate cut less likely, and strong employment lowers the odds further) must manually arbitrage between the disconnected pools, trying to rebuild the correlations that the market structure itself has broken.
It's like building a brain where each neuron can only fire in isolation.
Combinatorial prediction markets are different; they maintain a "joint probability distribution" over combinations of multiple outcomes. A trade expressing "rates remain high AND inflation exceeds 3%" creates ripples across all related markets in the system, synchronously updating the entire probability structure.
This is similar to how neural networks learn: during training, each gradient update adjusts billions of parameters simultaneously; the entire network reacts holistically to each piece of data. Similarly, every trade in a combinatorial prediction market updates its entire probability distribution; information propagates through the correlation structure, not just updating isolated prices.
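To make the mechanism concrete, here is a minimal sketch of a combinatorial market maker using Hanson's logarithmic market scoring rule (LMSR) over the joint outcomes of two binary questions. The event names, liquidity parameter, and trade size are assumptions chosen for illustration, not values from any live market.

```python
import math
from itertools import product

# Minimal sketch of a combinatorial market maker: Hanson's LMSR over the joint
# outcome space of two binary questions.

EVENTS = ["fed_cuts_in_march", "q2_inflation_above_3pct"]
OUTCOMES = list(product([True, False], repeat=len(EVENTS)))  # 4 joint outcomes
B = 100.0                                # LMSR liquidity parameter
shares = {o: 0.0 for o in OUTCOMES}      # outstanding shares per joint outcome

def cost(s):
    """LMSR cost function C(q) = b * log(sum_o exp(q_o / b))."""
    return B * math.log(sum(math.exp(q / B) for q in s.values()))

def joint_probabilities():
    """Prices are the softmax of outstanding shares: one joint distribution."""
    z = sum(math.exp(q / B) for q in shares.values())
    return {o: math.exp(q / B) / z for o, q in shares.items()}

def marginal(event_index):
    """Implied probability of a single event, summed over the joint outcomes."""
    return sum(p for o, p in joint_probabilities().items() if o[event_index])

def buy_conjunction(predicate, amount):
    """Buy `amount` shares of every joint outcome satisfying `predicate`;
    the market maker charges the change in the cost function."""
    before = cost(shares)
    for o in shares:
        if predicate(o):
            shares[o] += amount
    return cost(shares) - before

print("before:", [round(marginal(i), 3) for i in range(len(EVENTS))])
# One trade on "no March cut AND inflation above 3%" updates both marginals.
charge = buy_conjunction(lambda o: (not o[0]) and o[1], amount=50.0)
print("after: ", [round(marginal(i), 3) for i in range(len(EVENTS))], "cost:", round(charge, 2))
```

Running this, the conjunction trade lowers the implied probability of a March cut and raises the implied probability of high inflation in the same step: information propagates through the joint distribution instead of sitting in one isolated pool.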
What emerges is a "model"—a continuously updated probability distribution over the state space of world events. Each trade optimizes this model's understanding of how things are connected. The market is learning how the real world fits together.
Agent Ecosystem
Automated trading systems already dominate Polymarket. They monitor prices, find mispricings, execute arbitrage, and aggregate external information far faster than any human.
Current prediction markets are designed for human bettors using web interfaces. Agents participate "grudgingly" within this design. An AI-native prediction market would completely invert this logic: agents become the primary participants, and humans are plugged into the system as information sources.
One architectural decision is crucial here: price visibility and information acquisition must be completely isolated from each other. An agent that can see prices must not also be an information source; an agent responsible for acquiring information must not have access to prices.
Without this "wall," the system would cannibalize itself. An agent that can both acquire information and observe prices could reverse-engineer what information is valuable from price movements and then go find it itself. Thus, the market's own signal becomes a "treasure map" for others. Information acquisition behavior would degenerate into a sophisticated form of front-running. Isolation ensures that information acquisition agents can only profit by providing truly novel, unique signals.
On one side of the "wall": trading agents compete within the complex combinatorial structure to identify mispricings, while evaluation agents assess incoming information through adversarial mechanisms, separating signal from noise and manipulation.
On the other side of the "wall": information acquisition agents operate entirely outside the core system. They monitor data streams, scan documents, and contact humans with unique knowledge, feeding information unidirectionally into the market. They are compensated when their information proves valuable.
Compensation flows backward along the chain. A profitable trade rewards the trading agent that executed it, the evaluation agent that assessed the information, and the acquisition agent that originally provided it. This ecosystem thus becomes a platform: on one hand, allowing highly specialized AI agents to monetize their capabilities; on the other, serving as a base layer for other AI systems to gather intelligence to guide their actions. The agents *are* the market.
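A minimal sketch of that backward-flowing settlement, assuming a fixed split and hypothetical agent names; a real system would presumably size payouts from the measured value of each contribution rather than constants.

```python
from dataclasses import dataclass

@dataclass
class ProvenanceChain:
    acquisition_agent: str   # sourced the raw information, outside the wall
    evaluation_agent: str    # scored it for novelty and manipulation
    trading_agent: str       # turned it into a position inside the market

def settle(chain: ProvenanceChain, realized_profit: float,
           split: tuple = (0.4, 0.2, 0.4)) -> dict:
    """Attribute a resolved trade's profit back along the chain that produced it."""
    acq_share, eval_share, trade_share = split
    return {
        chain.acquisition_agent: realized_profit * acq_share,
        chain.evaluation_agent:  realized_profit * eval_share,
        chain.trading_agent:     realized_profit * trade_share,
    }

# A position that resolved for a 1,000-unit profit pays all three roles.
print(settle(ProvenanceChain("doc-scanner", "novelty-scorer", "macro-trader"), 1_000.0))
```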
Human Intelligence
A vast amount of the world's most valuable information exists only in human minds. The engineer who knows their product is behind schedule; the analyst who detects a subtle shift in consumer behavior; the observer who notices a detail even satellites miss.
An AI-native system must be able to capture these signals from human brains without being drowned in noise. Two mechanisms make this possible:
Agent-Mediated Participation: Allows humans to "trade" without seeing prices. A person simply expresses a belief in natural language, e.g., "I think the product launch will be delayed." A dedicated "belief translation agent" parses this prediction, assesses its confidence level, and ultimately translates it into a position in the market. This agent coordinates with the price-accessing system to construct and execute the order. The human participant only receives rough feedback: "Position established" or "Edge insufficient." Payment is settled after the event based on prediction accuracy; price information is never leaked.
Information Markets: Allow information acquisition agents to pay directly for human signals. For example, an agent wanting insight into a tech company's earnings could identify an engineer with relevant internal knowledge, purchase an assessment from them, and subsequently verify and pay based on the value that information had in the market. Humans are paid for their knowledge, completely without needing to understand complex market structures.
Take analyst Alice: Based on professional judgment, she believes a certain merger will not pass regulatory approval. She inputs this view through a natural language interface. Her "belief translation agent" parses the prediction, assesses her confidence from linguistic details, checks her track record, and constructs an appropriate position, all without access to prices. A "coordinator agent" at the TEE boundary judges whether her view has an informational edge based on the current market-implied probability and executes the trade accordingly. Alice only receives a "Position established" or "Edge insufficient" notification. Prices remain confidential.
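A minimal sketch of that coordinator step, with illustrative field names and an assumed edge threshold: the parsed belief crosses the boundary inward, the market-implied probability stays inside, and only a coarse verdict comes back out.

```python
from dataclasses import dataclass

# Sketch of the coordinator at the TEE boundary in the example above. The field
# names and threshold are assumptions; the key property is that the price never
# leaves the enclave, only the verdict does.

@dataclass
class ParsedBelief:
    market_id: str
    predicted_outcome: bool   # e.g. "the merger will NOT be approved" -> False
    confidence: float         # extracted from Alice's wording, 0..1
    track_record: float       # historical calibration score, 0..1

def coordinate(belief: ParsedBelief, market_probability: float,
               min_edge: float = 0.05) -> str:
    """Runs at the TEE boundary: it can see the price, but returns only a verdict."""
    implied = market_probability if belief.predicted_outcome else 1.0 - market_probability
    trusted_confidence = belief.confidence * belief.track_record
    if trusted_confidence - implied > min_edge:
        # ...size and execute a position inside the enclave; Alice never sees it...
        return "Position established"
    return "Edge insufficient"

# Alice's view that the merger will not pass, against a (hidden) 70% market estimate.
print(coordinate(ParsedBelief("merger_approved", False, confidence=0.8, track_record=0.9),
                 market_probability=0.70))
```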
This architecture treats human attention as a scarce resource to be carefully allocated and fairly compensated, not a commons to be mined arbitrarily. As these interaction interfaces mature, human knowledge becomes "liquid": what you know flows into a global model of reality and is rewarded when proven correct. Information trapped in minds will no longer be trapped.
Future Vision
Pulling the perspective back far enough, one can glimpse where this leads.
The future is an ocean of fluid, modular, interoperable relationships. These relationships form and dissolve spontaneously between human and non-human participants, without any central gatekeeper. This is a "fractalized autonomy of trust."
Agents negotiate with agents, humans contribute knowledge through natural interfaces, information flows continuously into a constantly updating model of reality that anyone can query but no one controls.
Today's prediction markets are merely a primitive sketch of this vision. They validate the core concept (risk-sharing produces accurate beliefs) but are trapped by the wrong economic model and the wrong structural assumptions. Sports betting and election guessing are to Cognitive Finance what ARPANET (the internet's precursor) was to today's global internet: a proof of concept mistaken for the final form.
The real "market" is every decision made under uncertainty, which is to say, almost every decision. Supply chain management, clinical trials, infrastructure planning, geopolitical strategy, resource allocation, personnel appointments... The value of reducing uncertainty in these areas far exceeds the entertainment value of betting on sports. We just haven't built the infrastructure to capture that value yet.
What's coming is the "OpenAI moment" for the cognitive domain: a civilization-scale infrastructure project whose goal is not individual reasoning but collective belief. Large language model companies are building systems that "reason" from past training data; Cognitive Finance aims to build systems that "believe": systems that maintain calibrated probability distributions over the state of the world, continuously updated through economic incentives rather than gradient descent, and able to integrate human knowledge at arbitrary granularity. LLMs encode the past; prediction markets aggregate beliefs about the future. Combined, they form a more complete cognitive system.
Fully scaled, this evolves into an infrastructure: AI systems can query it to understand world uncertainty; humans can contribute knowledge to it without understanding its internal mechanisms; it can absorb local knowledge from sensors, domain experts, and cutting-edge research, and synthesize it into a unified model. A self-optimizing, predictive world model. A substrate where uncertainty itself can be traded and composed. The intelligence that emerges will surpass the sum of its parts.
The civilization's computer—this is what Cognitive Finance strives to build.
What's at Stake
All the pieces are in place: agent capabilities have crossed the threshold needed for prediction work; confidential computing has moved from the lab into production; prediction markets have proven product-market fit at scale, if only in entertainment. These threads converge on a concrete historical opportunity: to build the cognitive infrastructure the AI era requires.
The alternative is that prediction markets remain forever entertainment, precise during elections, dormant otherwise, never touching the questions that truly matter. The infrastructure for AI systems to understand uncertainty would not exist, and the valuable signals locked in human minds would remain silent forever.