Written by: Vaidik Mandloi, thetokendispatch
Compiled by: Plain Talk Blockchain
In January 2026, an anonymous trader placed a series of bets on the cryptocurrency trading platform Polymarket, wagering that Venezuelan President Nicolás Maduro would be captured. The bets totaled approximately $34,000. A few days later, U.S. special forces executed the capture operation, and the trader cashed out over $400,000. The Secretary of State later confirmed that the operation was deemed too sensitive for advance congressional notification. Think about that: the U.S. Congress, responsible for authorizing military action, was completely unaware. So was the American public. Yet someone sitting behind a screen on a cryptocurrency betting platform had enough information to bet real money on it. And their prediction came true.
This has become a common narrative in the prediction market industry today. Polymarket CEO Shayne Coplan calls it a "truth machine." The argument is that because traders have skin in the game, their collective betting reflects the future trajectory of the world more accurately than any poll, expert, or commentator (none of whom face consequences for being wrong). By this logic, Polymarket's odds are the closest thing you can find to the truth.
This narrative seems to be working. Prediction markets are no longer a niche corner of the internet where a few gamblers place bets for thrills. A recent analysis of a dataset of 364 TikTok videos mentioning prediction markets found that 68% of the videos were not about trading. People aren't gambling; they are citing the odds on these platforms in political debates, much like they used to cite polls. Polymarket appeared in about 70% of these videos. A 22-year-old TikTok user posts political videos, using the odds from a cryptocurrency betting platform to predict real-world outcomes, and a significant number of people agree.
This is incredible. Two years ago, you simply couldn't have believed this would happen. But a question no one is seriously considering is: Are these probabilities really worthy of such trust?
So I ask: How accurate are these markets really? What happens when the odds begin to influence the very events they are supposed to predict? And what does the future look like when the whole world treats betting odds as truth?
How to Grade a Prediction Market?
Before analyzing the data, we first need to understand how to measure whether a prediction market actually works. Most people have never considered this, and without a yardstick, all the hype around Polymarket and Kalshi is just marketing fluff.
There's a scoring method called the Brier score. Meteorologist Glenn Brier proposed it in 1950 to evaluate the quality of weather forecasts, since forecasters were (and still are) among the first professionals who had to take probabilistic predictions seriously and make a living from them. It's very simple. Suppose you predict a 90% chance of rain tomorrow, and it does rain. That's a good prediction, and your Brier score is low. Now suppose you predict a 90% chance of rain tomorrow, but the sky stays clear. That's a bad prediction, and your Brier score is high. A Brier score of 0 means your predictions were perfectly accurate. A score of 0.25 is exactly what you'd get by answering 50% to every question, i.e. a coin toss. Any score above 0.25 means you would have been better off just saying fifty-fifty every time.
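The rain example above can be computed directly: the Brier score is just the mean squared error between each forecast probability and the 0/1 outcome. A minimal sketch in Python (the function name is mine, not a standard library API):

```python
def brier_score(pairs):
    """Mean squared error between each forecast probability p (0..1)
    and the actual outcome o (1 = it happened, 0 = it didn't)."""
    return sum((p - o) ** 2 for p, o in pairs) / len(pairs)

# 90% chance of rain, and it rains: a good forecast, low score.
print(brier_score([(0.90, 1)]))              # ≈ 0.01
# 90% chance of rain, clear sky: a bad forecast, high score.
print(brier_score([(0.90, 0)]))              # ≈ 0.81
# Always answering 50% lands exactly on the coin-toss baseline.
print(brier_score([(0.50, 1), (0.50, 0)]))   # 0.25
```

Lower is better, and the 0.25 baseline falls out of the arithmetic: a constant 50% forecast is off by exactly 0.5 either way, and 0.5² = 0.25.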
Why is this important? Because when Polymarket tells you their market predicts a 60% chance of Trump winning, and he eventually wins, it sounds amazing in a headline, but statistically, one correct prediction says almost nothing. You need to evaluate the complete history of the market across thousands of questions. This is where the Brier score comes in. It's the only honest way to assess whether these markets are truly good at predicting election outcomes.
A website called Brier.fyi does exactly that. They analyzed over 84,000 questions on platforms like Polymarket, Kalshi, Manifold, and Metaculus. Polymarket's overall Brier score is 0.047. This is indeed a very good score: roughly what a forecaster would earn by saying "I'm about 80% sure this will happen" and turning out to be right every time.
But here's where it gets interesting, and the "truth machine" narrative begins to unravel.
But that 0.047 is the average across all listed markets on Polymarket, and in this case the averaging hides everything. If you break the score down by what people are actually betting on, the grades swing wildly.
Science & Economics? Polymarket gets an A. The markets are based on CPI data, Fed rate decisions, and GDP figures. These markets perform well because traders tend to be financially literate, the data is verifiable, and there are institutional investors with real knowledge putting real money in.
Politics? B+. Decent, largely propped up by the massive presidential election markets, where billions of dollars flow. Culture & Tech? Worse. Much worse.
Then there's Sports. The overall score for sports prediction markets across all platforms is a dismal 0.325, a D-. Remember, a coin toss is 0.25. Sports prediction markets, on the whole, perform worse than flipping a coin on every question. Let that sink in.
The category that attracts the most casual bettors, and the one Kalshi has been expanding into aggressively (at one point, about 90% of Kalshi's volume was in sports betting), is the category where the markets have been proven unreliable.
Now, looking at individual markets is where the story gets even more fascinating.
Polymarket had a market on whether Bitcoin would reach $100,000 by January 2025. Bitcoin did eventually hit $100,000. The market predicted the correct outcome, but it misjudged the probability for most of its life, lingering at low confidence for months before skyrocketing to near certainty at the very end. Its Brier score was 0.4909, an F. Remember, 0.25 is the coin-toss baseline; this market scored almost double that.
The market on Kamala Harris winning the 2024 Democratic presidential nomination was even crazier. She did win the nomination, and the market called the correct outcome. Yet its Brier score was 0.9098, a number so bad it is hard to overstate. The market was confidently wrong for so long that being right at the end couldn't save it. If you had relied on this market for decision-making, you would have been misled through the entire campaign cycle, right up until the moment the result was final.
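To see how a market can call the right outcome and still flunk, consider a toy version of market-level scoring: average the squared error of each day's price against the final outcome. (This illustrates the general idea only; it is not Brier.fyi's exact methodology, and the prices below are invented.)

```python
def market_brier(daily_prices, outcome):
    """Average the squared error of each day's price (a probability, 0..1)
    against the final 0/1 outcome. Toy grading scheme, for illustration."""
    return sum((p - outcome) ** 2 for p in daily_prices) / len(daily_prices)

# A hypothetical market that sits at 5% for 95 days, jumps to 95% for the
# final 5 days, and then the event happens (outcome = 1):
prices = [0.05] * 95 + [0.95] * 5
print(market_brier(prices, 1))   # ≈ 0.86: an F, despite ending on the right answer
```

Ninety-five days of being confidently wrong at 5% contribute 0.9025 each, so a last-minute sprint to 95% barely dents the average. That is the Harris-market pattern in miniature.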
Now for the other side of the story, because it's not a simple one. The 2024 US Presidential election was a genuine win for prediction markets. While all mainstream polls suggested a tight race, Polymarket predicted Trump's chances at around 60%. Research from Vanderbilt University, using a Bayesian time-series model, compared Polymarket's prediction prices with national poll results from seven swing states. The results showed Polymarket was more accurate across the board.
So, what does this tell us? Prediction markets are excellent at forecasting elections. Especially in the largest, most liquid elections, where billions of dollars, tens of thousands of traders, and widespread public attention converge on a single question, they often outperform polls.
But the catch is that election predictions might only account for about 2% of the trading volume on these platforms. The 2024 Presidential election market on Polymarket alone generated over $3.6 billion in volume, with 63,000 unique monthly traders. If you look at congressional elections, state-level referendums, or any cultural, tech, or sports topics, the bid-ask spreads on contracts skyrocket to 20% to 100% of the mid-price. Markets for legislation and crises have spreads approaching 100%. Such wide spreads mean the market knows almost nothing. It's just two people making wildly different guesses on the same question, with barely any money behind either side.
When the Odds Start Writing the Story
If the accuracy problem were confined within the prediction market ecosystem, it might be manageable. Traders betting on bad markets lose money, learn their lesson, and the system improves over time. This is how all financial markets have operated for a century. But the problem is that the odds are no longer just a signal for traders; they have become public information for everyone.
Over the past 18 months, major US news outlets have integrated prediction market data into their political coverage. The Wall Street Journal signed a formal agreement with Polymarket to include its betting data in its news reports. CNN began displaying Kalshi's odds on-screen during its election night coverage. CNBC did the same. In December 2024, even Substack announced a direct partnership with Polymarket, allowing newsletter writers to embed live market data directly into their articles.
This is why the odds eventually appear on TikTok. The numbers travel from Polymarket to The Wall Street Journal, to cable news, then to Twitter, and finally to TikTok. By the time the average user sees these odds, they have been circulated through enough authoritative channels that they feel like facts. What people are accepting are numbers that have been pre-"laundered" by mainstream media.
This is the crux of the prediction market problem: once odds are disseminated as news, they begin to influence the very things they are supposed to predict. There's a specific name for this phenomenon; economists call it endogeneity. Simply put, the act of measuring changes the thing being measured.
Let me give a concrete example. Coinbase CEO Brian Armstrong is on an earnings call. He learns that Polymarket is running a contract on whether he will mention certain specific phrases during the call. So, he alters the wording he was going to use. The market was supposed to predict what he would say. But his knowledge of the market's movement changed what he ultimately said.
Now, let's scale this dynamic up to an election level. In the 2024 US Presidential election, a French trader using the pseudonym "Theo" (whom the media called the "Polymarket whale") bet on Trump winning and ultimately profited over $85 million. This wasn't some lucky gambler. He commissioned a private poll, independent of all public national polls, which showed Trump was performing far better than public polls indicated.
Because of this, his bets drove up trading prices across platforms, which were then reported by the media outlets I mentioned, including The Wall Street Journal, CNN, and political commentators across platforms. Despite polls showing a tight race, the market prediction leaned towards Trump. This single narrative influenced how millions of people viewed the race in the final weeks. Commentators debated whether the "smart money" knew something the polls didn't. Voters absorbed this narrative, and Trump ultimately won.
I am not claiming Theo changed the election outcome. That would be a stretch I cannot prove. What I am saying is that anyone paying attention should be concerned: a trader with access to private poll data unavailable to others was able to move Polymarket's prices, which The Wall Street Journal and CNN then repackaged and disseminated as the collective wisdom of the market. A good prediction market should aggregate a vast amount of information from numerous participants into a clear signal. What happened in 2024 was that one person's exclusive poll data was laundered through Polymarket and re-broadcast as if it represented the consensus of thousands of traders.
If one trader can do this with an $85 million payday, imagine what someone with real money and real power could do.
In February 2026, Israeli authorities charged at least two individuals with using classified military intelligence to gamble on Polymarket. They placed bets on contracts tied to Israeli military actions before those actions became public, for potential profits of around $100,000. These individuals held security clearances and used information the public wouldn't see for days to bet on war. It is the first known case of its kind anywhere in the world, and it confirms that prediction markets are fast enough, liquid enough, and anonymous enough to monetize classified information in real time.
The Maduro trade mentioned at the beginning of this article? The pattern is identical. Someone placed a bet before a secret military operation occurred and won over $400,000. Whoever it was either had insider information or was the luckiest guesser in the history of political gambling. We will never know.
What Happens When Everyone Believes the Odds?
The median question on the Polymarket platform is resolved within four days. The average resolution time is 19 days, but a few long-term markets pull the average up. Most questions on the platform are about what will happen *this week*.
This shows these markets aren't making any meaningful long-term predictions about the future. They are just pricing the near term. Will the vote pass on Friday? Will this person say this thing tomorrow? Will the number released on Wednesday be above or below expectations? This information is useful. But it's a far cry from what people mean when they call prediction markets "truth machines." That phrase usually implies the market can tell you what the world will look like in six months, a year, or even five years. But the data shows it simply can't. Not even close.
99% of the trading volume in prediction markets concentrates in the final few hours before an event settles. Money pours in at the last moment, when the outcome is already nearly certain. These markets also have massive liquidity gaps. By the end of 2025, the combined weekly volume of the two major platforms, Polymarket and Kalshi, was around $2.5 billion. Sounds impressive, right? But the U.S. options market alone clears about $760 billion *per day*.
Prediction markets represent just 0.05% of that. The entire prediction market industry, across any platform, any contract, any category, is minuscule compared to the markets institutions actually use for decision-making.
Here's the situation: Prediction markets only work for a very specific type of question: binary, high-profile, short-term events involving millions of dollars. But that's a tiny fraction of the questions these platforms actually offer. For the other 98% of questions, the prices are unreliable, liquidity is almost non-existent, and their outcomes are more like Twitter polls than financial instruments.
What these platforms are building is the default probability source for everything. Just as you open a Bloomberg terminal to check a stock price, their vision is that you open Polymarket to check a probability. The strategy is that once enough media outlets, newsrooms, financial analysts, and government researchers rely on this data source, regardless of its accuracy, the product becomes irreplaceable.
I think it will work. And I think that should worry everyone.
Because the issue isn't whether prediction markets are useful. The answer is yes. For elections, major economic data, and a handful of high-profile events, they consistently outperform alternatives. That's a fact, and it's crucial. The problem is, what happens when the entire information ecosystem starts treating the output of these markets as truth, even for the 98% of questions the markets are completely incapable of predicting?
Economist Robin Hanson, a decades-long advocate for prediction markets, describes them as a system that forces people to put money behind their beliefs. In his model, the final price will be the best available estimate of the truth at that moment. But that model assumes liquid markets, diverse participants, and resistance to manipulation. The markets we have are dominated by a few "whales," concentrated in two areas (elections and sports), accounting for roughly 80% of the volume. The remaining 20% of volume is spread across thousands of contracts, where a few thousand dollars can move the price by double digits.
These are tools for manufacturing attention. They work when the world is watching; they fail when no one is. The more people believe they are truth-making tools, the more power those who can move prices wield. And those who can move prices aren't a crowd of informed ordinary people; they are a small group of well-funded traders with access to private polls and, in at least two confirmed cases, classified intelligence.
The most dangerous thing about prediction markets isn't that they are wrong; it's that they are right just often enough on critical questions to earn a trust they don't inherently possess. And that trust is slowly being baked into the world's information processing machinery. The Wall Street Journal prints the prediction data, CNN broadcasts it, TikTok shares it. Eventually, some trader with enough capital gets to decide what that number means.
This is the reality of the truth machine. A system that produces numbers the world has decided to call truth.