2024 Election & Prediction Markets: Lessons Learned
Polymarket processed $3.6 billion on the 2024 US election. A French trader commissioned private polls, identified the shy voter effect, and made $85 million. The markets beat the polls. Then a Vanderbilt study found 67% accuracy on Polymarket vs 93% on PredictIt. What exactly did the 2024 election prove - and what unanswered questions remain?
Key Takeaways
- Polymarket processed $3.6 billion in volume on the 2024 US presidential election. Trump was favored at ~60% on prediction markets while traditional polls showed a near-toss-up. The markets were directionally correct. This is the fact most widely cited as proof of prediction market superiority.
- A Vanderbilt University study found Polymarket was right only 67% of the time across 2,500 markets and $2.5 billion in volume. Kalshi hit 78%. PredictIt 93%. The result depends heavily on what you measure and which markets you include.
- Théo, the French trader, made ~$85 million on Trump - not because markets were accurate, but because he identified a specific mispricing (the shy voter effect) using private polling, and exploited it. This is the case for information-driven alpha, not for “markets are always right.”
- The five things the 2024 election proved: (1) markets update faster than polls, (2) real money creates genuine conviction, (3) single-participant concentration can move prices, (4) niche contract accuracy is substantially lower than headline contracts, (5) calibration matters more than “called it.”
- The five things the 2024 election did NOT prove: (1) prediction markets are manipulation-proof, (2) all prediction market categories are equally accurate, (3) accuracy in high-volume events transfers to thin community markets, (4) markets beat polls in all domains, (5) large positions represent aggregated information rather than a single thesis.
The Narrative and the Reality
The prediction market story from the 2024 US election has been told in a specific, simplified form: the markets said Trump would win while the polls said toss-up, and the markets were right. By election day, over $500 million in presidential election bets had been traded on Kalshi alone, and Polymarket had processed more than $3.6 billion in volume. The polls showed Harris and Trump within a few percentage points. Prediction markets showed Trump at ~60%. Trump won.
The simplified narrative is not wrong. But it is incomplete. The 2024 election was simultaneously the most compelling demonstration of prediction market value and the most important stress test of prediction market limitations. Both are true at the same time.
This article separates what the 2024 election actually proved about prediction markets from what it did not - and draws the implications for anyone using prediction markets as forecasting tools or trading venues in 2026 and beyond.
The Five Things the 2024 Election Proved
1. Prediction Markets Update Faster Than Polls
After President Biden’s poor debate performance in 2024, prediction markets adjusted immediately, while polls took days to reflect public reaction. This real-time updating is the fundamental structural advantage of prediction markets over surveys. Polls require design, fielding, collection, and processing. Markets reprice in seconds. Every news event, debate moment, and campaign development is immediately incorporated into the price by financially incentivized participants.
The practical implication for information consumers: prediction market prices are a leading indicator, not a lagging one. The market's reaction to a development tells you something about the probability distribution of outcomes before any analyst has published a response.
2. Real Money Creates Genuine Conviction
The wisdom of crowds mechanism in prediction markets derives its accuracy from one thing: participants have financial skin in the game. When you buy a YES contract at $0.60, you are not expressing a casual opinion. You are committing capital that will be lost if the event does not occur. This asymmetry - between the financial cost of being wrong and the social cost of being wrong - is what makes real-money prediction markets different from polls and survey-based forecasting.
The 2024 election demonstrated this mechanism at scale. Prediction markets correctly predicted 23 of 28 major election results in 2024, including the presidential race where market-implied probabilities tracked within 3% of final vote shares. Traditional polling aggregators showed systematic biases. Real-money conviction corrected for those biases, at least at the level of the winner call.
3. Single-Participant Concentration Can Move Prices
Théo, a French trader with a banking background, accumulated $85 million in Trump victory positions across multiple Polymarket markets between August and early November 2024. He used four accounts (Fredi9999, Theo4, PrincessCaro, Michie) to build his position, placing hundreds of small transactions over hours to minimize price impact. At peak, his accounts represented approximately 25% of the Trump winning the Electoral College contracts and approximately 40% of the Trump winning the popular vote contracts.
This single-participant concentration is not evidence of manipulation - Polymarket investigated and found no evidence of foul play. But it is evidence that prediction markets can be heavily influenced by a single high-conviction participant, and that the resulting price is not necessarily an aggregated crowd estimate. When Théo’s Trump odds rose while polls showed a toss-up, part of that divergence reflected his private polling methodology, not a crowd consensus. He had commissioned private “neighbor polls” - asking people who they thought their neighbors would vote for, rather than who they themselves would vote for - and identified the shy Trump voter effect that polls were systematically missing.
4. Niche Contract Accuracy Is Substantially Lower
PredictIt markets predicted outcomes correctly 93% of the time, but accuracy fell to 78% on Kalshi and 67% on Polymarket. The Vanderbilt researchers (Clinton and Huang) found that the accuracy gap between platforms was largely driven by the types of markets each listed, not just platform quality. Polymarket lists substantially more “niche or low-information events that are more akin to speculation or entertainment” - contracts on whether a candidate would say a specific word, exact winning margin contracts, and other speculative instruments.
When the researchers controlled for market type, Kalshi and Polymarket were not significantly different in accuracy on comparable markets. The headline accuracy numbers reflect what each platform chose to list, not just how well the markets predicted outcomes.
5. Calibration Matters More Than ‘Called It’
The most common claim about the 2024 election is that prediction markets “called it.” This is less precise than it sounds. A market that shows 60% probability for the eventual winner has “called it” in the sense that the probability exceeded 50%. But it has also implied a 40% chance of being wrong, which is substantial. The meaningful question is not “did the market call the winner?” but “was the market’s probability estimate calibrated?”
CEPR research analyzing over 300,000 Kalshi contracts found a clear favorite-longshot bias: low-price contracts win far less often than required to break even, while high-price contracts win more often and yield small positive returns. This systematic miscalibration exists even in the most liquid prediction markets. The direction of the 2024 call was correct; the exact probability was imprecise.
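The break-even arithmetic behind the favorite-longshot bias is simple. A minimal sketch, using hypothetical numbers rather than figures from the CEPR study:

```python
def breakeven_win_rate(price: float) -> float:
    """Fraction of the time a $1-payout contract at `price` must win to break even."""
    # Pay `price`, receive $1 on a win: the win rate must match the price.
    return price

def expected_return(price: float, true_prob: float) -> float:
    """Expected profit per $1 staked, given the true win probability."""
    return (true_prob * 1.0 - price) / price

# A $0.05 longshot must win 5% of the time to break even.
# If its true probability is only 3%, each $1 staked loses 40 cents -
# the shape of the longshot side of the bias.
print(breakeven_win_rate(0.05))           # 0.05
print(expected_return(0.05, 0.03))        # -0.4
# A fairly priced contract has zero expected return.
print(expected_return(0.60, 0.60))        # 0.0
```

The bias means low-price contracts tend to sit in the "true probability below price" regime, and high-price contracts in the opposite one.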
The Five Things the 2024 Election Did NOT Prove
1. Prediction Markets Are Not Manipulation-Proof
Théo’s case is the clearest illustration of this limitation. A single participant with $85 million in positions was moving the Trump probability at a time when the market was being cited by major media outlets as the leading forecasting tool for the election. The question of whether large-position traders are discovering truth or creating it is unanswered by the 2024 data.
The concern is not that Théo manipulated the market - he was expressing a genuine probability estimate based on superior research. The concern is that any market where a single participant holds 25–40% of the open interest is not aggregating diverse crowd wisdom. It is primarily expressing one person’s view, multiplied by their capital. The “wise crowd” becomes a “wise whale.”
2. Accuracy in One Domain Does Not Transfer to All Domains
The 2024 US presidential election is the highest-volume, highest-information political prediction market in history. It attracted institutional traders, professional researchers, campaign operatives, and sophisticated international participants. The information aggregation conditions were uniquely favorable.
The 81% accuracy on electoral binary outcomes does not transfer to crypto protocol governance votes, sports match results, scientific milestone contracts, or local event duels with thin participation. Accuracy in prediction markets is a function of liquidity, participant diversity, and information quality - all of which vary enormously by market type.
| Market Category | Approximate Accuracy (2024–2026 data) | Primary Driver |
| --- | --- | --- |
| Major national elections | 81% (Polymarket electoral binary) | Adequate liquidity; systematic polling bias visible |
| Sports (major leagues) | 69% (Polymarket, 2024) | Information flow robust; thin markets degrade fast |
| Politics (niche contracts) | Below 67% | Low information; speculation-driven pricing |
| Crypto/DeFi milestones | Varies widely | Thin liquidity; insider information asymmetry |
| Local/community events | Below 70% | Minimal participation; small samples |
| High-volume economic (Fed) | Near perfect (Kalshi FOMC record since 2022) | Institutional-grade participation; clear resolution criteria |
3. Large Positions Are Not Always Aggregated Information
The Théo case illustrates a critical distinction: a large position in a prediction market represents one participant’s high-conviction estimate, not a crowd’s aggregated estimate. When Théo’s accounts held 25–40% of the Trump contracts, the market price was heavily influenced by his private research. Observers reading the Polymarket price as “the crowd’s view” were actually reading “one French trader’s view, amplified by capital.”
This is not necessarily wrong - Théo’s research was better than the crowd’s, and his price was more accurate than the crowd’s initial consensus. But it means the market mechanism worked because of one informed participant, not because of crowd wisdom. In a market dominated by a single participant, the prediction market is an information discovery mechanism for that participant’s views, not for collective intelligence.
4. Prediction Markets Did Not Beat Polls on All Questions
The 2024 election narrative focuses on the presidential winner call. Less discussed: how did prediction markets perform on Senate races, House seats, gubernatorial contests, and ballot measures? The answer is considerably more mixed.
Researchers found that niche or low-information markets are the least accurate - and once they controlled for market type, Kalshi and Polymarket were not significantly different in accuracy from each other on comparable markets. The presidential winner market was high-information, high-liquidity, and heavily researched. Most sub-election markets were not.
5. The Implied Probability Was Not a Precision Forecast
The 2024 markets showed Trump at approximately 60% probability on election day. He won. This is cited as evidence that the markets were right. But a 60% probability explicitly implies a 40% chance of the other outcome. If you repeated the 2024 election 10 times with the same conditions, a well-calibrated market at 60% would expect Harris to win 4 of those 10 times. The single realization - Trump winning once - tells you approximately nothing about whether the 60% estimate was correct, overconfident, or underconfident.
Proper evaluation of prediction market accuracy requires large samples of resolved predictions across time, not a single high-profile call. The Brier score and calibration curve analysis are the correct tools; “called the winner” is not.
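The Brier score mentioned above is just the mean squared error between forecast probabilities and realized 0/1 outcomes. A minimal sketch with hypothetical resolved markets (illustrative numbers, not platform data):

```python
def brier_score(forecasts, outcomes):
    """Mean squared error between forecast probabilities and 0/1 outcomes.
    Lower is better; a constant 50% forecast always scores 0.25."""
    return sum((p - o) ** 2 for p, o in zip(forecasts, outcomes)) / len(forecasts)

# Hypothetical track record of five resolved binary markets.
forecasts = [0.60, 0.90, 0.20, 0.75, 0.10]
outcomes = [1, 1, 0, 1, 1]
print(round(brier_score(forecasts, outcomes), 4))

# A single 60% call that resolved YES scores (0.60 - 1)**2 = 0.16 -
# better than a coin flip's 0.25, but one sample says almost nothing
# about calibration. Evaluation needs many resolved predictions.
print(round(brier_score([0.60], [1]), 2))
```

A calibration curve extends this idea: bucket forecasts (e.g. all 55–65% calls), then compare each bucket's average forecast to its actual win rate.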
The Théo Case Study - Information Alpha at Scale
The Théo case is the most instructive single trade in prediction market history, not because of the $85 million profit, but because it illustrates the information advantage mechanism precisely.
| Element | Detail | Implication |
| --- | --- | --- |
| Initial position (August 2024) | ~$30M on Trump, Polymarket | Early entry when market showed near-even odds |
| Information method | Private “neighbor polls” (asking who respondents’ neighbors would vote for) | Identified shy voter effect invisible to standard polls |
| Final position (early November) | ~$85M across four accounts | Bet the majority of his liquid assets on the thesis |
| Market vs. poll gap | Polymarket ~66% Trump; polling ~48–52% | 14–18 percentage point divergence from mainstream consensus |
| Resolution | Trump won; net profit ~$78.7M (Chainalysis) | Single trade generated more than most hedge funds earn in a year |
| Post-election finding | Polymarket investigated; no manipulation found | High-conviction research, not coordination |
What Théo did is exactly what this article series describes throughout: he found a gap between the market (Polymarket initially showing ~50%, in line with polls showing a near-toss-up) and his own probability estimate, identified why the gap existed (a systematic polling bias from the shy voter effect, which the market price had inherited), and entered his position before the market reflected his information. He built the position strategically, spreading hundreds of small transactions over 10-hour periods to minimize price impact.
The 2024 election did not prove that prediction markets are a magic forecasting mechanism. It proved that a sufficiently motivated and well-resourced participant can identify systematic mispricings and profit from them at scale. That mechanism is what makes prediction markets valuable - the accuracy of the aggregate price is a byproduct of individual participants seeking profit from mispricing.
What 2024 Means for Prediction Market Participants in 2026
The 2024 election established prediction markets in the mainstream financial and media landscape in a way that earlier election cycles had not. The implications for 2026 are specific:
| Lesson | 2024 Evidence | 2026 Application |
| --- | --- | --- |
| High-volume political markets are reasonably accurate | Polymarket called 23 of 28 major electoral outcomes | Use major election markets as a directional signal, not a precision forecast |
| Niche markets are less accurate | Vanderbilt: 67% on Polymarket overall | Apply higher skepticism to low-volume, speculation-driven contracts |
| Whale concentration is a signal, not noise | Théo’s position was directionally correct | Large position accumulation by sophisticated participants is worth investigating |
| Markets beat polls on direction, not always on magnitude | 60% Trump probability vs. the actual result | Use markets for direction; polls for magnitude checks |
| Arbitrage opportunities persist even in liquid markets | Cross-platform price divergence peaked in the final two weeks | Cross-platform arbitrage remains viable in less efficient markets |
| Calibration matters more than winner prediction | Favorite-longshot bias persists | Track Brier scores, not headline calls |
Conclusion: The Right Lessons and the Wrong Ones
The 2024 US election taught prediction markets two things: that real-money aggregation produces better-calibrated directional signals than polls in high-information environments, and that a single high-conviction participant with superior research can identify and capture systematic mispricings at scale.
What the 2024 election did not teach: that prediction markets are infallible, that all market types are equally accurate, that large positions represent collective wisdom, or that a single spectacular call validates the methodology for all future applications.
The honest picture is more nuanced than the headline narrative. Prediction markets are a powerful information aggregation mechanism that works well when liquidity is deep, participants are diverse, and information is broadly distributed. They work less well when markets are thin, dominated by a single participant, or focused on speculative niche questions. The 2024 US presidential election represented the best-case scenario for prediction market accuracy. Most markets - community duels, niche events, local questions - operate in more challenging conditions.
The implication for DuelDuck creators: use the 2024 election’s lessons on information advantage, not its legend of infallibility. Design markets where your community has genuine domain expertise, price them where you have identified a real information gap versus the consensus, and track calibration over time. Théo’s $85 million was earned by being right in one specific domain with specific research. Your equivalent is a community that prices its domain better than anyone else prices it.
Start Predicting. Start Earning.
DuelDuck - P2P prediction market on Solana. No KYC. USDC payouts. Apply the information advantage framework from the 2024 election to any binary event your community prices better than the consensus - and earn up to 10% creator fee on every pool you design.
Create your first duel today


