The Precursor Awards
Every year, before the Oscars, a constellation of other film industry awards announces its own winners. These precursor awards have historically shown predictive power for Oscar outcomes.
Using futuresearch*, we researched 5,298 award winners spanning 26 years of ceremony data (2000–2025), covering all 24 Oscar categories and 15 precursor award shows. But how reliable are they, really? The answer varies wildly depending on which award you’re looking at and which Oscar category you care about.
Industry Guilds
Voted on by professionals in the relevant craft. The strongest predictors for their corresponding Oscar categories.
- DGA (Directors Guild)
- PGA (Producers Guild)
- SAG (Screen Actors Guild)
- WGA (Writers Guild)
- ACE Eddie Awards
Major Award Shows
High-profile ceremonies with broad category coverage.
- BAFTA
- Golden Globes
- Critics Choice Awards
Critics Circles
Smaller bodies of film critics whose picks sometimes diverge significantly from Oscar outcomes.
- NYFCC (New York Film Critics Circle)
- LAFCA (Los Angeles Film Critics Association)
- NSFC (National Society of Film Critics)
- NBR (National Board of Review)
Festivals
Early signals from months before the Oscars, though with weaker predictive power overall.
- Venice (Golden Lion)
- Cannes (Palme d’Or)
- Toronto (People’s Choice)
* futuresearch is offering $20 in free credits if you want to run your own analyses
Which Oscar Categories Are Most Predictable?
For each category, the single most accurate precursor over 26 years. Categories at the top are near-locks; those at the bottom are genuinely hard to call.
So some categories have a standout predictor while others are a toss-up. What does that mean for this year’s races?
2026 Awards Season at a Glance
Who is winning what this awards season? Each colour represents a precursor award. Click the chips to show or hide columns.
What the Data Suggests for 2026
This is not a forecast. It’s a question: if historical patterns hold, what do they suggest? For each category, we look at which precursor awards each nominee won, then weight those wins by historical accuracy. A nominee who won the DGA (85% accurate for Best Director) gets a much stronger signal than one who won the NBR (4%). Nominees who won no precursors share a “dark horse” probability derived from how often Oscar winners in that category historically had zero precursor wins. Scores are normalised across all nominees in each category to produce win probabilities.
This is a simple base-rate model: it doesn’t account for narrative momentum, campaign spending, or voter psychology. Categories where one nominee swept the precursors show high confidence; split races produce more uncertain estimates. Hover over a nominee to see which precursors contributed to their score.
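The scoring described above can be sketched in a few lines of Python. The accuracy values, nominee names, and dark-horse rate below are invented stand-ins, not the article’s actual data; only the mechanics (accuracy-weighted wins, a shared dark-horse share for zero-win nominees, normalisation) follow the description.

```python
# Historical accuracy of each precursor for one Oscar category.
# These numbers are illustrative placeholders.
ACCURACY = {"DGA": 0.85, "BAFTA": 0.60, "Golden Globes": 0.50, "NBR": 0.04}

# Which precursors each nominee won this season (made-up example).
WINS = {
    "Nominee A": ["DGA", "BAFTA"],
    "Nominee B": ["Golden Globes"],
    "Nominee C": [],
}

# How often the eventual Oscar winner in this category historically
# had zero precursor wins (also a placeholder value).
DARK_HORSE_RATE = 0.10


def win_probabilities(wins, accuracy, dark_horse_rate):
    """Weight each nominee's precursor wins by historical accuracy,
    split the dark-horse rate equally among zero-win nominees,
    then normalise the scores into win probabilities."""
    zero_win = [n for n, won in wins.items() if not won]
    raw = {}
    for nominee, won in wins.items():
        if won:
            raw[nominee] = sum(accuracy[p] for p in won)
        else:
            raw[nominee] = dark_horse_rate / len(zero_win)
    total = sum(raw.values())
    return {n: score / total for n, score in raw.items()}


probs = win_probabilities(WINS, ACCURACY, DARK_HORSE_RATE)
for nominee, p in sorted(probs.items(), key=lambda kv: -kv[1]):
    print(f"{nominee}: {p:.0%}")
```

With these toy inputs, a DGA-plus-BAFTA sweep dominates, a single Golden Globes win trails, and the zero-win nominee keeps a small dark-horse chance.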
Which Precursors Actually Matter?
For each precursor, the percentage of years where its winner went on to win the Oscar in the same category (2000–2025, minimum 5 years of data)
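The accuracy metric behind this chart is straightforward to reproduce. The per-year records below are invented for illustration; the logic is simply hit-rate per precursor, with precursors under the minimum-years threshold dropped.

```python
from collections import defaultdict

# (year, precursor, precursor_winner, oscar_winner) -- invented rows,
# standing in for the 2000-2025 dataset described in the article.
RECORDS = [
    (2019, "DGA", "Film A", "Film A"),
    (2020, "DGA", "Film B", "Film B"),
    (2021, "DGA", "Film C", "Film C"),
    (2022, "DGA", "Film D", "Film E"),
    (2023, "DGA", "Film F", "Film F"),
    (2022, "NBR", "Film G", "Film E"),
    (2023, "NBR", "Film H", "Film F"),
]


def precursor_accuracy(records, min_years=5):
    """Fraction of years each precursor's winner also won the Oscar;
    precursors with fewer than min_years of data are excluded."""
    hits, totals = defaultdict(int), defaultdict(int)
    for _, precursor, pre_winner, oscar_winner in records:
        totals[precursor] += 1
        hits[precursor] += pre_winner == oscar_winner
    return {p: hits[p] / totals[p] for p in totals if totals[p] >= min_years}


print(precursor_accuracy(RECORDS))  # DGA: 4/5 hits; NBR dropped (<5 years)
```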
Accuracy alone doesn’t tell the full story. Does winning more precursor awards actually improve a nominee’s chances, or is one strong signal enough?
Does Sweeping the Precursors Guarantee an Oscar?
If a nominee wins N precursor awards, how likely are they to win the Oscar?
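This question reduces to an empirical conditional probability: group historical nominees by how many precursors they won, then take the Oscar win rate within each group. The nominee histories below are invented; the computation is the point.

```python
from collections import defaultdict

# Invented nominee histories: (precursor_wins, won_oscar).
HISTORY = [
    (0, False), (0, False), (0, True),
    (1, False), (1, True),
    (2, True), (2, True), (2, False),
    (3, True), (3, True),
]


def win_rate_by_precursor_count(history):
    """Empirical P(Oscar win | N precursor wins) from past nominees."""
    won, total = defaultdict(int), defaultdict(int)
    for n, did_win in history:
        total[n] += 1
        won[n] += did_win
    return {n: won[n] / total[n] for n in sorted(total)}


for n, rate in win_rate_by_precursor_count(HISTORY).items():
    print(f"{n} precursor wins -> {rate:.0%} Oscar win rate")
```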
Individual precursor accuracy is useful, but the real signal comes from specific combinations. Some are nearly bulletproof.
Announcing the Winners
Which specific sets of precursor wins have historically led to Oscar victories?
When the Awards Agree and When They Don’t
Percentage of years where two precursor awards picked the same winner in a given category
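The agreement metric in this chart can be sketched as follows: for a pair of precursors, count the shared years in which both named the same winner in a category. The winner lists here are hypothetical examples, not real ceremony results.

```python
# Hypothetical winner-by-year records for two precursors in one category.
WINNERS = {
    "DGA":   {2020: "Film A", 2021: "Film B", 2022: "Film C", 2023: "Film D"},
    "BAFTA": {2020: "Film A", 2021: "Film B", 2022: "Film X", 2023: "Film D"},
}


def agreement(winners, a, b):
    """Percentage of shared years in which precursors a and b
    picked the same winner."""
    shared = winners[a].keys() & winners[b].keys()
    same = sum(winners[a][year] == winners[b][year] for year in shared)
    return 100 * same / len(shared)


print(f"DGA vs BAFTA agreement: {agreement(WINNERS, 'DGA', 'BAFTA'):.0f}%")
```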