Evaluating Predictions for the General Election: 2024 Leaderboard

Using the data we have, there are several ways to evaluate each forecast's performance.

Why does this matter? Forecasts set narratives, and people made decisions based on them.

Find all the code for this analysis in build_leaderboard.py.

Seat totals

How closely did each forecast predict the overall distribution of seats?

The score is the root mean squared error (RMSE) between each party's predicted and actual seat total.
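As a rough illustration, here is a minimal sketch of that calculation, assuming the error is taken over a fixed set of parties; the party names, seat counts and the `seat_total_rmse` helper are placeholders, not taken from build_leaderboard.py or the real results.

```python
import math

# Hypothetical inputs: predicted and actual seat totals per party.
# These numbers are placeholders, not the real forecasts or results.
predicted = {"Labour": 430, "Conservative": 125, "Lib Dem": 60, "SNP": 15}
actual = {"Labour": 411, "Conservative": 121, "Lib Dem": 72, "SNP": 9}

def seat_total_rmse(predicted: dict[str, int], actual: dict[str, int]) -> float:
    """RMSE between predicted and actual seat totals, over the parties in `actual`."""
    squared_errors = [
        (predicted.get(party, 0) - seats) ** 2 for party, seats in actual.items()
    ]
    return math.sqrt(sum(squared_errors) / len(squared_errors))

print(round(seat_total_rmse(predicted, actual), 1))  # ≈ 11.8 for these placeholders
```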

| Model | Score (RMSE, seats) |
| --- | --- |
| Britain Predicts (New Statesman) | 7.4 |
| Manifold | 8.9 |
| Exit Poll | 9.9 |
| YouGov | 10.2 |
| More in Common | 10.4 |
| electionmaps | 11.1 |
| The Economist | 11.2 |
| JLP | 13.0 |
| Sam Freedman | 14.8 |
| Focaldata | 14.9 |
| The FT | 16.2 |
| Ipsos | 19.4 |
| Electoral Calculus MRP | 26.4 |
| WeThink | 26.9 |
| Electoral Calculus STM | 29.6 |
| Survation | 34.6 |
| Savanta | 45.0 |

Correct seat calls

How many seats did each forecast predict correctly?

The score is the number of seats whose winner was called correctly, alongside the percentage of all seats.
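A minimal sketch of how this tally might be computed, assuming each forecast reduces to a single predicted winner per constituency; the seat names, winners and the `correct_calls` helper are hypothetical, not the project's actual code.

```python
# Hypothetical inputs: predicted and actual winning party per constituency.
predicted_winner = {"Seat A": "Labour", "Seat B": "Conservative", "Seat C": "Labour"}
actual_winner = {"Seat A": "Labour", "Seat B": "Labour", "Seat C": "Labour"}

def correct_calls(predicted: dict[str, str], actual: dict[str, str]) -> tuple[int, float]:
    """Count of seats whose predicted winner matched the result, plus the percentage."""
    correct = sum(1 for seat, winner in actual.items() if predicted.get(seat) == winner)
    return correct, 100 * correct / len(actual)

n, pct = correct_calls(predicted_winner, actual_winner)
print(f"{n} ({pct:.1f}%)")  # -> "2 (66.7%)" for these placeholder seats
```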

| Model | Correct calls | % of seats |
| --- | --- | --- |
| YouGov | 573 | 90.8% |
| electionmaps | 569 | 90.2% |
| Sam Freedman | 564 | 89.4% |
| The Economist | 555 | 88.0% |
| Manifold | 555 | 88.0% |
| Exit Poll | 552 | 87.5% |
| Britain Predicts (New Statesman) | 551 | 87.3% |
| Focaldata | 551 | 87.3% |
| More in Common | 548 | 86.8% |
| JLP | 540 | 85.6% |
| Electoral Calculus STM | 538 | 85.3% |
| The FT | 538 | 85.3% |
| Ipsos | 538 | 85.3% |
| Survation | 531 | 84.2% |
| Electoral Calculus MRP | 508 | 80.5% |
| Savanta | 497 | 78.8% |
| WeThink | 497 | 78.8% |

Vote share error

How accurately did each forecast predict vote shares for every party in every seat?

The score is the root mean squared error of predicted vote share, pooled over every party in every seat.
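A minimal sketch of this score, assuming vote shares are expressed in percentage points and the error is pooled over every (seat, party) pair; the seat names, shares and the `vote_share_rmse` helper are illustrative only.

```python
import math

# Hypothetical inputs: predicted and actual vote shares (percentage points)
# per party in each seat. Values are placeholders, not real data.
predicted = {
    "Seat A": {"Labour": 45.0, "Conservative": 30.0, "Lib Dem": 15.0},
    "Seat B": {"Labour": 38.0, "Conservative": 41.0, "Lib Dem": 12.0},
}
actual = {
    "Seat A": {"Labour": 43.5, "Conservative": 28.0, "Lib Dem": 18.5},
    "Seat B": {"Labour": 40.0, "Conservative": 37.5, "Lib Dem": 14.0},
}

def vote_share_rmse(predicted, actual):
    """RMSE of predicted vote share, pooled over every (seat, party) pair in `actual`."""
    squared_errors = [
        (predicted.get(seat, {}).get(party, 0.0) - share) ** 2
        for seat, shares in actual.items()
        for party, share in shares.items()
    ]
    return math.sqrt(sum(squared_errors) / len(squared_errors))

print(round(vote_share_rmse(predicted, actual), 1))  # ≈ 2.5 for these placeholders
```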

| Model | Score (RMSE, percentage points) |
| --- | --- |
| YouGov | 3.2 |
| The Economist | 4.0 |
| electionmaps | 4.1 |
| Focaldata | 4.5 |
| JLP | 4.5 |
| Electoral Calculus STM | 4.5 |
| More in Common | 4.5 |
| Survation | 4.8 |
| The FT | 4.9 |
| Britain Predicts (New Statesman) | 5.0 |
| Ipsos | 5.1 |
| Electoral Calculus MRP | 5.3 |
| WeThink | 5.3 |
| Savanta | 5.8 |