Just after Labor Day, American University professor and Harvard Griffin GSAS alumnus Allan Lichtman predicted a victory for Democratic candidate Kamala Harris in the 2024 presidential election. It was a source of some encouragement for Harris's supporters, given that Lichtman had correctly predicted the winner in nine of the last ten elections based on his historical analysis of campaign trends since 1860.
Despite his track record, Lichtman has been scorned by election forecasters like Nate Silver, who build probabilistic models from weighted averages of scores of national and state-level polls. But are these quantitative models really any more reliable than ones that draw on historical fundamentals, like Lichtman's, or, for that matter, than a random guess?
The Stanford University political scientist Justin Grimmer, PhD ’10, and his colleagues Dean Knox of the University of Pennsylvania and Sean Westwood of Dartmouth published research last August evaluating US presidential election forecasts like Silver's. Their verdict? Because a presidential election yields just one outcome every four years, scientists and voters are decades to millennia away from having enough data to assess whether probabilistic forecasting provides reliable insight into election outcomes. In the meantime, they see growing evidence of harm in the centrality of these forecasts and the horse-race campaign coverage they facilitate.
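To get a feel for the scale of the problem, consider a back-of-the-envelope calculation. This is a toy sketch, not the method from Grimmer, Knox, and Westwood's paper: assume forecaster A calls the winner in 80 percent of elections and forecaster B in some smaller share, and ask how many elections a standard two-proportion test would need before it could tell them apart. The hit rates, significance level, and power target below are all illustrative assumptions.

```python
from math import ceil
from statistics import NormalDist

def elections_needed(p_a, p_b, alpha=0.05, power=0.80):
    """Rough sample size for a one-sided two-proportion z-test
    (normal approximation): how many elections are needed to detect
    that forecaster A's hit rate p_a beats forecaster B's p_b.
    All inputs are illustrative assumptions, not estimates from
    the Grimmer, Knox, and Westwood paper."""
    z = NormalDist()
    z_alpha = z.inv_cdf(1 - alpha)   # one-sided critical value
    z_beta = z.inv_cdf(power)        # power requirement
    var = p_a * (1 - p_a) + p_b * (1 - p_b)
    return ceil((z_alpha + z_beta) ** 2 * var / (p_a - p_b) ** 2)

# One US presidential election every four years.
for p_a, p_b in [(0.80, 0.50), (0.80, 0.70), (0.80, 0.75)]:
    n = elections_needed(p_a, p_b)
    print(f"{p_a:.0%} vs {p_b:.0%}: ~{n} elections (~{4 * n:,} years)")
```

Under these toy assumptions, even the easy comparison against a coin flip takes roughly a century of election outcomes, and a close comparison between two skilled forecasters stretches past three millennia, which is the flavor of the "decades to millennia" finding.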
This month on Colloquy: Justin Grimmer on the reliability of probabilistic election forecasts.