Gretchen Whitmer dominated the Michigan governor’s race, winning by a large margin. Our polling firm, Cygnal, ran an experiment there to test our Momentum tracking methodology against traditional tracking polls.
In Michigan we publicly released the traditional-approach results; in a similar experiment running in Ohio at the same time, we instead released the Momentum tracking results.
Interestingly, Cygnal Momentum tracking, which had Whitmer winning by 7, was closer to the actual Michigan gubernatorial election results than the traditional approach that had Whitmer by 3.
Traditional tracking as conducted by other firms works by polling a subset of voters in each round of surveying rather than a full sample. You then pool the three most recent rounds of survey responses, look at enthusiasm, vote history, and a few other factors, and weight the combined sample to an expected turnout. This approach showed a growing Republican enthusiasm gap that produced a closer ballot in the poll but didn’t materialize on Election Day.
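A minimal sketch of that rolling-window approach, under our assumptions about how such a tracker is typically built (the data, group labels, and turnout weights below are illustrative, not any firm's actual model):

```python
from collections import Counter

def pooled_estimate(rounds, turnout_weights):
    """Pool the three most recent survey rounds, then reweight each
    respondent by the expected turnout of their vote-history group."""
    recent = [resp for rnd in rounds[-3:] for resp in rnd]  # flatten last 3 rounds
    tally = Counter()
    total = 0.0
    for resp in recent:
        w = turnout_weights[resp["group"]]  # weight toward assumed turnout
        tally[resp["choice"]] += w
        total += w
    return {choice: weight / total for choice, weight in tally.items()}

# Illustrative data: three tiny rounds of respondents.
rounds = [
    [{"choice": "D", "group": "frequent"}, {"choice": "R", "group": "frequent"}],
    [{"choice": "D", "group": "frequent"}, {"choice": "R", "group": "infrequent"}],
    [{"choice": "D", "group": "infrequent"}, {"choice": "R", "group": "frequent"}],
]
# Assumed turnout model: frequent voters count more heavily.
weights = {"frequent": 1.0, "infrequent": 0.6}

shares = pooled_estimate(rounds, weights)
```

Because each public release pools only the last three rounds, a short-lived swing in enthusiasm can move the reported ballot even when the underlying electorate hasn’t changed.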
Cygnal Momentum tracking, which we used internally for comparison but didn’t release in Michigan, uses the same survey response data but treats it differently, looking at sub-trends within voter groups. It then projects those trends forward, essentially forecasting what changes will happen within the electorate. Think of it as polling meets predictive data science.
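The actual Momentum methodology is proprietary, but the idea of projecting subgroup trends forward can be sketched with something as simple as a per-group linear fit extrapolated one round ahead (all subgroup series and sizes below are made up for illustration):

```python
def project_subgroup(series, steps=1):
    """Fit a simple least-squares line to one subgroup's support over
    past rounds and extrapolate it forward -- a stand-in for whatever
    forecasting model a firm actually uses."""
    n = len(series)
    mean_x = (n - 1) / 2
    mean_y = sum(series) / n
    slope = sum((x - mean_x) * (y - mean_y) for x, y in enumerate(series)) \
        / sum((x - mean_x) ** 2 for x in range(n))
    # Evaluate the fitted line `steps` rounds past the last observation.
    return mean_y + slope * ((n - 1 + steps) - mean_x)

# Illustrative sub-trends: Democratic support by subgroup over four rounds.
trends = {
    "suburban": [0.52, 0.54, 0.55, 0.57],  # drifting toward D
    "rural":    [0.30, 0.29, 0.29, 0.28],  # drifting away
}
sizes = {"suburban": 0.6, "rural": 0.4}  # assumed shares of the electorate

# Forecast: project each subgroup forward, then aggregate by group size.
forecast = sum(sizes[g] * project_subgroup(s) for g, s in trends.items())
```

The contrast with the rolling-window approach is that the latest rounds are treated as points on a trajectory to extrapolate, not just a sample to reweight.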
At the end of the day, Momentum tracking was more accurate than the traditional tracking poll approach. This is also evidenced by our Cygnal Momentum tracking experiment in Ohio, where we showed J.D. Vance winning by 6.2%; as of Wednesday morning, he had won by 6.6%.
My two biggest takeaways from the Michigan and Ohio experiments are that enthusiasm doesn’t matter as much as vote history, and that well-done polling paired with data science deserves more trust than traditional polling approaches.