ALSen Polling Wasn’t All Wrong: How We Did It Differently

Election Day in Alabama has passed. Thank GOODNESS! The yard signs are disappearing; the headlines (believe it or not) are fading; and Roy Moore’s horse, Sassy, is back in the barn.

Republican strategists are left to make sense of the chaos and explain how things went so horribly wrong on December 12th – or really in the entire election process leading up to it.

After the Republican Primary Runoff, Moore had a clear path to victory. Alabama is one of the most conservative states in the country, Moore was the Republican nominee with an abnormally faithful voting base, and he was facing a liberal Democratic opponent.

By the first of November, polls were showing Moore’s lead over Jones topping 10 points. Then the Washington Post article hit.

In the week leading up to the election, polls of every methodology were released, showing results that ranged from Moore +9 to Jones +10. The disparity in polling even led popular aggregator FiveThirtyEight to publish an article questioning the methodologies (IVR vs. live) used by pollsters in their Alabama surveys.

Live-collection surveys with large cell phone components typically skewed toward Jones, while IVR-only collection showed Moore leading. Monmouth University’s December 9th poll, which showed the Senate race tied, has been hailed as the public survey closest to the Election Day result.

Although not conducted for public release, Cygnal’s own internal polling for private clients showed the Moore-Jones race tied – down to the tenth of a percent – a full week before the Monmouth survey. In addition, our models showed Jones sneaking out a win if turnout exceeded 1.25 million.

So why were so many pollsters wrong, and how did we get it right? The bottom line comes down to turnout modeling and approach to sample collection.

Varying Turnout Models

Predicting turnout in this election was difficult to say the least. A special election in December for a federal office with one candidate damaged by allegations of sexual misconduct…the possibilities – and jokes – were endless.

Varying turnout models were possibly the most significant factor in the inconsistent results among pollsters. Very few pollsters (and politicos) predicted turnout would be as large as it was on Election Day. The Alabama Secretary of State’s turnout prediction was just over 800k (25%).

Turnout modeling is as much an art as it is a science, and our process produced one of the closest turnout predictions to the actual result. Our final survey was based on a turnout of 1.2 million, due in part to our belief that this election would mirror the 2010 general election.

Our turnout modeling methodology is built around a process that lets the sample “float” somewhat while putting up guardrails so it can’t get out of hand, a problem seen in some traditional survey sampling techniques. Thanks to this approach, we were able to see a huge jump in screen-ins of lower-propensity voters during the month of November without ending up with a “bad” sample full of voters who were merely eager to participate. Apparently that trend continued, and it’s what beat Roy Moore.
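
To make that concrete, here is a simplified sketch of what a floating sample with guardrails can look like. Everything in it (the propensity tiers, baseline shares, and tolerance) is an illustrative placeholder, not our production model.

```python
import random

# Illustrative "float with guardrails" screen. The propensity tiers,
# baseline shares, and tolerance are placeholders, not Cygnal's model.

BASELINE = {"high": 0.55, "mid": 0.30, "low": 0.15}  # assumed electorate mix
TOLERANCE = 0.08  # each tier may float up to 8 points above its baseline

def accept(tier, counts, total):
    """Accept an interview unless it pushes its tier past the guardrail."""
    if total == 0:
        return True
    projected_share = (counts[tier] + 1) / (total + 1)
    return projected_share <= BASELINE[tier] + TOLERANCE

def collect(stream, target_n):
    """Fill the sample from a stream of screened-in respondents."""
    counts = {tier: 0 for tier in BASELINE}
    for tier in stream:
        total = sum(counts.values())
        if total >= target_n:
            break
        if accept(tier, counts, total):
            counts[tier] += 1
    return counts

# Simulate a response stream where low-propensity voters screen in at
# an unusually high rate, as happened in November.
random.seed(2017)
stream = (random.choices(("high", "mid", "low"), weights=(45, 25, 30))[0]
          for _ in range(100_000))
print(collect(stream, target_n=800))
# The low tier floats above its 15% baseline but is capped near 23%,
# so the jump in screen-ins registers without hijacking the sample.
```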

We then created Election Day models based on voter propensity. While we had the race tied if 1.2 million voters showed up, we knew that turnout higher than that would result in a Jones victory.
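
A stylized version of that scenario math: hold the base electorate’s split fixed and let every marginal voter above the base lean toward Jones. The 52% marginal lean below is a hypothetical figure chosen only to show how the crossover emerges; it is not our actual model.

```python
# Stylized turnout-scenario math, not the actual model. The base split
# mirrors the tied topline; the 52% marginal lean is a hypothetical
# number chosen only to illustrate where the crossover lands.

BASE_TURNOUT = 1_200_000
BASE_JONES_SHARE = 0.4992     # effectively tied at 1.2 million voters
MARGINAL_JONES_SHARE = 0.52   # assumed lean among voters above the base

def jones_share(turnout):
    """Blend the base electorate with marginal voters above the base."""
    if turnout <= BASE_TURNOUT:
        return BASE_JONES_SHARE
    marginal = turnout - BASE_TURNOUT
    return (BASE_TURNOUT * BASE_JONES_SHARE
            + marginal * MARGINAL_JONES_SHARE) / turnout

for turnout in (1_150_000, 1_200_000, 1_250_000, 1_300_000, 1_350_000):
    share = jones_share(turnout)
    print(f"{turnout:>9,} voters -> Jones {share:.1%} (margin {2*share - 1:+.1%})")
# With these placeholder numbers the race tips to Jones once turnout
# clears roughly 1.25 million.
```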

Outdated Methodologies

One of FiveThirtyEight’s major critiques was the IVR-only methodology used by multiple pollsters. This is a valid concern, as IVR calls can’t be sent to cell phones due to FCC regulations.

An IVR-only methodology leads to a sample that, in our experience, skews older, whiter, and more Republican – but I repeat myself. This largely explains why multiple pollsters showed Moore with a lead outside of the margin of error.
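
A quick back-of-the-envelope shows how much that skew can move a topline. The shares below are hypothetical, but the mechanism is real: if cell-only voters lean toward Jones and the sample can’t reach them, the estimate drifts toward Moore.

```python
# Back-of-the-envelope coverage bias with hypothetical shares. The
# mechanism is real even if the numbers are made up: voters an IVR
# dialer can't legally reach lean differently than those it can.

cell_only_share = 0.35   # assumed share of electorate with no landline
moore_landline = 0.54    # assumed Moore share among landline-reachable voters
moore_cell_only = 0.42   # assumed Moore share among cell-only voters

full_coverage = ((1 - cell_only_share) * moore_landline
                 + cell_only_share * moore_cell_only)

print(f"IVR-only estimate:      Moore {moore_landline:.1%}")
print(f"Full-coverage estimate: Moore {full_coverage:.1%}")
# Moore +8 collapses to a dead heat once cell-only voters are counted.
```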

While using IVR-only can produce somewhat usable results in a Republican primary, its use in a general election should be avoided.

We instead curated an accurate sample using our hybrid methodology: IVR to landline phones, plus a representative sample of cell phones collected by live operators. This allowed us to account for discrepancies between the two collection methods.
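
Here is a minimal sketch of the blending step, assuming a simple frame-level post-stratification. The completes and electorate shares are hypothetical placeholders; a real hybrid design would weight on far more than the phone frame.

```python
# Minimal sketch of the frame-blending step, assuming a simple
# frame-level post-stratification. Completes and electorate shares
# are hypothetical; a real design weights on far more dimensions.

completes = {"landline_ivr": 450, "cell_live": 350}     # interviews per frame
electorate = {"landline_ivr": 0.45, "cell_live": 0.55}  # modeled frame shares

total = sum(completes.values())

# Weight each respondent so the blended sample mirrors the electorate:
# weight = target share / observed share of that frame in the sample.
weights = {
    frame: electorate[frame] / (completes[frame] / total)
    for frame in completes
}

for frame in completes:
    weighted_share = completes[frame] * weights[frame] / total
    print(f"{frame}: weight {weights[frame]:.3f}, "
          f"weighted share {weighted_share:.0%}")
```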

Conclusion

This Senate race – and its unexpected (to other people) turnout – was unlike any we’ve ever seen in Alabama. Despite the unprecedented nature of the race, Cygnal’s proven methodologies won out. While the rest of the world held its breath watching returns, our clients rested easy on election night knowing that Alabamians were going to say goodbye to Moore and the horse he rode in on.