What Makes for a Good Pollster in a Crazy Election Season?

Polling is the killer forecasting app of any election season, even one as turbulent as the U.S. primary season in 2016. Experts such as Nate Silver, editor-in-chief at the website FiveThirtyEight, make their forecasts largely based on their interpretations of polling data.

Polls are also among the most closely scrutinized forecasting tools because inaccurate election predictions make for big headlines, especially at the presidential level. This year, the most striking example so far is Democratic candidate Bernie Sanders's victory over Hillary Clinton in Michigan. In the month leading up to the primary in that state, not a single poll had Clinton leading by less than five percentage points, and many had her winning by much more. Largely as a result, FiveThirtyEight gave Clinton a greater than 99% chance of winning.

In June 2016, FiveThirtyEight published an article on the state of polling and rated pollsters on a scale from A+ to F. According to its rankings, only five pollsters received an A+ rating.

So, what qualities do these five organizations have in common?

  • First and foremost, they are more predictive. The predictive rating is based on a “projection of how accurate a pollster’s survey will be in future elections relative to other polls, based on a combination of a pollster’s historical performance, the number of polls it has in the database and our proxies for methodological quality.” This score determines the grade that FiveThirtyEight gives. All of the organizations with A+ ratings had a Predictive +/- score of -1.1; negative scores indicate higher quality.
  • They use real people, not robots, to call cellphones when polling.
  • They do not use Internet-based polls. That doesn’t necessarily mean Internet polls are all bad, however: one pollster that uses Internet polling, Ipsos, received a grade of A-, and FiveThirtyEight no longer imposes an explicit penalty on organizations that use Internet polls.
  • They have high standards, including a willingness to release their raw data. They are members of the National Council on Public Polls (NCPP), supporters of the American Association for Public Opinion Research (AAPOR) Transparency Initiative, or contributors of raw data to the Roper Center archive. As FiveThirtyEight explains, “AAPOR and NCPP do some vetting of applicants and require them to meet certain disclosure and professional standards before joining. This vetting process, along with self-selection in which firms choose to participate in these groups, tends to screen out firms with poorer methodological standards.”
  • They call most races correctly, though not every race: the Field Poll called 100% of races correctly, while the ABC News/Washington Post poll got only 78% right.
  • Their average errors tend to be low. Average error is the difference between the margin a poll predicted between the top two finishers and the actual margin in the election (see the sketch after this list).
  • Their average error scores compare favorably with those of other polling firms surveying the same races.
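To make the last two points concrete, here is a minimal Python sketch of how an average error could be computed. The poll figures are hypothetical, and this is a simplified version of the idea; FiveThirtyEight's published ratings apply additional adjustments beyond a raw average.

```python
# A minimal sketch of the "average error" idea described above, using
# hypothetical numbers (the poll figures below are invented for
# illustration; FiveThirtyEight's actual calculation is more involved).

def poll_error(predicted_margin, actual_margin):
    """Absolute gap, in percentage points, between the margin a poll
    predicted for the top two finishers and the actual margin."""
    return abs(predicted_margin - actual_margin)

def average_error(polls):
    """Mean error across (predicted_margin, actual_margin) pairs."""
    return sum(poll_error(p, a) for p, a in polls) / len(polls)

# Hypothetical example: three polls of the same race, where the winner's
# actual margin over the runner-up was 4 points.
polls = [(7.0, 4.0), (2.0, 4.0), (5.0, 4.0)]
print(average_error(polls))  # (3 + 2 + 1) / 3 = 2.0 points
```

A firm whose average error, computed this way, is consistently smaller than that of other firms surveying the same races would score well on the last criterion above.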

In short, quality pays. More accurate pollsters tend to use more expensive methods (such as using real humans to make calls) and they are more willing to disclose and share raw data. When it comes to forecasting election results, cheap shortcuts tend to cloud the future.
