The poll that matters is the one that happens on Election Day
Political prognosticators have had a rough year. Brexit stands out, of course, as do a number of upsets in the U.S. primary season. But the degree to which the large majority of pollsters and analysts got the U.S. presidential race wrong takes the cake.
Of 67 national polls tracking the race since October, only four gave Trump the lead, according to the website RealClearPolitics.
What went wrong? The professionals will be investigating the question for some time to come, but here are ten leading suspects right now.
Issue One: More People Use Caller ID
People are using caller ID to screen out pollsters. This is a problem for several reasons. First, it’s simply harder to get anyone to take a poll. Second, pollsters can’t really tell if the people they finally speak with do a good job of representing the population as a whole. Because so many people are screening themselves out, the random sampling methods (so integral to proper polling) are no longer as valid.
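The problem can be seen with a little arithmetic. Here is a minimal sketch, using entirely hypothetical numbers, of what happens when two equal-sized groups of voters answer the phone at different rates and no correction is applied:

```python
# Deterministic sketch of nonresponse bias. Two equal-sized groups;
# group 1 screens calls far more often. All numbers are hypothetical.

def expected_poll_result(support, response_rate, pop_share):
    """Support level an uncorrected phone poll converges to
    when groups respond at different rates."""
    total = sum(r * s for r, s in zip(response_rate, pop_share))
    return sum(sup * r * s / total
               for sup, r, s in zip(support, response_rate, pop_share))

support       = [0.60, 0.40]   # true candidate support in each group
response_rate = [0.05, 0.15]   # group 1 rarely picks up the phone
pop_share     = [0.50, 0.50]   # groups are equal in the population

true_support = sum(s * p for s, p in zip(support, pop_share))
polled = expected_poll_result(support, response_rate, pop_share)
print(true_support, round(polled, 2))  # true 0.5, but the poll reads 0.45
```

Even though the two groups are the same size, the poll is dominated by whoever picks up the phone, and the estimate drifts a full five points from the truth.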
Issue Two: It Was Hard to Determine Likely Voters This Year
Donald Trump was a very unusual candidate, and he was able to bring in a larger group of historically unlikely voters.
Issue Three: Close Elections Are Tough to Call Because of Polling Errors
In today’s polling world, an error of 2 to 3 percentage points is normal. When races are close, as this one was, a 2- or 3-point advantage falls within the margin of error, making a confident prediction virtually impossible. Nate Silver is one of the best-known analysts in the nation. Leading up to this election, Silver’s website, FiveThirtyEight, noted that Trump was just a normal polling error behind Clinton. They wrote on November 4, “If that 2.7-point error doesn’t sound like very much to you, well, it’s very close to what Donald Trump needs to overtake Hillary Clinton in the popular vote.”
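That 2-to-3-point figure falls out of the standard margin-of-error formula for a simple random sample, which can be computed in a couple of lines (the sample size of 1,000 below is a typical value, not a figure from any specific poll):

```python
import math

def margin_of_error(n, p=0.5, z=1.96):
    """95% margin of error for a proportion p estimated
    from a simple random sample of size n."""
    return z * math.sqrt(p * (1 - p) / n)

# A typical national poll of ~1,000 respondents:
print(round(margin_of_error(1000), 3))  # ≈ 0.031, i.e. about 3 points
```

So in a race separated by 2 or 3 points, a typical poll cannot distinguish the leader from a tie, even before any of the other problems on this list come into play.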
Issue Four: Trump Voters Were Reluctant to Admit Who They Voted For
For whatever reasons, pollsters found that Trump voters were more likely to share their voting intentions when speaking to a recorded voice rather than a live one. This was especially true with women. This flips conventional wisdom on its head because polls are generally considered to be of a higher quality when they use real people to ask questions.
A similar trend was found with some Internet polls.
Issue Five: Democrats Did Not Turn Out to the Degree They Said They Would
“The turnout models appear to have been badly off in many states,” said Matt Towery of Opinion Savvy.
Titus Bond of the Remington Research Group stated, “What happened in Ohio is Republicans turned out like they usually do and Democrats did not.” Some people who said they were supporting Clinton in the polls apparently never turned up at the voting booth.
Or, in some cases, Republicans may have turned out with more enthusiasm than expected. Statistician Andrew Gelman writes, “The claim has been made that Trump’s supporters had more enthusiasm for their candidate. They were part of a movement (as with Obama 2008) in a way that was less so for Clinton’s supporters. That enthusiasm could transfer to unexpectedly high voter turnout, with the twist that this would be hard to capture in pre-election surveys if Trump’s supporters were, at the same time, less likely to respond to pollsters.”
Issue Six: There Wasn’t Enough Polling
Polling can be expensive, and sometimes there wasn’t enough. “We probably should have had more polling in Wisconsin and Michigan,” said Joshua Dyck, political science professor and co-director of the Center for Public Opinion at the University of Massachusetts-Lowell.
Issue Seven: It Was Hard to Establish Weights
Because of the difficulties of reaching a truly random sample (see Issue One), pollsters and analysts had to use statistical weighting to compensate. But this can be hard to do well, and when the weights are wrong, the error rates are increased.
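One common form of this correction is post-stratification: each respondent group is reweighted by the ratio of its population share to its sample share. A minimal sketch, with hypothetical numbers for a sample that over-represents one demographic group:

```python
# Post-stratification sketch with two hypothetical groups.
# The sample skews 30/70 toward group 2, but the population is 50/50.

def poststratify(sample_share, pop_share, support):
    """Weighted support estimate: each group's respondents get
    weight = population share / sample share."""
    weights = [p / s for p, s in zip(pop_share, sample_share)]
    num = sum(w * s * sup
              for w, s, sup in zip(weights, sample_share, support))
    den = sum(w * s for w, s in zip(weights, sample_share))
    return num / den

sample_share = [0.30, 0.70]  # group shares among respondents
pop_share    = [0.50, 0.50]  # group shares in the population
support      = [0.60, 0.40]  # candidate support within each group

raw = sum(s * sup for s, sup in zip(sample_share, support))
print(round(raw, 2), poststratify(sample_share, pop_share, support))
# raw estimate 0.46; weighted estimate 0.50
```

The catch the article points to: the weights are only as good as the assumed population shares. If pollsters weight to a turnout model that turns out to be wrong (see Issue Five), the correction itself injects error.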
Issue Eight: They Might Not Have Been Measuring the Right Things
Some experts are beginning to argue that traditional polling just can’t get at the truth, for all the reasons mentioned above. Maybe an analysis of social media would work better? Some media outlets have pointed to a couple of engineering students who co-founded a company called Tweetsense. It’s based on a system that combines natural language processing and machine learning to derive public opinion insights from social media. They predicted Trump’s win, as well as the Brexit vote.
In a different vein, the creators of an app also found it predicted Trump’s win. The app asks just one question: “Which candidate are you going to vote for?” NPR reports, “The app seemed to work. It has nearly 200,000 verified users — not just signups or Twitter bots or trolls, but citizens the startup has crosschecked with voter registration records to confirm identity.”
Issue Nine: The Third Party Collapse
At the last second, many people who said they were going to vote for Gary Johnson decided to vote for one of the two major-party candidates. For now, the guess is that most of the advantage went to Trump.
Issue Ten: The Polls Themselves
Given that virtually all the polls were calling the race for Clinton, it’s possible that some Democrats didn’t feel the need to vote, believing Clinton couldn’t lose.
A lot needs to change in the polling world. Pollsters need to get a better handle on enthusiasm gaps and statistical weighting, combine various polling techniques, and experiment with new methods such as social media analysis. Even then, though, there will likely be more big surprises in political races. Despite all the new technologies and gigabytes of data, the future remains a hauntingly elusive and often surprising place.