Norman Ornstein of the American Enterprise Institute and Alan Abramowitz of Emory University call for an end to the “polling insanity.”
In this highly charged election, it’s no surprise that the news media see every poll like an addict sees a new fix. That is especially true of polls that show large and unexpected changes. Those polls get intense coverage and analysis, adding to their presumed validity.
The problem is that the polls that make the news are also the ones most likely to be wrong. And to folks like us, who know the polling game and can sort out real trends from normal perturbations, too many of this year’s polls, and their coverage, have been cringeworthy.
Take the Reuters/Ipsos survey. It showed huge shifts during a stretch when there were no major campaign events. There is a robust body of scholarship, based on sophisticated panel surveys, demonstrating remarkable stability in voter preferences, especially in a time of intense partisan polarization and tribal political identities. The chances that the shifts seen in these polls are real, rather than artifacts of sample design and polling flaws? Close to zero.
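To get a sense of how much movement ordinary sampling error alone can produce, a rough back-of-the-envelope calculation helps (the sample size and candidate share below are illustrative assumptions, not figures from any of the surveys discussed here):

```python
import math

# Hypothetical illustration: sampling noise alone can produce
# apparent "shifts" between consecutive polls of a stable electorate.
p = 0.45      # assumed candidate share (illustrative only)
n = 1000      # assumed sample size per poll

se_one_poll = math.sqrt(p * (1 - p) / n)   # standard error of a single poll
moe_95 = 1.96 * se_one_poll                # ~95% margin of error

# The gap between two independent polls is noisier than either poll alone.
se_diff = math.sqrt(2) * se_one_poll
moe_diff_95 = 1.96 * se_diff

print(f"Margin of error, single poll:      ±{moe_95:.1%}")       # ≈ ±3.1%
print(f"Margin of error, poll-to-poll gap: ±{moe_diff_95:.1%}")  # ≈ ±4.4%
```

In other words, a poll-to-poll “swing” of four points or so is well within what pure chance can generate even when no voter has changed his or her mind; differences in sample design add still more noise on top of that.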
What about the neck-and-neck race described in the NBC/Survey Monkey poll? A deeper dig shows that 28 percent of Latinos in this survey support Mr. Trump. If the candidate were a conventional Republican like Mitt Romney or George W. Bush, that wouldn’t raise eyebrows. But most other surveys have shown Mr. Trump eking out 10 to 12 percent among Latino voters.
The Quinnipiac polls have built-in problems of their own. In each of the battleground states, their samples project electorates even whiter than those states’ 2012 electorates (as measured by exit polls at the time), even though each state has since seen significant growth in its minority population.
Part of the problem stems from the polling process itself. Getting reliable samples of voters is increasingly expensive and difficult, particularly as Americans go all-cellular. Response rates have plummeted to 9 percent or less.
The alternative, online surveys, may have promise, but as the data journalist Nate Cohn has pointed out, the inability to ask nuanced questions can distort results. Many surveys of both varieties do not ask questions in Spanish, muddying the results among Latino Americans. Question wording and question order can have big effects on outcomes as well.
With low response rates and other issues, pollsters try to massage their data to reflect the population as a whole, weighting their samples by age, race and sex. But that makes polling far more of an art than a science, and some surveys build in distortions, having too many Democrats or Republicans, or too many or too few minorities. If polling these days is an art, there are a lot of mediocre or bad artists.
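A stripped-down sketch of that weighting step shows where the artistry enters (the demographic shares below are invented for illustration, not taken from any of the polls discussed here; real pollsters typically weight on several variables at once, often by iterative “raking”):

```python
# Minimal sketch of demographic weighting with made-up shares.
population_share = {"white": 0.64, "black": 0.12, "latino": 0.16, "other": 0.08}  # assumed targets
sample_share     = {"white": 0.74, "black": 0.09, "latino": 0.10, "other": 0.07}  # assumed raw sample

# Each group's weight scales its respondents up or down until the
# weighted sample matches the assumed population shares.
weights = {group: population_share[group] / sample_share[group]
           for group in population_share}

for group, w in weights.items():
    # Underrepresented groups get weights above 1.0, so a handful of
    # their respondents can move the weighted topline noticeably.
    print(f"{group:>6}: weight = {w:.2f}")
```

When a group is badly underrepresented in the raw sample, the few respondents who did answer carry outsize weights, which is one way a small, unrepresentative subsample can swing a poll’s headline number by several points.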