Do polls need a "warning label" this year?
Side effects may include panic, irrational exuberance, and overconfidence.
Quick housekeeping note: later this week is "Ask Away" time! Send in your questions!
We are less than fifty days from the 2022 midterm elections, and this means the cadence of public polling releases is about to pick up - big time. More and more outlets and organizations want to get their numbers out into the public to inform or to shape the conversation in some way.
I'm very glad that there is a robust market for polling data. It is my career! Thank you to everyone who consumes polling data. (And an even bigger thank you to those who TAKE polls.)
But I think it is worth putting forward a few of my general guidelines for how to be a smart consumer of publicly-released polling data. As Nate Cohn has wisely pointed out, there's always a lurking possibility of a big industry-wide polling miss. I'm a believer in the value of polls, but I'm never dismissive of those who are skeptics. As a pollster myself, I know all too well the struggles of our industry and the myriad things that can go wrong with a survey.
There will be a lot of people opining about numbers or sharing only those that support their worldview in the coming weeks. Here's how to be smart about what you read.
1) Don't look for individual data points. Look for trends. Picking an individual poll and letting it drive your view of how a particular race might go is very fraught - especially when there is a wide spread of polls in a race. Polls in late August in Arizona, for instance, were pretty favorable toward incumbent Sen. Mark Kelly (D-AZ) versus his opponent Blake Masters. My firm was among those pollsters showing Kelly with a substantial lead during Labor Day weekend. However, that poll is now two weeks old, and in a fast-moving environment with more and more money being pumped into campaign advertising, things can change rapidly. More recent polls from well-rated pollsters show the race considerably closer. Even within-pollster trends show the same thing; Trafalgar, for instance, shows Masters having closed up a bit since their late August poll. Focus more on which direction things are moving.
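To make the "trends, not data points" idea concrete, here's a minimal sketch of a rolling polling average. The poll margins below are entirely hypothetical, invented for illustration; they are not the actual Arizona numbers.

```python
# Minimal sketch: why a trend line beats any single poll.
# The poll numbers below are hypothetical, for illustration only.
from statistics import mean

# (days before the election, Dem-minus-GOP margin in points)
polls = [(70, 8), (62, 5), (55, 9), (48, 4), (40, 3), (33, 2)]

def rolling_margin(polls, window=3):
    """Average the margin over a sliding window of consecutive polls."""
    margins = [m for _, m in polls]
    return [round(mean(margins[i:i + window]), 1)
            for i in range(len(margins) - window + 1)]

print(rolling_margin(polls))  # prints [7.3, 6.0, 5.3, 3.0]
```

No single poll here tells you much on its own (one shows a 9-point lead, another 2), but the smoothed sequence shows the direction clearly: the race is tightening.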
2) Remember that even good pollsters generate outliers. So yeah...about that poll showing Kelly up by 15! Yikes!
Even a pollster that is doing everything they can to do good work can get a result from time to time that is...wonky.
I'll turn the mic back over to Nate Cohn for a second...
Echelon Insights, a Republican polling firm, released some of the most eye-popping numbers of the campaign season on Tuesday. Democratic Senate candidates led by jaw-dropping margins, including 15 points in Arizona, 21 points (!) in Pennsylvania, 10 points in Georgia and six points in Ohio. You can see the full results here.
The wildly favorable figures for Democrats are strange enough, but what’s really perplexing is that the numbers often seem eminently reasonable in the context of the last presidential race in the very same states. Take Ohio: In the Echelon poll, Donald J. Trump led Mr. Biden, in a projected 2024 rematch, by a very 2020-like eight-point margin.
My business partner Patrick talked to Nate at length in this article about his theories on why our poll showed a few results that were in the "yikes!" bucket. Note: we are not out there claiming that we actually think that the Democratic candidate will win by 21 in Pennsylvania. (Though we are not alone with such an "eye-popping" result; this poll from fellow GOP polling firm Public Opinion Strategies found something similar in mid-August.)
Not only is the poll from more than two months before the election, but we transparently note that we are using a methodology that could make our underlying samples more likely to have been exposed to message testing as part of online panels.
Patrick says these methods are likely the future of polling. Cohn concurs in part but says these methods may not be ready for prime-time yet. That's totally fine! A good debate to have. It's also possible that most of our numbers are on target for where the race stood over Labor Day weekend, but a few of the states where pollsters are badgering people the most to take surveys are the ones where the sample well is tapped driest.
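The deeper point about outliers is purely statistical: even a perfectly unbiased poll will sometimes miss by a wide margin through sampling error alone. A minimal simulation makes this concrete (the sample size and poll count are illustrative assumptions, not a model of any real survey):

```python
# Minimal sketch: even an unbiased poll produces occasional outliers
# from sampling error alone. Parameters are illustrative assumptions.
import random

random.seed(42)
true_dem_share = 0.50   # assume a genuinely tied race
n_respondents = 800     # a typical statewide sample size
n_polls = 1000

margins = []
for _ in range(n_polls):
    # Each respondent independently picks the Democrat with probability 0.5.
    dem = sum(random.random() < true_dem_share for _ in range(n_respondents))
    # Dem-minus-GOP margin, in percentage points.
    margins.append(100 * (2 * dem - n_respondents) / n_respondents)

big_misses = sum(abs(m) >= 5 for m in margins)
print(f"{big_misses} of {n_polls} simulated polls miss by 5+ points")
```

With these assumptions, roughly one poll in six lands 5 or more points away from the true 50-50 split, despite zero methodological error. Real-world problems like panel exhaustion and differential response come on top of that baseline noise.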
3) Pay attention to question wording. This is the one that doesn't necessarily require any special methodological training. Apply common sense to discern whether a poll finding actually means what it is being sold as.
For instance, there's been a lot of hubbub made about the Wall Street Journal poll showing a change in people's view on a 15-week abortion ban from the spring until now. But the question is worded differently in the most recent iteration, axing wording specifically outlining various exceptions. Given that we know such exceptions are quite popular, of course you'll get a lower number without including them, irrespective of the news environment!
Meanwhile, Fox News, asking about a different matter (whether Roe should be upheld or overturned) but with consistent wording, shows very modest movement from 2020 and basically the same result as in 2019. While it wouldn't necessarily surprise me to see a change in public opinion around abortion in light of the Dobbs decision, there are lots of ways to generate big headlines that overstate the matter - and all require readers to not peek too closely at question wording.
Bottom line? Honestly, just take everything with a grain of salt.
In 2020, some polls were right and some were wrong. There wasn't a clear pattern to which states were off (unlike 2016, when error was concentrated in the Midwest). There wasn't a clear pattern to which types of polls were off.
By 2021, there were some green shoots, with some polls getting it right. (Ahem: my firm's poll getting it pretty darn close in Virginia's governor's race.)
But whether through panelist exhaustion, ever-declining response rates, or certain types of voters (read: educated progressives) being amped to take polls while others (read: low social trust voters) opt out, plenty could still be off, even with the best efforts taken to fix it. We are always learning.
Consume polls, but do so thoughtfully, carefully, and without letting any one data point drive your entire view of what is going on in the midterms.