Polls: Helpful Tools for Reducing Uncertainty

In light of AAPOR's recent analysis of the 2020 election polls, we consider an important question: Have polls outlived their usefulness?

Berwood A Yost

Dear Readers,

This month’s newsletter considers an important question that some have started asking in light of the American Association of Public Opinion Research’s recent analysis of the 2020 election polls: have polls outlived their usefulness?

Look for a special edition of our newsletter on August 19th that discusses the findings from our August 2021 Franklin & Marshall College Poll, which includes our first look at the 2022 US Senate Primary races.

Sincerely,

Berwood Yost

American Association of Public Opinion Research review of the 2020 polls

In an editorial on August 4th, the Times News asked a question that I’m sure is on many people’s minds: have political polls outlived their usefulness? The author’s criticism emerged in response to the American Association of Public Opinion Research (AAPOR) review of the 2020 polls[i], which found that the polls overstated President Biden’s margin of victory and, as in 2016, were generally “off the mark.” Unlike its 2016 review, where the evidence supported a number of conclusions that pollsters largely addressed in 2020, the AAPOR report could not pinpoint any specific reasons the polls were wrong in 2020. Its most consequential conclusion is probably that pollsters and those who report on polls must do a much better job of setting expectations about what polls are capable of telling us.

As I’ve recently written, measuring the “accuracy” of election polls is a more complicated notion than conversations about whether “the polls got it right” or whether “the polls can be trusted” allow. The complications are many: What’s the right way of measuring accuracy? How close to the final margin does a poll need to be to be considered accurate; is being within the poll’s margin of error close enough? What’s the lifespan of a polling estimate; how close to Election Day should a poll be conducted for it to count as a prediction?[ii]
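
To make the first of those questions concrete, here is a minimal sketch, in Python, of one common way to score a poll after the fact: the signed error on the candidate margin. The function and every number in it are hypothetical, chosen only to illustrate the calculation, not to reproduce any particular analysis.

```python
# A minimal sketch of one common accuracy measure: the signed error on the
# candidate margin. All names and numbers here are hypothetical.

def signed_margin_error(poll_dem: float, poll_rep: float,
                        actual_dem: float, actual_rep: float) -> float:
    """Poll margin minus certified margin, in percentage points.
    Positive values mean the poll overstated the Democratic candidate."""
    return (poll_dem - poll_rep) - (actual_dem - actual_rep)

# Hypothetical example: a poll showing a 50-44 race that ends 51-48.
error = signed_margin_error(50.0, 44.0, 51.0, 48.0)
print(f"Signed error on the margin: {error:+.1f} points")  # prints +3.0 points
```

Even this simple score leaves the harder questions open: how large an error is acceptable, and how close to Election Day the poll must be fielded before the score means anything.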

These complications appear early within the AAPOR report on the 2020 polls. The AAPOR report reminds us that larger polling errors have appeared in past elections (for example in 1980) and that the error in 2020 was similar to 1996. The difference is that in 1996, “expectations were not high at the time and the 1996 performance was considered acceptable” (p. 15). This, I think, is the fundamental issue that all of us who produce and report on polling confront—what’s the best way to talk about the results of a poll or what we now collectively think of as “the polls” so that expectations about what the polling means are reasonable?

Takeaways & Future Actions

If there is anything to take away from polling in recent elections, it is that everyone should be more careful about making predictions based on a single indicator of who is ahead, particularly when there is so much other data we can use to tell the story. This means not only polling data, but other indicators that might tell us what’s happening. For example, leading up to the 2020 election, we made it a point to not only discuss our polling indicators, but to also present data about unemployment rates, campaign spending, COVID deaths, early voting, and changes in voter registration as supplemental indicators worth thinking about.

It also means pollsters must do a better job of reporting on the uncertainty of their estimates and discussing other potential sources of error beyond sampling, including the tendency for polls to share similar biases in any given election cycle.
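
To see why that last point matters, the sketch below runs a small simulation with entirely made-up numbers: each poll combines random sampling noise with a bias shared by every poll in the cycle. Averaging more polls shrinks the noise but does nothing to the shared bias.

```python
# A minimal simulation sketch, with made-up numbers, of shared polling bias.
import random

random.seed(1)
true_margin = 2.0   # hypothetical true candidate margin, in points
shared_bias = 3.0   # hypothetical error common to every poll in the cycle
n_polls = 20

# Each simulated poll = truth + shared bias + its own sampling noise.
polls = [true_margin + shared_bias + random.gauss(0, 2.5) for _ in range(n_polls)]
average = sum(polls) / n_polls

print(f"True margin:  {true_margin:.1f} points")
print(f"Poll average: {average:.1f} points")  # lands near 5.0, not 2.0
```

With a different seed the average wanders a little, but it never drifts back toward the true margin, because the bias term is identical in every simulated poll.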

This need for pollsters and reporters to work together so people more clearly understand the limitations of polls is an often overlooked part of the 2020 AAPOR post-mortem. References to this are sprinkled throughout the report, but the conclusions make it clear that,

“Polls are often misinterpreted as precise predictions. It is important in pre-election polling to emphasize the uncertainty by contextualizing poll results relative to their precision. Considering that the average margin of error among the state-level presidential polls in 2020 was 3.9 points, that means candidate margins smaller than 7.8 points would be difficult to statistically distinguish from zero...putting poll results in their proper context is essential; whether or not the margins are large enough to distinguish between different outcomes, they should be reported along with the poll results. Most pre-election polls lack the precision necessary to predict the outcome of semi-close contests” (p. 71).
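
The arithmetic in that passage is worth spelling out. The sketch below simply applies the usual rule of thumb that, in a two-candidate race, the margin between the candidates can move about twice as far as either candidate’s individual share, which is how a 3.9-point margin of error becomes a 7.8-point threshold.

```python
# Rough arithmetic behind the passage above: if each candidate's share carries
# a margin of error of about 3.9 points, an overstatement of one candidate is
# an understatement of the other, so the gap between them can be off by
# roughly twice as much.
moe_share = 3.9             # average state-level poll MOE cited by AAPOR
moe_margin = 2 * moe_share  # approximate MOE on the candidate margin
print(f"Margins smaller than about {moe_margin:.1f} points are hard to "
      "distinguish from a tie.")  # about 7.8 points
```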

Trying to assess the polls’ performance by looking at a single indicator is like judging the quality of a car by its paint color–easy to judge but meaningless until you understand the rest of the vehicle’s components and how the owner plans to use it. It is baffling that so many people focus on a single indicator to assess accuracy, in this instance the horse race question that measures candidate preference, when a good poll can provide essential context for understanding an election. Truth is, no one trying to forecast a future event is wise to rely on a single indicator to make their judgments, so why should polls be treated any differently? Would we feel differently about the polls if we allowed ourselves to broaden our perspective and think about more than just the horse race?

In the end, those who look at polls will make their own judgments about the polls’ performance. Undoubtedly, many of these assessments will be motivated more by partisanship and ideology than methodological criteria. In the long run, polling has more often advanced our understanding than misled us, including in 2016 and 2020. The task for everyone who understands polling and its limitations is to do more to make sure that others understand them, too.

Everyone needs to remember that polling is a helpful tool for reducing uncertainty, not eliminating it. In my mind, polls haven’t outlived their usefulness; it is just that their primary users have tended not to use them in the ways they can be most informative.

Resources & References

i. 2020 Pre-Election Polling: An Evaluation of the 2020 General Election Polls - AAPOR


ii. You can read much more about our assessment of Pennsylvania’s 2020 election polls in a previous newsletter and our subsequent report, which can be found using the link below.

Assessing the 2020 Pennsylvania Election Polls


