
A Better Mousetrap?

As we head into an important mid-term election, a reminder that all polls have their limitations, including those that rely on new methods currently being touted as superior.

Berwood A. Yost

Dear Readers,

As we enter an election year where we are sure to encounter a great deal of polling data, I think it is helpful to think about the way that poll findings and methods are reported and discussed in the media. Today’s newsletter uses the rollout of a recently announced polling partnership to remind us that all polls have their limitations. I hope you find it interesting and a useful guide for consuming polling data in the coming months.

I’m also writing to let you know that the next Franklin & Marshall College Poll will be released on March 5, 2026.

Thank you for reading,

Berwood Yost

New Isn't Necessarily Better

"All research has limitations and researchers will include a general statement acknowledging the unmeasured error associated with all forms of public opinion research."[1] - The American Association for Public Opinion Research's Transparency Initiative Standards

The mistrust and skepticism many feel about modern polling comes as much from the way polls are written and talked about in the media as it does from the polling methods and poll results. Pollsters and those who report on polling results often do a terrible job of communicating effectively about what polls can and cannot do. This is a theme I’ve written about many times, as in this review of the 2024 polling in Pennsylvania and in this discussion about using polls better.

Reminding ourselves about the need for humility in poll reporting is going to be important as we head into a mid-term election cycle that will decide control of both the state and federal governments. Pennsylvanians will find themselves reading all kinds of polls, performed using many different approaches, each purporting to tell us with some level of certainty what’s going to happen.

I was reminded of this need when I read a local media outlet’s rollout of its new polling partnership, Why we hung up the phone on election polling, and what we learned by doing so (PennLive, January 15, 2026), which describes an approach that uses non-randomly selected participants to answer online surveys. The description was marketing malarkey dressed as methodological ingenuity. The basic implication was that someone had built a better mousetrap (a new way of polling) while all the other mousetrap manufacturers had been satisfied with their tired old product (because they use the telephone to reach people who don’t want to be reached). That characterization is nonsense.

Throughout its history, the polling industry has engaged in highly public self-study that has led to innovation and methodological adaptation. In nearly every election since the emergence of public election polling in the 1930s, pollsters themselves have gathered to review their methods, discuss what they could do better, and revise accordingly.

Don't take my word for it. The Pew Research Center wrote about changes in polling methods and found that pollsters have become less reliant on telephone-only polls and more reliant on panels. Pew reported that more than three out of every five pollsters in the United States collected their data differently in 2022 than they did in 2016. That doesn't sound like pollsters are "clinging to outdated methods" as the article suggests.

Contrary to the article’s suggestions, most reputable pollsters these days aren’t in fact relying on a single way of gathering data—the current best practice is to allow those we interview to choose how they want to participate. Some people still prefer to talk on the phone to a live interviewer while others want to answer online. The choice should be theirs.

Allowing survey participants to choose how they want to respond is a good way to improve cooperation, and it is also a good way to learn how different data collection techniques can produce different results. Just as there are limitations to collecting data by phone, there are limitations to collecting data from online respondents, as this organization is doing.

Online Non-Probability Surveys Have Their Own Limitations

The most consequential problems for online surveys are inattentive and fraudulent respondents. Inattentive survey participants have contributed to wildly incorrect estimates, with two notable examples being a sizable overestimate of the share of adults who believe political violence is acceptable, and incredible (and incredibly inaccurate) reports of people drinking bleach to prevent COVID.
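To see why even a small share of careless respondents can so badly distort estimates of rare attitudes, consider a minimal simulation. The numbers here are illustrative assumptions, not figures from any study: a true prevalence of 1 percent, and a 5 percent inattentive share who effectively answer at random.

import random

random.seed(42)

# Illustrative assumptions (not from any actual survey):
# 1% of people truly hold the rare view, and 5% of respondents
# answer carelessly, picking a response at random.
TRUE_RATE = 0.01
INATTENTIVE_RATE = 0.05
N = 10_000

def simulate_survey():
    yes = 0
    for _ in range(N):
        if random.random() < INATTENTIVE_RATE:
            # Careless respondent: a coin flip on a yes/no item.
            yes += random.random() < 0.5
        else:
            # Attentive respondent: answers truthfully.
            yes += random.random() < TRUE_RATE
    return yes / N

print(f"True rate:       {TRUE_RATE:.1%}")
print(f"Survey estimate: {simulate_survey():.1%}")  # roughly 3.5%

Even with 95 percent of respondents answering honestly, the estimate more than triples the true rate, because random answers to a rare-attitude question almost always inflate the "yes" count.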

More troubling perhaps is that a marketing research industry group suggests that thirty to forty percent of online survey panelists are fake, a significant problem for the kinds of non-probability panels this outlet's polling partner produces. Non-probability panels are constructed by finding people who are willing to join, meaning they rely on voluntary, self-selected participants who are often paid to participate. In contrast, a probability sample, which is the basis of modern polling, begins with a list that covers all or almost all of the population of interest, and survey participants are selected from that list at random.

The basic difference here is important: the assumptions underlying a probability sample are based on random sampling theory, while the assumptions underlying a non-probability sample are based on the beliefs of those conducting the research about what the sample should look like.
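To make that difference concrete, here is a small sketch in Python under hypothetical assumptions: a population in which 52 percent support a candidate, and in which supporters are twice as likely to volunteer for an opt-in panel. A random sample's error is governed by sampling theory and shrinks predictably as the sample grows; a self-selected sample's error depends on who chooses to show up.

import random

random.seed(7)

# Hypothetical population: 52% support a candidate (coded 1), 48% do not (0).
POP_SIZE = 1_000_000
TRUE_SUPPORT = 0.52
population = ([1] * int(POP_SIZE * TRUE_SUPPORT)
              + [0] * int(POP_SIZE * (1 - TRUE_SUPPORT)))

def probability_sample(n):
    # Every member of the population list has an equal chance of selection,
    # so the estimate's error follows random sampling theory.
    return sum(random.sample(population, n)) / n

def opt_in_sample(n):
    # Self-selection: assume supporters are twice as likely to volunteer.
    # The estimate now reflects who opts in, not sampling theory.
    sample = []
    while len(sample) < n:
        person = random.choice(population)
        join_prob = 0.02 if person else 0.01
        if random.random() < join_prob:
            sample.append(person)
    return sum(sample) / n

n = 1000
print(f"True support:       {TRUE_SUPPORT:.1%}")
print(f"Probability sample: {probability_sample(n):.1%}")  # near 52%
print(f"Opt-in sample:      {opt_in_sample(n):.1%}")       # near 68%: badly biased

The probability sample lands within a point or two of the truth at n = 1,000 (the familiar margin of error of roughly plus or minus three points), while the opt-in sample is off by about 16 points no matter how many respondents it adds. Weighting can shrink that gap only to the extent the researcher's assumptions about the panel are correct, which is exactly the distinction described above.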

An analysis of the 2024 polls in Pennsylvania found these methodological choices had notable effects on the results the surveys produced and, consequently, on the story they told about the election. And prior research shows that opt-in samples of the kind being touted by their designers as an innovation built on methods "for how people actually live, communicate, and participate" are about half as accurate as probability-based surveys.

The sad result of this polling partnership's rollout is that it undercuts efforts to communicate clearly about the strengths and limitations of polling at a time of deep partisan mistrust. It also undersells how much a survey's methods affect its results. Having a variety of tools for assessing public opinion is good, but failing to acknowledge the limitations and uncertainties of any given method is marketing, not journalism.

Caveat emptor.


[1] This quote is from the American Association for Public Opinion Research’s Transparency Initiative, https://aapor.org/standards-and-ethics/transparency-initiative/#1667862853284-9f9268de-36df (accessed February 4, 2026).

