Wick’s 2022 Methodology

Forecasts will favor the bold this November. Pollsters need to challenge current perceptions of what constitutes a representative sample. 2022 will present new sources of bias, and accuracy will come from the agencies that recognize that yesterday’s secret sauce is today’s Achilles heel, and that good methodology starts with a culture that embraces a little curiosity.

by David Burrell, CEO at Wick

A foolish consistency is the hobgoblin of little minds...

Ralph Waldo Emerson

The Light Version

  • Through much of 2020, Wick’s battleground polling looked a lot like the polling you saw in the news: Biden was up pretty big.

  • Nothing was noticeably wrong with the polling methodology, but something didn’t feel right: how could Trump have historic levels of Black and Hispanic support, and historic grassroots energy and numbers, yet be losing by historic margins?

  • That gut feeling made us start to ask, “Is something new causing a non-response bias in our polling that we aren’t catching?” To answer this question, we started looking at what was new about 2020.

  • We hypothesized that the propaganda-like effect of the 2020 coordination between mainstream media and big tech could be creating non-response bias among their shared political enemy (Trump and the Trump voter).

  • If this hypothesis was true, then we could be meeting our quota on Republicans while still not always getting a representative sample of the different types of Republicans.

  • We ultimately adjusted our methodology and corrected for some of this non-response bias by looking for symptoms and adjusting our sampling methods to treat those symptoms. The result was a set of polls that very accurately matched the 2020 results.

  • The lesson from 2020 isn’t about the specific quotas and sampling methods we adjusted. It’s about the need to challenge current perceptions of what constitutes a representative sample. 2022 will present new sources of bias for pollsters, and accuracy will come from the agencies that recognize that yesterday’s secret sauce is today’s Achilles heel, and that good methodology starts with a culture that embraces a little curiosity.

Part 1: The Problem

Until October 2020, Wick’s polls showed Trump losing by margins similar to what the world saw in the news. At that point, there wasn’t a good reason to question those margins: we were hitting all the traditional segment quotas, and we felt comforted by the fact that our results were in lockstep with most of our polling peers.

But some things didn’t sit right with our team. As a starting point, we had long noticed our polls showing a historic shift toward Trump among both Hispanic and African American voters, groups with two main shared characteristics: they are minority groups, and they are largely working class. Since there was no mainstream narrative painting Trump as the champion of minorities, we began to suspect a connection between this increase in support and the economic data suggesting that Trump’s policies had benefited the working class. If this assumption was true, it should follow that there would be a comparable shift among white voters of the same socioeconomic status as these African American and Hispanic supporters. But we didn’t see it anywhere in our polls.

This should have been enough to challenge the idea that our polls were representative of the voting population, but the final nudge to act on this feeling didn’t come until mid-October, when we were watching a Biden speech on TV and I couldn’t hear him over the sound of Trump supporters honking their horns. I joked that we needed to tally the honks, because out of the hundreds of polls I had run that year, this was the first I had heard from this group of voters.

But why weren’t they taking our polls?

Part 2: The Hypothesis

Accurate public opinion polling is only possible in western-style democracies where people trust the democratic process and feel free to express their beliefs and opinions. If it seems like sorcery when 700 respondents in a survey accurately predict the election day behavior of millions, the source of that magic is a healthy democracy.

Imagine the difficulty of achieving an accurate political poll, one that’s supposed to be representative of the honest beliefs of an entire population, in Communist China or North Korea. Would you trust it?

China and North Korea may seem to be extreme examples, but they’re the easiest modern-day illustrations that undemocratic societies have characteristics, such as limited freedom of expression and the use of censorship and propaganda, that make it difficult or impossible to obtain a set of survey respondents that is representative of a whole population.

In western democracies like America, having your beliefs and opinions represented through polling has been a long-standing component of participating in the democratic process. And thus, like the debate commission and the media, pollsters have been fixtures in that process. But in 2020, America started to demonstrate some pretty undemocratic characteristics that could be putting stress on the magic that allows public opinion research to be truly representative.

America has always had biased media on each side of the spectrum, but it has also been a free media. Can you imagine the impact on society if the government controlled the distribution of information so that Americans only saw what CNN or Fox News reported? Go back and read that sentence, but replace the word “government” with “big tech.” Now, big tech companies can’t make Fox News illegal. Still, they can make sure its viewpoints are censored, demoted, and largely limited to the few million viewers who watch its programming, while CNN’s viewpoints are promoted as truth to 300 million online viewers.

To put this impact plainly: 

  1. If one belief group is championed for its beliefs and another is continually shamed, attacked, or threatened, which group do you think is more likely to honestly share its viewpoints in a poll?

  2. If, for political purposes, big tech and the media intentionally censor information they know to be true and promote information they know to be false, how does that affect the perceived worth of the polls people see in the news and in their social media feeds? Could that affect their willingness to associate polls with a democratic process that they trust? If they don’t see polling as part of democracy’s feedback loop, what is the incentive to take polls in the first place?

Part 3: The Test

The problem with testing this hypothesis is that the core non-response issue couldn’t be controlled for directly with new quotas and sampling strategies. We couldn’t reasonably ask, “Are you one of those Republicans who doesn’t take polls anymore because you don’t trust them?” and do anything useful with that data. Instead, we decided to look for symptoms, adjust our quotas mid-fielding to correct for those symptoms, and then add weights where we fell short in our data collection.
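To make that last step concrete, here is a minimal sketch of post-stratification weighting, the standard way to correct for a group that is over- or underrepresented in a sample. The column names and benchmark shares below are hypothetical, not our actual targets:

import pandas as pd

def post_stratification_weights(df, column, targets):
    """Weight each respondent by (target share of their group) /
    (observed share of that group in the sample)."""
    observed = df[column].value_counts(normalize=True)
    return df[column].map(lambda group: targets[group] / observed[group])

# Hypothetical usage: early voters make up 60% of the sample but an
# assumed 45% of turnout, so they get weights below 1.
sample = pd.DataFrame({"vote_mode": ["early"] * 60 + ["election_day"] * 40})
benchmarks = {"early": 0.45, "election_day": 0.55}
sample["weight"] = post_stratification_weights(sample, "vote_mode", benchmarks)

Applied this way, the weighted sample matches the benchmark shares while every completed interview still contributes data.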

We chose six battleground states for this test and collected 1,000 completes in each from a random sample of likely and newly registered voters; all surveys were closed by November 27th. Responses were collected using IVR and Text-to-Web survey methods.

Here are a couple of the symptoms that we identified:

Symptom 1: Too many respondents with postgraduate degrees.

In 2016, the education-level breakdown among survey respondents was a root cause of inaccurate polls. Most pollsters tried to adjust for this in 2020, but if a pollster was grouping people with graduate degrees and people with post-graduate degrees into a single “college grad or higher” value (like we were!), there was still a big problem. We made graduate and post-graduate degrees separate response values in our surveys and kept a close eye on “education.” Midway through our first day of fielding, 23-31% of respondents reported having post-graduate degrees, even though that group represents only 13-16% of the turnout. These voters answered 71% for Biden, whereas those with graduate degrees answered 53% for Biden. Grouping these voters together, as we and many other pollsters did, was a symptom that needed to be corrected.
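As a rough illustration, taking the midpoints of the ranges above (the exact targets vary by state and aren’t listed here), the size of the implied correction is:

# Illustrative arithmetic only, using midpoints of the ranges above.
observed_postgrad = 0.27   # midpoint of the 23-31% day-one sample share
target_postgrad = 0.145    # midpoint of the 13-16% share of actual turnout
weight = target_postgrad / observed_postgrad
print(f"post-graduate weight = {weight:.2f}")   # about 0.54

In other words, each post-graduate respondent should count for roughly half a respondent in the projections, and a combined “college grad or higher” value makes that correction impossible because it averages a 71% Biden group with a 53% Biden group.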

Symptom 2: Too many early voters taking our polls.

There is no historical model against which to compare the 2020 voter turnout. Still, early voters (and absentee-ballot voters in particular) were extremely overrepresented after our first day of fielding. The share of respondents who answered that they had “already voted” was, on average, 16 percentage points higher than the actual share of early voters in their respective states. Across the six battleground states, 63.5% of this group reported voting for Biden, so leaving this variable unaccounted for would have pushed projections heavily in his favor.
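Here is a sketch of what that kind of mid-fielding check can look like, assuming a running list of responses and a known early-vote share for the state; the figures below are placeholders, not our actual numbers:

def early_vote_overrepresented(responses, benchmark_share, tolerance=0.05):
    """Flag a state for quota adjustment when the running share of
    'already voted' respondents exceeds the known early-vote share."""
    share = responses.count("already voted") / len(responses)
    return share - benchmark_share > tolerance

# Placeholder figures: 52% of day-one completes report having already
# voted in a state where only 36% of expected voters actually have.
day_one = ["already voted"] * 52 + ["not yet voted"] * 48
if early_vote_overrepresented(day_one, benchmark_share=0.36):
    print("Close the 'already voted' quota cell and weight any shortfall later.")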

Part 4: The Result

The punchline is that our accuracy increased when we corrected for these symptoms, and we ended the cycle at the top of the list among our polling peers.

But the result is less important than the lesson. In 2016, our company very accurately predicted Trump’s primary and general election victories, and we thought our unique 2016 methodology would be a secret sauce in elections to come. We were wrong. The lesson is that the definition of a “representative survey sample” is changing, and the pollsters who get it right in 2022 and beyond are the ones willing to sprinkle a little common sense and curiosity into their sampling processes.
