Frank Luntz: I apologize to Trump's pollster for doubting him on how tight these races would be

We’re kicking the pollsters a lot on the site today, but they deserve all of it and more. If you missed it earlier, go look at the comparison Ed posted this morning between the final RCP averages in swing states and the actual results last night. Pollsters missed by nearly five points in Texas, by six points in Wisconsin and Iowa, and by seven points in Ohio. Nearly every poll of Lindsey Graham’s Senate race taken in October had Democrat Jaime Harrison within two points or so. The biggest lead Graham saw in any survey was six. He won by 11.

It can’t be stressed enough: This wasn’t just one or two polling firms having terrible nights. The averages across dozens of polls were off dramatically. The whole point of using an average is to control for outlandish outliers and produce a more realistic sense of each race. The failure was total and systemic, and all the more disgraceful because pollsters had four years after 2016 to refine their methods and make sure they were capturing Trump voters accurately.

It wasn’t just the bad pollsters who contributed either. FiveThirtyEight gives six firms an A+ rating: Marist, Monmouth, ABC/WaPo, NYT/Siena, Selzer, and Muhlenberg. The great success story among that batch was Selzer, which had Trump winning big in Iowa a few days before he went on to take the state easily. Marist, by contrast, had Biden winning North Carolina by six and Florida by four. Monmouth had Biden taking Pennsylvania by seven and Florida by six. ABC/WaPo also had the Democrat up seven in Pennsylvania and Michigan and — wait for it — by 17 in Wisconsin. NYT/Siena saw Biden winning Florida by three, Arizona and Pennsylvania by six, and Wisconsin by 11. Muhlenberg, which sticks to Pennsylvania, found Biden ahead there by five.

What’s left of this industry?

The most notorious failure last night was probably Quinnipiac, which completely botched its polling in Florida in 2018 when it found Ron DeSantis and Rick Scott each losing by seven. Last night it had Biden winning there by five. The Federalist emailed the firm today to ask, essentially, why it still exists. Quinnipiac says it’s assessing what went wrong:

When The Federalist inquired if any of Quinnipiac’s staff would be resigning over the severe mistakes, Associate Vice President and Director of the Quinnipiac University Poll Douglas Schwartz ignored our question, and instead provided a statement promising to do better in the future.

“A full examination of what went wrong with polls this year is going to take a while. At the moment, I still need to see the final election results and final exit poll results, and without those I’m not able to make even preliminary hypotheses about what exactly the issues are,” Schwartz said in an email to The Federalist. “After the 2016 election, it took 6 months for the American Association of Public Opinion Research to release their findings about polling errors; I would expect a full evaluation of 2020 to take at least as long, though we might have some idea of the situation before then.”

It wasn’t just the public polls that failed. Although Trump pollster John McLaughlin saw a tight race, plenty of internal district-level polls that the parties themselves rely on to guide them on where to spend money also wet the bed.

The night’s success story was Trafalgar, the Republican-leaning firm that’s bet heavily on the idea that “social desirability bias” is warping the data that pollsters get. Trafalgar pollster Robert Cahaly believes that a certain cohort of “shy Trump voters” out there won’t tell pollsters the truth about supporting him. To get at that truth, Cahaly asks them instead how they think their friends and community are voting. (There was already evidence that asking people how their social circle will vote yields more predictive data than asking how they’ll vote themselves.) You can see Trafalgar’s final polling across various battlegrounds here. They ended up overestimating Trump’s support by a few points in states like Arizona, Georgia, and Michigan, but they got much closer to the mark than the competition did.
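That intuition is easy to test with a toy simulation. Below is a minimal sketch in Python; the support level, the “shy” rate, and the noise term are all invented for illustration, and it’s a cartoon of the general idea rather than Trafalgar’s actual model. The direct question undercounts Trump because shy voters misreport themselves; the social-circle question lands near the truth because nobody is embarrassed on their neighbors’ behalf.

```python
import random

random.seed(1)

# All numbers here are invented for illustration.
TRUE_TRUMP_SHARE = 0.48  # assumed true support in the electorate
SHY_RATE = 0.15          # assumed share of Trump voters who won't admit it
N = 1_000                # respondents

direct_yes = 0
circle_estimates = []
for _ in range(N):
    is_trump = random.random() < TRUE_TRUMP_SHARE

    # Direct question: a "shy" slice of Trump voters gives the safe answer.
    if is_trump and random.random() >= SHY_RATE:
        direct_yes += 1

    # Social-circle question: reporting on friends carries no personal
    # embarrassment, so model each answer as a noisy read of the truth.
    circle_estimates.append(random.gauss(TRUE_TRUMP_SHARE, 0.10))

print(f"direct question:        Trump {direct_yes / N:.1%}")             # ~41%
print(f"social-circle question: Trump {sum(circle_estimates) / N:.1%}")  # ~48%
```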

For what it’s worth, my pet theory to explain the systemic collapse last night is non-response bias. Social desirability bias is where you answer the phone and then don’t tell the pollster the truth for fear of feeling embarrassed by your answer. Non-response bias is where you simply refuse to answer the phone. A pollster might need to dial 100 people just to get one or two to take their survey; after 100,000 calls they’ll have a sample of 1,000 people. There’ll be plenty of Trump voters among that 1,000, but it may be that the type of person who’ll talk to a pollster nowadays is sufficiently unlike the average American voter that pollsters can’t get an accurate picture of the electorate from them. Distrust of institutions like the media and its polling arms may be so rampant among Trump voters in 2020 that there’s no longer a way to sample them accurately. In a random group of 1,000 voters willing to take a pollster’s call, there may be, say, 430 who support Trump. But in a random group of 1,000 voters that includes those who won’t talk to a pollster, we might find that there are more like 480 Trump voters. That difference — 50 people, presumably hardcore MAGA-ites who won’t give a pollster the time of day — is the difference between a poll showing Trump at 43 percent and an election-night result of 48 percent.
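To make that arithmetic concrete, here’s a minimal Monte Carlo sketch in Python. The response rates are made up; the only assumption doing any work is that Trump voters pick up the phone slightly less often than everyone else.

```python
import random

random.seed(42)

TRUE_TRUMP_SHARE = 0.48   # assumed true support in the electorate
CALLS = 100_000           # dials, matching the roughly 1% response rate above

# Hypothetical response rates: the mechanism is simply that Trump voters
# agree to the survey a bit less often. These exact figures are invented.
RESPONSE_RATE = {"trump": 0.009, "biden": 0.011}

sample = []
for _ in range(CALLS):
    voter = "trump" if random.random() < TRUE_TRUMP_SHARE else "biden"
    if random.random() < RESPONSE_RATE[voter]:  # did they take the survey?
        sample.append(voter)

trump_share = sample.count("trump") / len(sample)
print(f"respondents: {len(sample)}")           # roughly 1,000 of 100,000 calls
print(f"poll has Trump at {trump_share:.1%}")  # ~43%, against a true 48%
```

Note that no one in this simulation lies to the pollster. The skew comes entirely from who answers the phone, which is what makes differential non-response so much harder to correct for than social desirability bias.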

Gonna be a lot of navel-gazing about this among data nerds in the days, weeks, months, years, and decades to come. As for how anyone’s going to cover the 2022 election now that polling’s been all but discredited, God only knows. Haruspicy, maybe.
