Is polling science, art or witchcraft?

One of the stories I’ve been following here at good ol’ Hot Gas this month has been Allahpundit’s ponderings on precisely what value – if any – the polls have in gauging the temperature of the electorate. (Here and here this week.) I’ve read not only AP’s questions and coverage but also many of your responses on the subject, and I’ve got to be honest here… I was still completely confused about a couple of fundamental points.

One of the first has to do with a subject the current Oval Office occupant likes to trumpet… arithmetic. (Or, as Joe Biden might put it, a simple three-letter word: MATH.) I mean, given all of the time we spend obsessing over the polls here – as well as on every cable news channel – you’d think there was some actual, er… science behind it, wouldn’t you? Polling has been going on for longer than I’ve been alive, and there are major elections every two years. Surely by now somebody could have taken the results from the various polling agencies, compared them to the final vote totals in the myriad races, and determined who was hitting on all cylinders, right?
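For what it’s worth, the bookkeeping involved wouldn’t even be complicated. Here’s a rough sketch in Python of the kind of scorecard I have in mind – the pollster names and every margin below are invented for illustration, not actual results:

    # Score each pollster by how far its final-poll margin missed the
    # actual result, averaged across races. All figures are made up.
    # Margins are in points; positive means the Democrat was ahead.
    races = [
        ("Pollster A", +4.0, +1.5),
        ("Pollster A", -2.0, -3.0),
        ("Pollster B", +7.0, +1.5),
        ("Pollster B", +1.0, -3.0),
    ]

    misses = {}
    for pollster, predicted, actual in races:
        misses.setdefault(pollster, []).append(abs(predicted - actual))

    for pollster, errs in misses.items():
        print(f"{pollster}: average miss of {sum(errs) / len(errs):.1f} points")

Run that across a few decades of races and you’d have your answer about who was hitting on all cylinders.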

Apparently not. But some of AP’s questions suggested criteria we could use to figure out if anyone is putting their thumb on the scale and – more to the point – when. For the polls that publish their crosstabs, is there some metric that would show whether any of them are running an incredible D+1 bazillion in September and early October, but then suddenly push the margins back down during the final two weeks so their “final” predictions land closer to the actual results? It seems to me that would be a useful piece of information, and one which could have been compiled by now. But has it?
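To make that concrete, the metric could be as simple as comparing a pollster’s published partisan lean early in the fall to its lean in the home stretch. A toy version, with made-up crosstab numbers and a threshold I picked arbitrarily:

    # Does a pollster's partisan lean suddenly snap back toward
    # reality in the final two weeks? All data here is hypothetical.
    # Each entry: (days before the election, sample's D-minus-R lean).
    crosstabs = [(60, 9.0), (45, 10.0), (30, 9.5), (14, 4.0), (7, 3.5)]

    early = [lean for days, lean in crosstabs if days > 14]
    late = [lean for days, lean in crosstabs if days <= 14]

    early_avg = sum(early) / len(early)
    late_avg = sum(late) / len(late)

    print(f"Lean before the final two weeks: D+{early_avg:.1f}")
    print(f"Lean during the final two weeks: D+{late_avg:.1f}")
    if early_avg - late_avg > 3.0:  # arbitrary threshold for this sketch
        print("Flag: margins were pushed down sharply late in the race")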

Over the last couple of days I’ve been trying to track down some answers. This effort included speaking with polling analysts and pollsters themselves. The answers I received ran the gamut from things which sort of made sense when I heard them to contradictory responses which left me scratching my head. The last interview I did, and one of the most enlightening, was with Brad Coker of Mason-Dixon Polling and Research. The following are a few of my conclusions about the questions above when it comes to the science of polling.

First, there are varying claims about how pollsters arrive at the total number of people from each party affiliation (plus independents) to be surveyed in any given poll. One analyst said that it’s barely even a concern; they just pick a target number of interviews and let the chips fall where they may. (Within reason.) Brad wasn’t exactly that “hands off” about it. He said that they know from experience and previous results roughly how many people of each affiliation are out there – it does shift – but that party affiliation is one of the less reliable demographics. If he gets a split that’s wildly out of line in either direction and he has the time, he’ll field some more interviews to even it out. But the general consensus was that pollsters don’t start out shooting for “x” number of Democrats, “y” number of Republicans, and “z” independents. Take that as you will.
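Just to picture what that “within reason” check might look like, here’s a minimal sketch. The expected party-ID shares and the tolerance are placeholder numbers of my own, not anyone’s actual methodology:

    # Flag any party whose share of completed interviews drifts too
    # far from what past results would lead you to expect.
    expected = {"D": 0.36, "R": 0.33, "I": 0.31}  # assumed, from past results
    tolerance = 0.05                              # how far off is "out of line"

    sample = {"D": 420, "R": 280, "I": 300}       # interviews completed so far
    total = sum(sample.values())

    for party, count in sample.items():
        share = count / total
        if abs(share - expected[party]) > tolerance:
            print(f"{party} at {share:.0%} vs ~{expected[party]:.0%} expected"
                  " -- consider fielding more interviews")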

Second, everyone seems to agree that poll results shift as you get closer to the election. But rather than being part of some nefarious plot to influence the vote, the virtually unanimous explanation from these industry insiders is that polling a year in advance of an election is extremely tenuous. “Likely voters” at that point are nearly impossible to pin down for a variety of reasons, including people who move, people who only come of voting age shortly before the election, people who die, and folks who simply aren’t paying attention that early and may have no clue whether they’ll be voting or not. In the final weeks before the election you can build a much better likely voter model, and this will tend to shift the numbers when it happens.
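Here’s a toy example of why tightening that screen moves the topline. Every respondent, preference, and turnout score below is invented:

    # Each respondent: (candidate preference, self-reported likelihood
    # of voting on a 0-10 scale). All of it is made up for illustration.
    respondents = [
        ("Candidate X", 9), ("Candidate X", 8), ("Candidate X", 10),
        ("Candidate Y", 4), ("Candidate Y", 6), ("Candidate Y", 9),
        ("Candidate Y", 3), ("Candidate X", 5), ("Candidate Y", 10),
    ]

    def topline(cutoff):
        """Candidate X's share among respondents at or above the cutoff."""
        voters = [pref for pref, score in respondents if score >= cutoff]
        return sum(1 for pref in voters if pref == "Candidate X") / len(voters)

    print(f"Loose screen (cutoff 3): X at {topline(3):.0%}")
    print(f"Tight screen (cutoff 8): X at {topline(8):.0%}")

Same interviews, different screen, different horse race.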

Next, the subject of “weighting” was addressed by a couple of people. This, in my opinion, is where we really get into the “man behind the curtain” mystery ride. The people I spoke with were pretty much in agreement that weighting is done primarily as a matter of experience in the field, and that it happens on an ad hoc basis. For example, young people are harder to get a full interview with than seniors, so pollsters typically wind up with fewer responses from them. If the number of responses strays too far from the usual turnout numbers, they “weight” the results to increase the influence of younger voters and decrease that of seniors. The precise numbers for that weighting don’t come from any handbook or specific formula… you just have to know how to do it.
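For the curious, here’s a bare-bones sketch of what that weighting amounts to. The turnout shares are numbers I assumed for illustration – as the pollsters said, the real ones come from experience, not a handbook:

    # Scale each age group's responses so its share of the weighted
    # sample matches an assumed turnout share. All figures are assumed.
    turnout_share = {"18-29": 0.15, "30-64": 0.55, "65+": 0.30}
    responses = {"18-29": 80, "30-64": 520, "65+": 400}  # interviews completed

    total = sum(responses.values())
    for group, count in responses.items():
        sample_share = count / total
        weight = turnout_share[group] / sample_share
        print(f"{group}: sample {sample_share:.0%}, target "
              f"{turnout_share[group]:.0%}, weight {weight:.2f}")

With those numbers, the under-sampled 18-29 group gets a weight above 1 and the over-sampled seniors get a weight below 1 – exactly the adjustment described above.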

The only thing everyone seemed to agree on was that no legitimate pollsters are influenced by the media, partisan bias, or money. (The term “legitimate” in this case is meant to exclude campaign push polls and marketing calls.) They’re just producing data based on research, and they all seem to feel that their results have tracked the actual outcomes pretty closely over the years, with the notable exceptions of 1980 and 2000 and, to a lesser degree, 1996.

As usual, we leave it up to you to judge these explanations.
