Many of us have puzzled over the latest results from Gallup’s regular generic Congressional ballot survey. Most pollsters — including Gallup — have shown the race tied or Republicans in the lead for most of 2010. Suddenly this week, Gallup showed a big Democratic gain in the gap between the two parties, even as it also showed Republican enthusiasm peaking. How did that happen?
It turns out that Gallup may have mixed its sample types without acknowledging the difference. Red State calls this a lie:
The Republicans lead with a sample of Registered Voters, but the Democrats lead with a sample of Adults. Someone who trusted Gallup’s pretty, but lying, picture would never have noticed. Real Clear Politics noticed, and actually recorded the polls differently. Friends noticed this and alerted me.
It is terribly dishonest for Gallup to string together two different polls as one series, as Gallup does not only in their graphs, but in their write-ups as well. Here’s an example from the July 19 release:
The Democrats’ six-point advantage in Gallup Daily interviewing from July 12-18 represents the first statistically significant lead for that party’s candidates since Gallup began weekly tracking of this measure in March.
Notice, they call the series one measure, even though it’s at least two different kinds of polls with two different kinds of sampling pools. You cannot pretend that a poll of all adults and a poll filtered by registered voters are part of the same series, even if the same questions are asked. That’s Polling 101, and whoever’s responsible for the Gallup release should have known this, and certainly whoever’s responsible for oversight of the Gallup releases would know this.
I’m not sure I’d go so far as to call it a lie, but it’s obviously a mistake. Polls can only be treated as a series if every survey in it uses the same techniques and the same sampling pool. I’ve seen no indication that Gallup mixes samples like this on a regular or even occasional basis, and it may simply have been a slip between two different analysts within Gallup. Pollsters do survey the general adult population and registered voters within the same poll, but they usually report the results separately and with proper annotation, and to my recollection Gallup’s generic Congressional ballot has always been a registered-voter sample.
It serves as a good reminder for analyzing polls: always check the sample. That’s not so much to catch deliberate bias (although that is usually easy to spot in sampling techniques and question structure) as to understand the data and its predictive value. General-population surveys have little predictive value on electoral questions, which is why pollsters generally use registered voters, or better yet likely voters, the sample that usually best predicts actual behavior in an election.
This should have alerted Gallup that something was wrong in its report:
Simultaneous with increased support for Democratic congressional candidates, Gallup polling last week found Republican voters expressing significantly more enthusiasm about voting in the 2010 midterms. The 51% of Republicans saying they are “very enthusiastic” about voting this fall is up from 40% the week prior, and is the highest since early April — shortly after passage of healthcare reform. Democratic enthusiasm is unchanged, at 28%.
So Democratic enthusiasm was unchanged, Republican enthusiasm shot upwards … and somehow Democrats got a five-point boost? That’s a big red flag right there, and it’s one that should have had Gallup reviewing its conclusions.
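As a rough sanity check on swings like this, the standard margin-of-error formula shows how large a week-to-week shift has to be before it rises above ordinary sampling noise. A minimal sketch — the sample size here is purely illustrative, since Gallup’s actual weekly n isn’t stated in the release:

```python
import math

def margin_of_error(p, n, z=1.96):
    """Half-width of a 95% confidence interval for a proportion p with sample size n."""
    return z * math.sqrt(p * (1 - p) / n)

# Hypothetical weekly sample size, for illustration only.
n = 1500

# Worst-case (p = 0.5) margin for a single poll.
moe = margin_of_error(0.5, n)
print(f"MOE for one poll: +/- {moe * 100:.1f} points")  # → about +/- 2.5 points

# Comparing two independent weekly polls, the noise on the change
# is larger by a factor of sqrt(2).
moe_shift = math.sqrt(2) * moe
print(f"MOE on a week-to-week shift: +/- {moe_shift * 100:.1f} points")  # → about +/- 3.6 points
```

Under that assumed sample size, a five-point week-to-week move would sit just outside the noise band — big enough that an analyst should stop and check whether anything else changed, such as the sampling pool.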
Update: And the mystery continues. At the same Gallup link, the editors say that the sample was misreported as general population and really was registered voters all along:
Editor’s note: The original version of this story inadvertently referred to national adults rather than registered voters in the survey methods statement. The results reported here and in all Gallup generic ballot trends so far this year are based on registered voters; the survey methods statement now correctly reflects that.
So we still have no explanation for the counterintuitive swing in the polling results — and we’ll all await next week’s survey to see whether this is an outlier.