RCP poll averages predicted 49 of 50 states

One last 2012 poll post for old times’ sake before we start in bright and early tomorrow morning with 2016 polls. (Joking, joking.) There was one basic meta-question in the Sturm und Drang over Nate Silver and statistical models: Were the state polls showing Obama sweeping the Midwest correct, or were the national polls, some of which had Romney ahead even at the very end, right in predicting a photo finish? Now we know. Sean Trende thoughtfully weighed the case for both sides a few days ago. My assumption, like many other people’s, was that O couldn’t duplicate the turnout he generated in 2008. All the polls showing, say, a D+6 advantage simply had to be wrong, as that was way too close to the D+7 he grabbed back then. There was too much stacked against him this time — unemployment near eight percent, liberal disillusionment with the pace of “progress,” a fervently negative campaign that demolished his Hopenchange brand, etc. Plus, Republicans couldn’t wait to get to the polls to beat him, and had even come to genuinely like Romney as he campaigned down the stretch. Logically, I thought, the best O could do was maybe D+3, and probably it’d be closer than that. Actual result, per the national exit poll: D+6. That’s why last night is so alarming. 2008 could be dismissed as a fluke, but last night smells more like realignment. He essentially duplicated the results of his first “experiment,” which, in science, means we have validation of a hypothesis. Hypothesis: If Democrats can muster that sort of advantage even in the worst of times (albeit with heavy GOTV help from Obama’s superb organization), then the GOP’s starting each election in the near term in a hole.

So some of the national polls like Gallup and Rasmussen were wrong and the state polls were right. See for yourself by checking the final averages of the toss-up states on RCP’s election page. They accurately predicted the winner everywhere except Florida, and in fact frequently underestimated Obama’s final margin. (In other words, the polls were slightly skewed for, er, Romney.) If the state polls were right, then the models based on state polls were also bound to be right, which meant that Silver’s probability model was right for all 50 states — as was the model developed by Stanford prof Simon Jackman for HuffPo, as was the Votamatic model developed by Drew Linzer. Jackman’s model, in fact, nailed the margin in Florida, and Linzer predicted flat out the morning of the election that Obama would net 332 electoral votes, which looks to be spot on. Impressive stuff, but not surprising given their baseline: They were modeling based on the state polls, so if the state polls looked good for O, then so would the model. For that reason, I never understood the ferocious antagonism online lately toward Silver. He wasn’t the one churning out the data showing an Obama win, the state pollsters were. Says Linzer:
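For the curious, the core of a state-poll-based forecast really is that simple: average each state’s recent polls, give the state’s electoral votes to whoever leads the average, and add them up. Here’s a minimal sketch of that logic — the poll numbers below are made-up placeholders for illustration, not the actual 2012 RCP averages, and the real models (Silver’s, Jackman’s, Linzer’s) layer weighting, house-effect corrections, and simulation on top of this.

```python
# Minimal sketch of a state-poll-based forecast.
# Poll figures are illustrative placeholders, NOT real 2012 averages.

STATE_POLLS = {
    # state: (electoral votes, [(Obama %, Romney %) for each recent poll])
    "Ohio":     (18, [(50, 47), (49, 48), (50, 48)]),
    "Florida":  (29, [(48, 49), (49, 49), (47, 50)]),
    "Virginia": (13, [(49, 48), (48, 48), (50, 47)]),
}

def poll_average(polls):
    """Simple unweighted average of each candidate's share."""
    obama = sum(p[0] for p in polls) / len(polls)
    romney = sum(p[1] for p in polls) / len(polls)
    return obama, romney

def forecast(state_polls):
    """Award each state's electoral votes to the poll-average leader."""
    totals = {"Obama": 0, "Romney": 0}
    for state, (ev, polls) in state_polls.items():
        obama, romney = poll_average(polls)
        winner = "Obama" if obama >= romney else "Romney"
        totals[winner] += ev
    return totals

print(forecast(STATE_POLLS))  # electoral-vote totals for these three states
```

Run on these toy numbers, the leader of each state’s average gets its electoral votes; the point is that once the state polls lean one way, any aggregation of them will too.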

I’ll add, though, that on the eve of the election, averaging the polls, or running them through any sort of sensible model, isn’t all that hard. We are all using the same data (more or less) and so it doesn’t surprise me that we’re all reaching similar conclusions. The real challenge is producing meaningful and accurate forecasts early in the campaign. My model is designed to be robust to short-term fluctuations in the polls, and converge in a stable and gradual manner to the final, Election Day estimates. It appears that in this regard, the model has worked as intended.

But from a broader perspective, my model has been predicting that Obama will win 332 electoral votes – give or take – since June. If all of us are correct today, the next question to ask is when each model arrived at the ultimate outcome.
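Linzer’s design goal — a forecast that shrugs off short-term poll swings early on but converges to the final polling by Election Day — can be sketched with something as simple as a weighted running average whose weight on new polls grows as the election nears. To be clear, this is a stand-in illustration of the idea, not Linzer’s actual model (which is a Bayesian combination of polls and structural “fundamentals”), and the 30-day constant is an arbitrary choice for the sketch.

```python
# Sketch of a noise-damping forecast: a running estimate of Obama's margin
# that weights each new poll by how close to Election Day it was taken.
# Early outliers barely move the estimate; late polls dominate it.
# Illustrative only -- not Drew Linzer's actual Votamatic model.

def smoothed_forecast(margins, days_out):
    """margins: chronological list of polled Obama margins (in points).
    days_out: days before the election each poll was taken."""
    estimate = margins[0]
    for margin, d in zip(margins[1:], days_out[1:]):
        alpha = 1.0 / (1.0 + d / 30.0)  # weight rises as d -> 0
        estimate = (1 - alpha) * estimate + alpha * margin
    return estimate

# A wild poll 100 days out nudges the estimate; a poll on Election Day
# (d = 0, alpha = 1) sets it outright.
print(smoothed_forecast([2.0, 10.0], [120, 100]))
print(smoothed_forecast([2.0, 6.0], [120, 0]))
```

The design choice is the point: a model like this is stable in June precisely because it refuses to chase each week’s polls, which is why an early call from it carries weight.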

Note that last paragraph. If you thought there was poll angst this time, imagine what it’ll look like in two years when Linzer or Jackman or Silver starts showing a victory for one side or the other in the midterms four months out from Election Day and that side’s supporters have to confront this track record. If you’re a baseball fan and think the sport’s too top-heavy with sabermetric analysis, wait until political junkies fully absorb the Jackman/Silver/Linzer record this time. You’ll need a statistics degree to follow elections soon.

Oh, and because I know you’re curious: The single most accurate pollster of the cycle, according to Fordham University’s polisci department, was PPP, a.k.a. Kos’s pollster. Out of 28 polling orgs, Rasmussen and Gallup tied for 24th.

Update: Good point from the comments about the sabermetrizing of politics:

If Silver or Tom Tango or Bill James or Keith Law projects a 3.2 fWAR, .240/25/102 slash line, .810 OPS and 130 OPS+ for, say, Ryan Braun, that’s not going to make his real life numbers suddenly decline. He’s not going to despair over getting bad projections from Fangraphs.

But if Linzer is accurate this year 4 months out, and he calls it for Cuomo/Hillary/Warren/Booker in June of 2016, it could have a depressing effect on GOP GOTV efforts. The lines of causality go both ways.
