As the unfavorable national environment began to close in around them, a cottage industry sprang up on the left propagating the notion that the polls were skewed against Democrats. Republicans recognized this argument; it was a familiar one that many of them deployed in 2012 when the polls began to turn against Mitt Romney. It did not serve Republicans well then, but many on the left did not appear to have learned from conservatives' experience.

In The New York Times, Nate Cohn performed a thorough dissection of the pre-election surveys and determined that Democrats were being systematically under-sampled. His was a modest analysis which suggested that the problems facing the polling industry as a whole had manifested in the discounting of a large portion of the Democratic electorate. While Cohn noted that his was certainly not a prediction, he concluded by noting that “the sources of potential bias will tend to increase the likelihood that the polls overstate the standing of Republicans.”

This sounded to Republicans a whole lot like the “unskewing” craze that overtook conservative punditry in 2012. The Washington Post’s Plum Line blogger Paul Waldman insisted that Democrats were not repeating the unskewing of 2012 because they were simply not as invested in the practice as Republicans were that year.

“What Democrats are doing is arguing that whatever the polls now say, they’ve got a great turnout operation this year, and that’ll make a big difference come election day,” he wrote.

Just days later, Waldman contradicted himself when he noted that the polls just might not be sampling the correct electorate in order to show Democrats winning.

A poll trying to capture the notoriously under-sampled Latino vote in Colorado shows Mark Udall leading Cory Gardner by a point. The question remains: Are most of the polls getting the Latino vote wrong?

It turns out the polls were wrong, and one key group of voters was being under-sampled: Republicans.

“For much of this election cycle, Democrats complained the polls were biased against them,” FiveThirtyEight’s Nate Silver wrote on Wednesday. “Based on results as reported through early Wednesday morning — I’ll detail our method for calculating this in a moment — the average Senate poll conducted in the final three weeks of this year’s campaign overestimated the Democrat’s performance by 4 percentage points. The average gubernatorial poll was just as bad, also overestimating the Democrat’s performance by 4 points.”

This type of error is not unprecedented — instead it’s rather common. As I mentioned, a similar error occurred in 1994, 1998, 2002, 2006 and 2012. It’s been about as likely as not, historically. That the polls had relatively little bias in a number of recent election years — including 2004, 2008 and 2010 — may have lulled some analysts into a false sense of security about the polls.

Interestingly, this year’s polls were not especially inaccurate. Between gubernatorial and Senate races, the average poll missed the final result by an average of about 5 percentage points — well in line with the recent average. The problem is that almost all of the misses were in the same direction. That reduces the benefit of aggregating or averaging different polls together. It’s crucially important for psephologists to recognize that the error in polls is often correlated. It’s correlated both within states (literally every nonpartisan poll called the Maryland governor’s race wrong, for example) and amongst them (misses often do come in the same direction in most or all close races across the country).

This is something we’ve studied a lot in constructing the FiveThirtyEight model, and it’s something we’ll take another look at before 2016. It may be that pollster “herding” — the tendency of polls to mirror one another’s results rather than being independent — has become a more pronounced problem. Polling aggregators, including FiveThirtyEight, may be contributing to it. A fly-by-night pollster using a dubious methodology can look up the FiveThirtyEight or Upshot or HuffPost Pollster or Real Clear Politics polling consensus and tweak their assumptions so as to match it — but sometimes the polling consensus is wrong.
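Silver's point about correlated error is worth unpacking: averaging many polls cancels out independent, poll-specific noise, but a bias shared by every poll survives the average intact. A minimal simulation sketches the effect (the magnitudes here are illustrative assumptions, not actual polling data):

```python
import random

random.seed(0)

def average_error(n_polls, shared_bias_sd, noise_sd, trials=2000):
    """Mean absolute error of the average of n_polls, where every poll
    shares one common bias term (correlated error) plus its own
    independent sampling noise."""
    total = 0.0
    for _ in range(trials):
        bias = random.gauss(0, shared_bias_sd)  # hits every poll identically
        polls = [bias + random.gauss(0, noise_sd) for _ in range(n_polls)]
        total += abs(sum(polls) / n_polls)
    return total / trials

# With purely independent noise, averaging ten polls shrinks the error;
# add a shared bias of similar size and the averaged error barely improves.
independent_only = average_error(10, shared_bias_sd=0.0, noise_sd=4.0)
with_shared_bias = average_error(10, shared_bias_sd=4.0, noise_sd=4.0)
```

Under these assumed numbers, the average of ten independent polls lands close to the truth, while the average of ten polls sharing a common bias misses by roughly the size of that bias, which is why a uniform miss across nearly every race defeats aggregation.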

There is a bit of cosmic justice in the fact that the left is experiencing the pain associated with the collapse of self-delusion today in nearly the precise fashion that conservatives did in 2012. But the polling industry’s inconsistency virtually ensures that the process of “unskewing,” or parsing a poll’s assumptions in order to view it with a healthy skepticism, will continue.