Here’s an example of what I mean by that. Suppose that in a certain election, the Republican really leads by 8 percentage points. The conventional wisdom, for reasons that aren’t entirely clear, wrongly insists that the race is a tie. The pollsters mostly stick to their guns, but they compromise by publishing a poll showing the Republican with a 6-point lead instead. Under conditions like these, the conventional wisdom will pull the polls slightly in the wrong direction. So if you think the conventional wisdom is worthless, you should guess that polls will err in the opposite direction of what the conventional wisdom expects.
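One way to picture this pull is as a weighted average between what the data show and what the conventional wisdom expects. (The herding weight here is purely illustrative; no real pollster publishes such a parameter.)

```python
def published_margin(raw_margin, conventional_wisdom, herding_weight):
    """Blend a poll's raw margin toward the conventional wisdom.

    herding_weight is the share of the published number that comes from
    the conventional wisdom rather than the data -- an illustrative
    parameter, not a real pollster's methodology.
    """
    return (1 - herding_weight) * raw_margin + herding_weight * conventional_wisdom

# The scenario above: the data show R+8, the conventional wisdom says a tie (0).
# A 25 percent herding weight yields the published R+6.
print(published_margin(8, 0, 0.25))  # 6.0
```

With a zero herding weight the pollster just publishes the data; the larger the weight, the more the published number drifts toward the conventional wisdom, and the further it lands from the truth whenever the conventional wisdom is wrong.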
What was that about the pollster being influenced by the conventional wisdom? Aren’t pollsters supposed to be objective? Well, yes, they’re supposed to be. And the best pollsters trust their data even when it comes to an unpopular conclusion. (By “unpopular,” I mean a conclusion that differs from what journalists and other elites expect.) But pollsters also have a lot of choices to make about which turnout model to use, how to conduct demographic weighting, what to do with undecided voters, and so forth. These choices can make more difference than you might think. An exercise conducted by The New York Times’s Upshot blog last year gave several pollsters the same raw data from a poll of Florida and found that they came to varied conclusions, showing everything from a 4-point lead for Clinton to a 1-point lead for Trump.
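To see how one such choice alone can move the number, here is a toy sketch of allocating undecided voters. (The raw percentages are hypothetical, not the Upshot’s actual Florida data.)

```python
def margin(dem, rep, undecided, undecided_dem_share):
    """Final Dem-minus-Rep margin after allocating undecided voters.

    undecided_dem_share is the fraction of undecideds assumed to
    break toward the Democrat -- a modeling assumption, not data.
    """
    final_dem = dem + undecided * undecided_dem_share
    final_rep = rep + undecided * (1 - undecided_dem_share)
    return final_dem - final_rep

# Hypothetical raw numbers: Dem 44, Rep 43, 13 percent undecided.
# Splitting undecideds evenly keeps the race at D+1 ...
print(round(margin(44, 43, 13, 0.5), 1))  # 1.0
# ... but assuming only 40 percent break Democratic flips it to R+1.6.
print(round(margin(44, 43, 13, 0.4), 1))  # -1.6
```

Turnout models and demographic weighting work the same way: each is a defensible assumption applied to identical raw data, and stacking a few of them can swing the published margin by several points.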
Now suppose you’re conducting a poll of France. It’s a tricky election — lots of voters are undecided or say they’ll abstain, and neither candidate is from one of France’s traditional major parties. With one reasonable set of assumptions, you might show Macron ahead by 23 percentage points. With another, he might be up by 30 points. Which set of numbers are you going to publish?