What should you do about these seeming outliers? If you’re a pollster, you should follow Monmouth’s lead and publish them! In fact, printing the occasional expectations-defying result is a sign that a pollster is doing good, honest work. Plus, sometimes those “outliers” turn out to be right. Ann Selzer’s final poll of Iowa’s U.S. Senate race in 2014, which showed Republican Joni Ernst ahead by 7 percentage points over her Democratic opponent, might have looked like an outlier at the time, but it was the only poll that came close to Ernst’s eventual 8.5-point margin of victory there. The small handful of polls that showed Donald Trump leading in Pennsylvania in 2016 look pretty good too, even though most Pennsylvania polls had Hillary Clinton leading.
In the long run, failing to publish results that pollsters presume to be outliers can yield far more embarrassment for the industry than the occasional funky-looking set of topline numbers. Suppressing outliers is a form of herding, a practice in which pollsters are influenced by other polls and strive to keep their results within a narrow consensus. Herding makes polling averages less accurate, and it makes polling less objective. And more often than you’d think, it winds up being a case of the blind leading the blind. One recent example comes from Australia, where, even though the Labor Party held only a narrow and tenuous lead, pollsters declined to publish polls that showed the conservatives narrowly ahead. The conservatives went on to a modest win, yielding a national controversy about polling that could have been avoided if the pollsters had trusted their numbers instead of the conventional wisdom.
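Why herding hurts accuracy is easy to see in a toy Monte Carlo sketch (my own illustration, not from any pollster’s actual methodology): if each poll is an unbiased but noisy read on the true margin, averaging many independent polls washes out the noise. But if pollsters quietly suppress and re-field any result that strays too far from the running consensus, the whole average gets anchored to whatever the earliest polls happened to say. The parameters here (a true margin of −1, a 3-point poll error, a 2-point “acceptable” window) are made-up numbers chosen only to make the effect visible.

```python
import random
import statistics

TRUE_MARGIN = -1.0  # hypothetical true race margin (conservatives +1)
SIGMA = 3.0         # sampling error of a single poll, in points

def run_polls(n_polls, herd=False, window=2.0):
    """Simulate publishing n_polls results. With herding, any result more
    than `window` points from the running consensus is thrown out and the
    poll is re-fielded until it looks 'normal'."""
    published = []
    for _ in range(n_polls):
        result = random.gauss(TRUE_MARGIN, SIGMA)
        if herd and published:
            consensus = statistics.mean(published)
            while abs(result - consensus) > window:
                # suppress the "outlier" and draw again
                result = random.gauss(TRUE_MARGIN, SIGMA)
        published.append(result)
    return statistics.mean(published)

def avg_error(herd, trials=500):
    """Average absolute error of the polling average over many simulated races."""
    return statistics.mean(
        abs(run_polls(20, herd=herd) - TRUE_MARGIN) for _ in range(trials)
    )

random.seed(0)
honest_err = avg_error(herd=False)
random.seed(0)
herded_err = avg_error(herd=True)
print(f"avg error of the polling average, honest polls: {honest_err:.2f}")
print(f"avg error of the polling average, herded polls: {herded_err:.2f}")
```

In this sketch the honest average converges toward the truth as polls accumulate, while the herded average stays stuck near the first few (noisy) results, so its typical error is noticeably larger. That is the blind-leading-the-blind dynamic in miniature.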