Many of Rogers' arguments dealt not with forecasting but with issue polling — the question of whether elected leaders should mind the polls in deciding which policies to adopt. But another set of his complaints centered on the intractable difficulties involved in obtaining truly objective information from survey methods at all. By then it was well known that the nature of sampling, the wording of questions, the kinds of answers that respondents were allowed to offer, and the methods of tabulating them were all capable of introducing errors or producing misleading outcomes.
Methods could of course be tweaked and even improved (though it should be noted that Gallup and other pollsters went on to misjudge, by margins wide and small, the elections of 1952, 1968, 1976, 1980, 1996, 2000, 2004, and 2012 — hardly a proud record). At bottom, though, Rogers' critique wasn't methodological. At a philosophical level, he rejected the very idea that public opinion was measurable in the concrete way that the pollsters alleged. Public opinion was too inchoate to lend itself to precise measurement, even when the instruments were refined with open-ended questions, scales of intensity, and the other methodological adjustments introduced over the years. Public opinion, he said, wasn't like distance or mass or other scientifically measurable phenomena; it had no freestanding existence apart from the operation of measuring it. Polling thus pretended to quantify the unquantifiable. Like others in the increasingly data-driven social sciences, Rogers charged, the public opinion analysts were following the false gods of methodology. Properly understanding the public required not pseudo-scientific methods but human insight.