“We never have anything approaching perfect measurement for anything, and we always have to make decisions with imperfect information,” Bhattacharya said. “The problem with the COVID-19 policy response is that we are making difficult-to-reverse policy decisions with momentous consequences in the face of uncertainty, without having made the attempt to resolve that uncertainty.”

This uncertainty is part of what has paralyzed America. State and federal policymakers have relied heavily on epidemiological models to predict the course of the disease, information they need in order to tell businesses when they can expect to reopen. Daniel Crouch, a postdoctoral researcher in genetics and statistics at Oxford, told the Free Beacon that such models involve plugging in estimates of the disease’s prevalence and “reproduction rate,” the average number of people each infected person goes on to infect, then accounting for how herd immunity and public policy affect spread.
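
For illustration, here is a minimal sketch of the kind of projection Crouch describes, assuming a basic SIR (susceptible-infected-recovered) compartmental model in Python. Every parameter value is a hypothetical placeholder, not an estimate from any actual model:

    # Minimal SIR sketch; all parameter values are hypothetical placeholders.
    N = 330_000_000            # assumed population, roughly the U.S.
    beta, gamma = 0.3, 0.1     # assumed daily transmission and recovery rates
    r0 = beta / gamma          # reproduction number: new infections per case

    S, I, R = N - 1_000, 1_000.0, 0.0   # assumed starting prevalence: 1,000 cases
    peak_infected, peak_day = I, 0
    for day in range(1, 366):           # simulate one year in daily steps
        new_infections = beta * S * I / N   # slows as S shrinks (herd immunity)
        recoveries = gamma * I
        S -= new_infections
        I += new_infections - recoveries
        R += recoveries
        if I > peak_infected:
            peak_infected, peak_day = I, day
        # A policy intervention such as a lockdown would be modeled here
        # by temporarily lowering beta.

    print(f"R0 = {r0:.1f}; infections peak around day {peak_day}")

Lowering beta to mimic an intervention delays and flattens the simulated peak, which is why outputs like these feed directly into reopening timelines.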

Such models, however, hinge on variables like the share of patients who are asymptomatic and the share who have already recovered—all of which are unmeasurable with current data. As Harvard economist James Stock noted in a paper released Monday, the nonrandomized approach to testing means that “surprisingly little data exists” to fill in these variables, leaving modelers to take shots in the dark.
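
To see how much those unfilled variables matter, consider a hedged back-of-the-envelope sketch: the same death count implies very different infection fatality rates depending on the assumed share of infections that go undetected. The figures below are hypothetical, chosen only to show the sensitivity, not drawn from Stock’s paper:

    # Hypothetical figures; the point is the spread of outcomes, not the values.
    confirmed_cases = 750_000
    deaths = 40_000
    for undetected_share in (0.0, 0.50, 0.90, 0.95):
        true_infections = confirmed_cases / (1 - undetected_share)
        ifr = deaths / true_infections    # implied infection fatality rate
        print(f"undetected {undetected_share:.0%} -> implied IFR {ifr:.2%}")

Under these placeholder numbers, the implied fatality rate swings roughly twentyfold, from about 5.3 percent to about 0.3 percent, on the strength of a single unmeasured assumption.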