OpinionJournal publishes the best deconstruction yet of the Johns Hopkins Lancet study on Iraqi mortality due to the war. It is not based on denial or wishful thinking, but on the thing most glaringly absent from the study: science.

The group–associated with the Johns Hopkins Bloomberg School of Public Health–employed cluster sampling for in-person interviews, which is the methodology that I and most researchers use in developing countries. Here in the U.S., opinion surveys are often conducted by telephone, with individuals selected at random. But in a country with little telephone penetration, interviewing must be done in person: neighborhoods are selected at random, and then individuals are selected at random in “clusters” within each neighborhood for door-to-door interviews. Without cluster sampling, the expense and time associated with travel would make in-person interviewing virtually impossible.
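
To make the mechanics concrete, here's a minimal sketch of two-stage cluster sampling in Python. The sampling frame, the cluster count, and the interviews-per-cluster figure below are made-up illustrative numbers, not the parameters of any of the surveys discussed.

```python
import random

def cluster_sample(frame, n_clusters, interviews_per_cluster, seed=0):
    """Two-stage cluster sample: pick neighborhoods at random (stage 1),
    then pick households at random within each chosen neighborhood (stage 2)."""
    rng = random.Random(seed)
    # Stage 1: choose the cluster points (neighborhoods).
    chosen_neighborhoods = rng.sample(list(frame), n_clusters)
    sample = []
    for neighborhood in chosen_neighborhoods:
        # Stage 2: choose households to interview within each cluster point.
        sample.extend(rng.sample(frame[neighborhood], interviews_per_cluster))
    return sample

# Toy sampling frame: 200 neighborhoods of 500 households each (illustrative only).
frame = {f"nbhd_{i}": [f"nbhd_{i}_hh_{j}" for j in range(500)] for i in range(200)}
interviews = cluster_sample(frame, n_clusters=50, interviews_per_cluster=10)
print(len(interviews))  # 500 interviews drawn from 50 cluster points
```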

However, the key to the validity of cluster sampling is to use enough cluster points. In their 2006 report, “Mortality after the 2003 invasion of Iraq: a cross-sectional cluster sample survey,” the Johns Hopkins team says it used 47 cluster points for its sample of 1,849 interviews. This is astonishing: I wouldn’t survey a junior high school, much less an entire country, using only 47 cluster points.

Neither would anyone else. For its 2004 survey of Iraq, the United Nations Development Program (UNDP) used 2,200 cluster points of 10 interviews each for a total sample of 21,688. True, interviews are expensive, and not everyone has the U.N.’s bank account. But even for a sample the size of the Johns Hopkins team’s, 47 is an extraordinarily small number of cluster points. A 2005 survey conducted by ABC News, Time magazine, the BBC, NHK and Der Spiegel used 135 cluster points with a sample size of 1,711–almost three times as many cluster points as the Johns Hopkins team used, for 93% of its sample size.
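
The per-cluster-point arithmetic behind those comparisons, using only the figures quoted above, looks like this (a quick back-of-the-envelope check, not anything taken from the studies themselves):

```python
# Interviews per cluster point, using the figures cited above.
surveys = {
    "Johns Hopkins (2006)": (1_849, 47),
    "UNDP (2004)": (21_688, 2_200),
    "ABC/Time/BBC/NHK/Der Spiegel (2005)": (1_711, 135),
}
for name, (interviews, clusters) in surveys.items():
    print(f"{name}: {interviews / clusters:.1f} interviews per cluster point")
# Johns Hopkins (2006): 39.3 interviews per cluster point
# UNDP (2004): 9.9 interviews per cluster point
# ABC/Time/BBC/NHK/Der Spiegel (2005): 12.7 interviews per cluster point
```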

What happens when you don’t use enough cluster points in a survey? You get crazy results when they’re compared with a known quantity, or with a survey that used more cluster points. There was a perfect example of this two years ago. The UNDP survey, conducted in April and May 2004, estimated between 18,000 and 29,000 Iraqi civilian deaths due to the war. It was fielded four months before the Johns Hopkins team’s earlier study, which used just 33 cluster points and estimated between 69,000 and 155,000 civilian deaths–four to five times as high as the UNDP figures, even though the UNDP survey used 66 times as many cluster points.
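
Survey statisticians usually quantify this with the design effect, DEFF ≈ 1 + (m − 1) × ICC, where m is the average number of interviews per cluster point and ICC is the intracluster correlation (how alike respondents within the same cluster are). Holding the total sample fixed, fewer cluster points means larger clusters and a smaller effective sample. The sketch below uses an assumed ICC of 0.05 purely for illustration; the studies’ actual intracluster correlations aren’t given here, and for something as geographically concentrated as war deaths the true value could well be higher.

```python
def effective_sample_size(n_total, n_clusters, icc):
    """Approximate effective sample size of a cluster sample via the
    textbook design effect: DEFF = 1 + (m - 1) * ICC, where m is the
    average number of interviews per cluster point."""
    m = n_total / n_clusters
    deff = 1 + (m - 1) * icc
    return n_total / deff

ICC = 0.05  # assumed, illustrative value only
print(round(effective_sample_size(1_849, 47, ICC)))   # ~634 effective interviews
print(round(effective_sample_size(1_711, 135, ICC)))  # ~1080 effective interviews
```

On that assumed ICC, the survey with fewer cluster points buys noticeably less statistical precision per interview, which is exactly the failure mode described above.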

The 2004 survey by the Johns Hopkins group was itself methodologically suspect–and the one they just published even more so.

Read the whole thing. It’s worth your time.

Johns Hopkins is a very fine school, known for exceptional prowess in three fields: medicine (through what may be the world’s best hospital, Johns Hopkins Hospital in Baltimore), astronomy (it’s home to the Hubble Space Telescope’s science operations on earth and employs some of the best researchers and professors in the field), and political science. But this study by the Bloomberg School of Public Health (its namesake is New York Mayor Michael Bloomberg, a JHU alum who has donated vast sums of money to the university) brings much discredit on such a great institution.

What we’re seeing here is scientific mission creep, and it’s widespread. While the Hubble effort remains free of political taint (full disclosure: I worked there for eight years), in fields from earth science to psychology to public health, left-wingers are making science an advocate for their political and public-policy goals, and they’re tainting the science to do it; the manipulation is by no means limited to JHU. The risk isn’t just that a misinformed public leads us to adopt bad policy based on bad information; it’s that science itself is eventually discredited. That’s the fastest way I know of to stop progress in its tracks, and every thinking person, regardless of politics, should be appalled by it.
