"Follow the science" doesn't work when the science is this bad

I work on a small team of researchers who do what one might call “forensic peer review.” In the standard process for scientific publishing, peer reviewers take a manuscript mostly at face value: They ensure that the study makes sense as it’s described. We do something else: We check everything, ferreting out suspicious patterns in reported digits, statistical impossibilities, inconsistencies between what researchers said they’d do and what they actually did, and plagiarized sentences or paragraphs. And we often find fatal flaws hidden behind a veil of two-dollar words and statistical jargon.

The ivermectin literature has been no exception. Over the past six months, we’ve examined about 30 studies of the drug’s use for treating or preventing COVID-19, focusing on randomized studies, or nonrandomized ones that have been influential, with at least 100 participants. We’ve reached out directly to the authors of these studies to discuss our findings, sometimes engaging in lengthy back-and-forths; when appropriate, we’ve sent messages to the journals in which studies have been published. In our opinion, a bare minimum of five ivermectin papers are either misconceived, inaccurate, or otherwise based on studies that cannot have happened as described. One study has already been withdrawn on the basis of our work; the other four very much should be…

Most problematic of all, the studies we are certain are unreliable happen to be the same ones that show ivermectin as most effective. In general, we’ve found that many of the inconclusive trials appear to have been adequately conducted. Those of reasonable size with spectacular results, implying the miraculous effects that have garnered so much public attention and digital notoriety, have not.