Is a lot of scientific research just... crap?

To be fair, I suppose the use of “crap” in the title might be a bit strong, but if you’re interested in seeing society get the most it can out of scientific research, it’s an important question. What set me off on this particular jag this weekend was a very long and well-assembled piece by Dr. James Joyner at Outside the Beltway. In it, he analyzes some of the findings in a recent Economist article which looked into the number of published scientific papers that apparently weren’t worth the virtual paper they weren’t printed on.


Over the past few years various researchers have made systematic attempts to replicate some of the more widely cited priming experiments. Many of these replications have failed. In April, for instance, a paper in PLoS ONE, a journal, reported that nine separate experiments had not managed to reproduce the results of a famous study from 1998 purporting to show that thinking about a professor before taking an intelligence test leads to a higher score than imagining a football hooligan.

A few years ago scientists at Amgen, an American drug company, tried to replicate 53 studies that they considered landmarks in the basic science of cancer, often co-operating closely with the original researchers to ensure that their experimental technique matched the one used first time round. According to a piece they wrote last year in Nature, a leading scientific journal, they were able to reproduce the original results in just six. Months earlier Florian Prinz and his colleagues at Bayer HealthCare, a German pharmaceutical giant, reported in Nature Reviews Drug Discovery, a sister journal, that they had successfully reproduced the published results in just a quarter of 67 seminal studies.

I suppose the question here isn’t so much how so many respected scientists can get something wrong (who doesn’t make mistakes from time to time?) as how the errors make it into mainstream publication and acceptance, and then persist for ages. Dr. Joyner has some experience in the area of analytical statistics and offers some sensible answers.


The use of statistics to make academic research, even in “soft” fields like psychology and political science, more “scientific” has become the norm over the last half century. Unfortunately, most of us in those fields—and for that matter, most chemists, physicists, and physicians—don’t truly understand the increasingly complicated statistics we’re employing. That is, we roughly understand what they’re supposed to do but not the math behind them. And that makes us oblivious to errors.
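To see how unforgiving that math can be even when it’s applied correctly, here’s a back-of-the-envelope sketch in Python. The numbers are illustrative assumptions on my part, not figures from Joyner or the Economist article: even a field that applies its p < 0.05 threshold flawlessly will publish plenty of false findings if genuinely true hypotheses are rare.

```python
# Back-of-the-envelope sketch of the false discovery rate. All the
# numbers below are illustrative assumptions, not figures from Joyner
# or the Economist article.
hypotheses = 1000    # assumed: hypotheses tested across a field
true_rate = 0.10     # assumed: only 10% of them are actually true
power = 0.80         # assumed: chance a study detects a real effect
alpha = 0.05         # the conventional p < 0.05 threshold

true_effects = hypotheses * true_rate                   # 100 real effects
true_positives = true_effects * power                   # 80 get detected
false_positives = (hypotheses - true_effects) * alpha   # 45 pure flukes

fdr = false_positives / (true_positives + false_positives)
print(f"{false_positives:.0f} of {true_positives + false_positives:.0f} "
      f"'significant' findings are false ({fdr:.0%})")
# Prints: 45 of 125 'significant' findings are false (36%)
```

In other words, even with everyone doing the statistics by the book, roughly a third of the “positive” results under those assumptions are simply wrong, before the errors Joyner describes are even factored in.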

Joyner identifies a few major items where these problems could be alleviated to some degree, if the will existed to do so. Three of them break down as follows:

– The pressure to publish something … anything with your name on it is incredibly intense if you want to advance in your field. This has been a known issue for a long time, giving rise to the Publish or Perish dynamic in academia, and it opens the door to all sorts of errors.

– The perceived need to employ statistical mathematics to support research, particularly in the “soft sciences,” leads to problems when researchers attempt to force-fit rather hazy measurements into the hard discipline of mathematics.

– Too many of the people involved in a variety of areas of research don’t have a full – or in some cases, even a fundamental – grasp of the difficult mathematics required to truly prove a hypothesis. And there is little incentive for those who do understand it to go through the strenuous, time-consuming work of reproducing experiments or thoroughly dissecting their math just to further the career work of somebody else. (A rough sketch of how easily pure noise produces a “significant” result follows this list.)
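As promised above, here’s a minimal simulation of that false-positive machinery at work (my own illustrative sketch, not code from Joyner’s piece). Every “experiment” below compares two groups drawn from the same distribution, so there is nothing real to find, yet roughly one run in twenty clears the conventional p < 0.05 bar anyway.

```python
# Minimal simulation of false positives in null experiments. This is
# an illustrative sketch, not code from Joyner's piece.
import numpy as np
from scipy.stats import ttest_ind

rng = np.random.default_rng(seed=42)
n_experiments = 1000   # hypothetical studies, all run on pure noise
n_per_group = 30       # assumed sample size per group

false_positives = 0
for _ in range(n_experiments):
    # Both groups come from the SAME distribution: no real effect.
    control = rng.normal(loc=0.0, scale=1.0, size=n_per_group)
    treatment = rng.normal(loc=0.0, scale=1.0, size=n_per_group)
    _, p_value = ttest_ind(control, treatment)
    if p_value < 0.05:
        false_positives += 1

print(f"{false_positives} of {n_experiments} no-effect experiments "
      f"still reached p < 0.05 ({false_positives / n_experiments:.1%})")
```

Dozens of chance “discoveries” out of a thousand dead-end experiments may not sound like much, until you remember that the dead ends mostly go unpublished while the flukes get written up.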


The latter two of these problems are highlighted in a story which Ed pointed out to me this morning. It deals with Nick Brown, a man who embarked on what was basically an amateur exploration of psychology in his retirement years. He wound up not only challenging some accepted, published findings in that field, but essentially overturning the opinion of the broader scientific community.

The majority of the cases Joyner discusses deal with fields of hard science which are at least terrestrial in nature and lend themselves to solid testing in the laboratory. None of this gets into the massive bodies of work regularly published in less measurable fields, particularly astrophysics and its nearly science-fictional relatives. One of the hot topics there is the entire question of so-called dark matter and dark energy, just for one example, which has led some scientists to begin asking whether these things are real at all. A lot of this may arise from the question of whether we really even understand what gravity is and how it propagates. (We still have scientists being featured on Science Channel shows who think that gravity may be so weak in comparison to the other three fundamental forces because it’s leaking through to or from other dimensions we can’t perceive.)

The chief argument in favor of the current way of doing things is basically that it will all come out in the wash. Presumably, a significant experimental error, once published, will be exposed as later work attempts to verify or build upon it. But as Joyner notes, very little effort goes into challenging these sorts of findings once they are published, become embedded in the “common knowledge,” and start generating money for people. All of this should give us pause, and it should prompt more people to stand up and speak out when we’re told something that just doesn’t pass the smell test. It may turn out to be valid after all, but it’s always worth asking the question.



