Vox published an interesting story today about the replication crisis in science. If you’re not familiar with the idea, it’s the fact that a significant percentage of the peer-reviewed social science published in the past cannot be replicated by other scientists, which is a pretty clear sign that the claims made in the original papers aren’t true. How bad is the situation? This bad:
In an attempt to test just how rigorous scientific research is, some researchers have undertaken the task of replicating research that’s been published in a whole range of fields. And as more and more of those attempted replications have come back, the results have been striking — it is not uncommon to find that many, many published studies cannot be replicated.
One 2015 attempt to reproduce 100 psychology studies was able to replicate only 39 of them. A big international effort in 2018 to reproduce prominent studies found that 14 of the 28 replicated, and an attempt to replicate studies from top journals Nature and Science found that 13 of the 21 results looked at could be reproduced.
So we’re not talking about a few bad papers; we’re talking about half or more of all peer-reviewed papers being junk that can’t be replicated. In some cases, these bogus papers can spawn a whole cottage industry within social science or psychology that later turns out to have been a waste of time for everyone involved. In 2016, Vox published a story on this topic that included one particular example:
In 1998, psychologists found evidence of a tantalizing theory: We all have a finite mental store of energy for self-control and decision-making. Resisting temptations, or making tough decisions, saps this energy over time…
Over the years, the theory has been tested in hundreds of peer-reviewed studies, with countless stand-ins for the chocolate, radishes, and the quiz. Scientists have shown how diminished willpower can affect our ability to hold on to a handgrip, sap our motivation to help another in need, and even negatively impact athletic performance.
This huge body of research has helped ego depletion, as psychologists call it, and its offshoot decision fatigue, become the basis for best-selling books, TED talks, and countless life hacks. In an age where temptations and decisions pummel us at warp speed, it’s become an empowering concept. If we know how the system works, we can game it: President Obama famously doesn’t pick out his suits, for fear that it might deplete some of his decision-making capabilities.
But the whole theory of ego depletion may be on the brink of collapse.
Slate’s Daniel Engber reports on an upcoming study in the journal Perspectives on Psychological Science that found in a test with more than 2,000 participants across more than 20 labs, “a zero-effect for ego depletion: No sign that the human will works as it’s been described, or that these hundreds of studies amount to very much at all.”
So if you were wondering why Barack Obama ended up wearing that awful tan suit that one time, now you know. It was probably the result of bad social science. Who knows what other terrible decisions were made as a result of this “science.”
Another example of bad social science, one Vox doesn’t mention but which has had a definite negative impact on the real world, is implicit bias testing: the unproven “science” that some groups have suggested should become mandatory training for public officials and that has been discussed with regard to policing for years.
Today’s story at Vox focuses on Alvaro de Menard, who is part of a DARPA project looking at this problem. He says that despite knowing the problem exists, we don’t seem to be making much progress toward improvement:
A recent write-up by Alvaro de Menard, a participant in the Defense Advanced Research Projects Agency’s (DARPA) replication markets project (more on this below), makes the case for a more depressing view: The processes that lead to unreliable research findings are routine, well understood, predictable, and in principle pretty easy to avoid. And yet, he argues, we’re still not improving the quality and rigor of social science research…
“I frequently encounter the notion that after the replication crisis hit there was some sort of great improvement in the social sciences, that people wouldn’t even dream of publishing studies based on 23 undergraduates any more … In reality there has been no discernible improvement,” he writes.
One of the things I found most interesting in this piece is that, according to the evidence, bogus peer-reviewed papers are just as likely to be cited by other scientists as those that can be replicated. It’s almost as if, once a paper is granted peer-reviewed publication, it is above suspicion. That kind of credentialing of science is probably making things worse: once something has a stamp of approval from some journal, it’s treated as serious even when it clearly shouldn’t be, and it winds up skewing meta-studies of the topic.
Vox doesn’t get into this, but much of what it published today could also apply to the so-called “grievance studies” currently fashionable on college campuses. The hoax perpetrated on several journals in 2018 suggests that almost anything can be given the imprimatur of legitimacy in fields like feminist studies. So long as you know the underlying concepts and some of the lingo, you can get published in these journals, even if your underlying text was borrowed from Mein Kampf. And all of that nonsense is spilling out into the real world.