The study was originally published in 2012 and had five authors. The basic thrust of it was this: If people are asked to sign "an honesty declaration" at the top of a form instead of at the bottom, they are more likely to answer honestly. As BuzzFeed points out, a lot of organizations, including the US government, took the findings very seriously:
A seemingly cheap and effective method to fight fraud, it was adopted by at least one insurance company, tested by government agencies around the world, and taught to corporate executives. It made a splash among academics, who cited it in their own research more than 400 times.
Alas, this is another one of those prominent studies that seems to bolster the idea of a replication crisis. Others who tried to replicate the findings with the help of the original authors couldn’t do so.
A few other researchers had been unsuccessfully trying to replicate and build upon some of the experiments, so they joined forces with the five original scientists, including Ariely, to run one of those tests again. This one also involved signatures on the top versus bottom, but on tax return forms and with a bigger group of participants in a lab environment. They failed to see any difference in honesty between the two groups.
The original scientists admitted this was a problem, but initially only two of the five wanted to withdraw the study. Instead, they published an update last year that included some of their original data. That data hadn't been included when the study was first published, and it turned out to be the key to the study's downfall.
Poring through the Microsoft Excel spreadsheet, Simmons and the other data detectives unearthed a series of implausible anomalies that pointed to at least two kinds of fabrication: Many of the baseline, preexperiment mileages appeared to be duplicated and slightly altered, and all the mileages supposedly collected during the forms test looked like they were made up. Much of this data seemed to be produced by a random number generator, they wrote in their Data Colada post.
In the first sign of something amiss, the 13,488 drivers in the study reported equally distributed levels of driving over the period of time covered in the study. In other words, just as many people racked up 500 miles as those who drove 10,000 miles as 40,000-milers. Also, not a single one went over 50,000. This pattern held for all the drivers’ cars (each could report mileage for up to four vehicles).
“This is not what real data look like, and we can’t think of a plausible benign explanation for it,” Simmons, Nelson, and Simonsohn wrote. (Most of the issues, they added, were initially raised by a group of researchers who asked to remain anonymous.)
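That uniformity claim is the easiest red flag to picture in code. Here is a minimal sketch of the kind of check the Data Colada authors describe, assuming a hypothetical claimed_mileage.csv with a single column of reported miles (the file and column are illustrative, not from the study): real odometer data should be right-skewed and occasionally exceed 50,000, while data drawn from a random number generator will look like a near-perfect fit to a uniform distribution.

```python
import numpy as np
from scipy import stats

# Hypothetical file for illustration only: one numeric column of reported miles.
miles = np.loadtxt("claimed_mileage.csv", delimiter=",", skiprows=1)

# Real odometer readings should sometimes exceed 50,000; these never did.
print("max reported miles:", miles.max())

# Kolmogorov-Smirnov test: how well does the sample match a uniform
# distribution on [0, 50,000]? An unusually good fit is a red flag (not
# proof) that the numbers came from a random number generator rather
# than real drivers.
stat, p_value = stats.kstest(miles, stats.uniform(loc=0, scale=50_000).cdf)
print(f"KS statistic = {stat:.3f}, p-value = {p_value:.3f}")
```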
There’s a lot more to it. If you want to go on a deep dive into the reasons the data is bogus, here’s the blog post that revealed it. But the upshot is that everyone involved admits the data is fabricated and can’t possibly be real.
So who is responsible? Well, four of the original authors have said they had nothing to do with gathering the specific data involved. That leaves one author, Dan Ariely (pictured above), who admits he handled the data but claims what was presented in the Excel spreadsheet is exactly what he received from the insurance company. Only, he has told different stories at different times about when that happened.
Asked by BuzzFeed News when the experiment was conducted by the insurance company, he first replied, “I don’t remember if it was 2010 or ’11. One of those things.”
The Excel file that was publicly posted — the file with the original data, according to him and his team — was created by Ariely in February 2011, its metadata shows. But Ariely discussed the study’s results in a July 2008 lecture at Google and wrote an essay about it, though with slightly different results, in 2009 for the Harvard Business Review. That would suggest the data file was created up to three years after the experiment was conducted.
It all looks very sketchy to me. Why would it take three years to put the data together after you’ve already spoken about the results?
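For what it's worth, that kind of file metadata is easy to inspect on any .xlsx workbook. Here's a minimal sketch using openpyxl, with a hypothetical filename, showing how to read the core properties (creator and creation date) that BuzzFeed is referring to:

```python
from openpyxl import load_workbook

# Hypothetical filename; the posted data file is an .xlsx workbook.
wb = load_workbook("original_data.xlsx")

props = wb.properties  # core document properties embedded in the file
print("creator:      ", props.creator)
print("created:      ", props.created)   # the timestamp at issue here
print("last modified:", props.lastModifiedBy, props.modified)
```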
There’s more to the story, but the bottom line is that a well-known study on human honesty and the ability to “nudge” people toward good behavior appears to have been built on lies.