Statcheck: valuable asset or self-appointed data police?
Last modified: May 22, 2017
In 2015 a group of scientists from Tilburg University and the University of Amsterdam (The Netherlands) conducted a study on reporting errors in a sample of over 250,000 p-values reported in eight major psychology journals. For their research the group used a new program called “Statcheck”, an R package that automatically extracts statistical results from papers and recomputes the p-values.
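The core check can be illustrated with a small sketch: recompute the p-value implied by a reported test statistic and compare it with the p-value the authors reported, allowing for rounding. Statcheck itself is an R package that handles several test types (t, F, χ², and others); the Python sketch below is only an illustration of the idea for a z-statistic using the standard library, and the function name and tolerance are assumptions, not Statcheck's actual API.

```python
from statistics import NormalDist

def check_p_value(z, reported_p, tol=0.0015):
    """Illustrative check (not Statcheck's code): recompute a
    two-sided p-value from a z-statistic and compare it with the
    reported value, with a small tolerance for rounding."""
    recomputed = 2 * (1 - NormalDist().cdf(abs(z)))
    consistent = abs(recomputed - reported_p) <= tol
    return recomputed, consistent

# Example: a result reported as "z = 2.20, p = .028"
p, ok = check_p_value(2.20, 0.028)
print(round(p, 3), ok)  # the reported p matches the recomputed one
```

A real checker also has to parse statistics out of free text and decide what counts as a rounding error versus a genuine inconsistency, which is where most of Statcheck's complexity lies.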
Statcheck soon proved to be a valuable asset; in a rather short period of time the researchers managed to demonstrate inconsistencies in a large body of literature (half of the 30,000 papers checked contained statistical errors). Although Statcheck tended to overestimate the prevalence of inconsistencies and could not replace the careful judgment of an expert, the program was quickly embraced by the scientific community as a promising tool for detecting statistical errors.
Two years later Statcheck once again reached the news. This time, however, the reactions were less cheerful. Chris Hartgerink, a young and talented Dutch scientist (Tilburg University), had succeeded in modifying Statcheck to the point that the program could catalogue errors in individual papers. Hartgerink decided to post his individualized Statcheck findings online, causing a stir across the scientific community. Read the Guardian longread The high-tech war on science fraud to learn more about Statcheck and the discussion surrounding it.
Read also this post on Statcheck.