The Guardian published an online article on 28 August 2015 by John Ioannidis (a professor of medicine) in which he strongly criticises the Open Science Collaboration study "Estimating the reproducibility of psychological science" without supporting that criticism with sufficient arguments. The article's title is itself an exaggerated statement: "Psychology experiments are failing the replication test – for good reason."
The author claims that a large portion of replications produced weaker evidence. This is not necessarily a statistical majority: the study's figure of "one-third to one-half" is a considerable leap to "a large portion." The Guardian article has therefore already altered the study's basic conclusion.
The scientific paper states: "Reproducibility is not well understood because the incentives for individual scientists prioritize novelty over replication" (Open Science Collaboration, 2015). The central claim of Ioannidis' argument is that the irreproducibility of experimentation leads to a host of false positives in social-science research, although he does concede that "irreproducibility is rarely an issue of fraud" (Ioannidis, 2015).
Analysing the accuracy and integrity of this media coverage of the scientific research, John Ioannidis' discourse and language are derogatory. His article is not methodologically or scientifically informed, and it does not address fundamental, well-known issues in such studies, for example the use of inappropriate tests, given that the study simulates a perfect testing environment that is impossible to obtain in the social sciences. Moreover, this is not a report on the study but an argumentative piece written to attract attention. From a single, very basic analysis of the study, a proper report cannot be produced. The article is in fact a commentary on an earlier article Ioannidis wrote about the study, and it does not even link to the study itself. He dismisses the replicated studies as less valid than the originals, but the fact that a study was replicated does not mean it was inaccurate. Furthermore, the replications were conducted under near-ideal conditions, and a test that can be replicated in such conditions still yields valuable information. The title is also wrongly formulated: the experiments are not "failing" a test. He argues with his own attention-grabbing headline and ends up contradicting his own arguments.
Are psychology experiments always replicable?
The Open Science Collaboration study "Estimating the reproducibility of psychological science" aims to assess whether replications and the original experiments yielded the same results according to several criteria; the authors find that about one-third to one-half of the original findings were also observed in the replication study. The study was conducted thoroughly and precisely, as the researchers performed "replications of 100 experimental and correlational studies published in three psychology journals using high-powered designs and original materials when available" (Open Science Collaboration, 2015).
No single indicator sufficiently describes replication success, and the five indicators examined by the authors are not the only ways to evaluate reproducibility. The study itself makes this point explicitly: no single indicator can support any significant conclusion (inference) on its own.
Moreover, the scientific paper introduces a telling statistic: "if no bias in original results is assumed, combining original and replication results left 68% with statistically significant effects" (Open Science Collaboration, 2015). If no bias is assumed, however, this combined analysis amounts to a simulation of an idealised experiment, and in that sense the researchers' combined estimates may be more accurate and valid than the individual investigations themselves. It is, in effect, a perfect simulation.
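To make the idea of "combining original and replication results" concrete, the sketch below shows a standard fixed-effect meta-analytic combination of two correlation coefficients. This is an illustrative assumption, not the Open Science Collaboration's actual analysis pipeline; the function name, the example effect sizes, and the sample sizes are all hypothetical.

```python
import math

def combine_effects(r_orig, n_orig, r_rep, n_rep, alpha=0.05):
    """Illustrative fixed-effect combination of an original and a
    replication correlation via Fisher's z-transform with
    inverse-variance weighting (variance of z is 1 / (n - 3))."""
    def fisher_z(r):
        return 0.5 * math.log((1 + r) / (1 - r))

    z1, z2 = fisher_z(r_orig), fisher_z(r_rep)
    w1, w2 = n_orig - 3, n_rep - 3        # inverse variances
    z_pooled = (w1 * z1 + w2 * z2) / (w1 + w2)
    se_pooled = math.sqrt(1.0 / (w1 + w2))
    z_stat = z_pooled / se_pooled
    # two-sided p-value from the standard normal distribution
    p = 2 * (1 - 0.5 * (1 + math.erf(abs(z_stat) / math.sqrt(2))))
    return z_stat, p, p < alpha

# Hypothetical example: a strong original effect and a weaker
# replication can still be jointly significant once pooled.
z, p, significant = combine_effects(0.35, 80, 0.12, 120)
```

This illustrates why, under a no-bias assumption, pooling can leave a majority of effects statistically significant even when the replication alone is weak: the combined sample carries more weight than either study by itself.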
Replication of studies can increase the validity of reproduced findings and promote innovation. Further research is encouraged to verify psychological findings.