Academic journal article Psychonomic Bulletin & Review

The Frequency of Excess Success for Articles in Psychological Science

Article excerpt

Published online: 18 March 2014

© Psychonomic Society, Inc. 2014

Abstract Recent controversies have questioned the quality of scientific practice in the field of psychology, but these concerns are often based on anecdotes and seemingly isolated cases. To gain a broader perspective, this article applies an objective test for excess success to a large set of articles published in the journal Psychological Science between 2009 and 2012. When empirical studies succeed at a rate much higher than is appropriate for the estimated effects and sample sizes, readers should suspect that unsuccessful findings have been suppressed, the experiments or analyses were improper, or the theory does not properly account for the data. In total, problems appeared for 82 % (36 out of 44) of the articles in Psychological Science that had four or more experiments and could be analyzed.

Keywords Statistical inference · Statistics · Probabilistic reasoning

It is widely recognized that a bias exists across articles in the field of psychology (Fanelli, 2010; Sterling, 1959; Sterling, Rosenbaum, & Weinkam, 1995). These studies have noted that approximately 90 % of published experiments are reported to be successful, which suggests that many unsuccessful experiments remained unpublished. However, it is not clear what such a bias means with regard to believing a specific reported experimental finding or theory. Bias across articles may merely reflect a desire among authors and journals to publish about topics that tend to reject the null hypothesis with typical experimental designs, and such a bias does not necessarily cast doubt on the findings or theories within any specific article. When judging the quality of scientific work, a finding of bias within an article is more important than bias across articles, because the presence of bias within an article undermines that article's theoretical conclusions. Recent investigations (Bakker, van Dijk, & Wicherts, 2012; Francis, 2012a, 2012b, 2012c, 2012d, 2012e, 2013a, 2013b; Renkewitz, Fuchs, & Fiedler, 2011; Schimmack, 2012) have used an objective bias analysis to indicate that some articles (or closely related sets of articles) in the field of psychological science appear to be biased. However, such analyses of individual studies do not indicate whether the appearance of bias within an article is rare or common in psychology.

Partly to estimate the within-article bias rate, I have applied the bias analysis to articles published over the last several years of the journal Psychological Science, which is the flagship journal of the Association for Psychological Science, has enormous reach to scientists and journalists, and presents itself as an outlet for only the very best research in the field. Although perhaps the journal is not representative of the field of psychological science in general, it would be valuable to know what proportion of findings (and which specific findings) appear to be biased in a journal that seeks to publish the field's best work. This article summarizes the analyses of the investigated articles from the journal; the article selection criteria and a full description of the analyses (and accompanying computer code) are provided in the Electronic Supplementary Material.

In lay usage, the term "bias" means unfair prejudice, but that is not the intended meaning in this article. Here, the term "bias" is used in a statistical sense: namely, to indicate that the frequency of producing a significant result, or an effect's magnitude, is systematically overestimated. A prejudicial bias by authors may produce a statistical bias, but it is not necessary, because statistical bias can be introduced despite good intentions from researchers. Moreover, it is not necessary to know the exact cause or source of statistical bias for a reader to be skeptical about the published empirical findings and their theoretical conclusions.

Analyzing the probability of experimental success

The bias analysis is based on the "test for excess significance" (TES) proposed by Ioannidis and Trikalinos (2007). …
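The core logic of the TES can be sketched as follows: estimate each experiment's power from its reported effect size and sample sizes, then multiply those powers to obtain the probability that every experiment in the set succeeds; a joint probability below the conventional 0.1 criterion suggests excess success. The sketch below is illustrative only, not the analysis code used in this article: the effect sizes and sample sizes are hypothetical, and it uses a simple normal approximation to the power of a two-sample t-test rather than the exact calculations a full TES would employ.

```python
from math import sqrt
from statistics import NormalDist

def power_two_sample_t(d, n1, n2, alpha=0.05):
    """Approximate power of a two-sided, two-sample t-test for a
    standardized effect size d with per-group sizes n1 and n2,
    using a normal approximation to the test statistic."""
    noncentrality = d * sqrt(n1 * n2 / (n1 + n2))
    z_crit = NormalDist().inv_cdf(1 - alpha / 2)
    # Probability the statistic lands in the upper rejection region;
    # the lower-tail region contributes negligibly when d > 0.
    return NormalDist().cdf(noncentrality - z_crit)

def joint_success_probability(experiments):
    """Probability that every experiment succeeds, assuming independent
    experiments described by (d, n1, n2) tuples."""
    p_all = 1.0
    for d, n1, n2 in experiments:
        p_all *= power_two_sample_t(d, n1, n2)
    return p_all

# Hypothetical four-experiment article: effect size d = 0.5 and
# 20 subjects per group in every experiment (illustrative values only).
p = joint_success_probability([(0.5, 20, 20)] * 4)
# A joint probability below the conventional 0.1 criterion would
# flag the set of experiments as showing excess success.
```

With these illustrative numbers, each experiment's power is only moderate, so observing four successes out of four is improbable under the estimated effects, which is exactly the pattern the TES is designed to detect.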
