Academic journal article The Journal of Parapsychology

The Validity of the Meta-Analytic Method in Addressing the Issue of Psi Replicability


Meta-Analysis and Replication in Psi Research

Replication is critical in demonstrating that a given result is not due to chance or artifact (Lykken, 1968) and, indeed, most traditional philosophies of science list replicability as a requisite for scientific study (Atmanspacher & Jahn, 2003; Godfrey-Smith, 2003). Within psychology, much of the controversy surrounding both the existence of psi and parapsychology's scientific status has centred on a purported lack of repeatable results in psi research (Beloff, 1994; Irwin & Watt, 2007; Milton & Wiseman, 2001). Given this, it seems imperative that parapsychologists seek replicability of psi effects. Parapsychologists are acutely aware of this need and, historically, much energy has been devoted to this end (Utts, 1991).

Meta-analysis has played a prominent role in this goal: it has found application across a range of experimental domains in ESP (e.g., Bem & Honorton, 1994; Bem, Palmer, & Broughton, 2001; Haraldsson, 1993; Honorton, 1985; Honorton & Ferrari, 1989; Honorton et al., 1990; Honorton, Ferrari, & Bem, 1998; Hyman, 1985; Lawrence, 1993; Milton, 1997a; Milton & Wiseman, 1999; Radin, 2005; Sherwood & Roe, 2003; Stanford & Stein, 1994; Steinkamp, Milton, & Morris, 1998; Storm, 2000; Storm & Ertel, 2001; Storm, Tressoldi, & Di Risio, 2010) and PK (Bosch, Steinkamp, & Boller, 2006a; Braud & Schlitz, 1997; Radin, 1997; Radin & Ferrari, 1991; Radin & Nelson, 1989, 2003; Schmidt, Schneider, Utts, & Walach, 2004) research, and its results are held in high esteem (e.g., Palmer, 2003). Storm (2006), for example, describes meta-analysis as a "Godsend for parapsychologists" (p. 37), and one critic has suggested that the arguments for the consistency of ganzfeld results rest solely on meta-analytic evidence (Hyman, 2010). There is no doubt that meta-analysis has played a major role in the ganzfeld debates (Palmer, 2003), and the importance of the technique in other experimental domains appears to be growing.

Given the widespread enthusiasm for meta-analysis, it is of critical importance to enquire into the extent to which the technique yields valid and reliable evidence bearing on the psi replicability question. The present paper describes some of the most pertinent limitations and advantages of meta-analysis in the context of psi research and evaluates the extent to which they have, respectively, undermined and enhanced the technique's contribution to addressing the issue of whether there is replicable evidence for psi.

Meta-analysis is used to obtain a quantitative synthesis of the individual (primary level) studies relevant to a given research question. To a first approximation, the enthusiasm for meta-analysis in addressing psi replicability would appear to be entirely justified. This is because the technique can both summarise the average size of an effect across multiple studies in a single index and provide a rich set of auxiliary statistics pertaining to effect size moderators, confidence intervals, consistency across studies, statistical significance, and indications of the likelihood of results being due to publication bias (Borenstein, Hedges, Higgins, & Rothstein, 2009; Palmer, 2003). Each of these, directly or indirectly, provides a means of evaluating replicability. Meta-analysis, therefore, seems to offer myriad riches when it comes to addressing the question of psi replicability. These sources of evidence are discussed in more detail below.
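The statistics mentioned above can be made concrete with a minimal sketch of a fixed-effect meta-analysis. The effect sizes and standard errors below are invented for illustration only; real syntheses would use the reported statistics of the primary studies:

```python
# Minimal fixed-effect meta-analysis sketch (invented data).
import math

effects = [0.20, 0.15, 0.25, 0.10, 0.18]  # per-study effect sizes (e.g., Cohen's d)
ses     = [0.10, 0.12, 0.09, 0.15, 0.11]  # per-study standard errors

# Inverse-variance weights: more precise studies count for more.
weights = [1 / se**2 for se in ses]
pooled = sum(w * e for w, e in zip(weights, effects)) / sum(weights)
se_pooled = math.sqrt(1 / sum(weights))

# 95% confidence interval for the pooled effect.
ci = (pooled - 1.96 * se_pooled, pooled + 1.96 * se_pooled)

# Cochran's Q: a test of consistency (homogeneity) across studies.
q = sum(w * (e - pooled)**2 for w, e in zip(weights, effects))

print(f"pooled = {pooled:.3f}, 95% CI = ({ci[0]:.3f}, {ci[1]:.3f}), Q = {q:.2f}")
```

The pooled estimate summarises the average effect in a single index, the confidence interval speaks to statistical significance, and Q addresses consistency across studies, mirroring the roles described in the text.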

The most fundamental source of evidence for replicability offered by meta-analysis is a nontrivial effect size abstracted from several occasions of asking the same research question (Rosenthal, 1991). Were effects not replicable, the resulting abundance of null or chance-level results would act to decrease this combined effect size to a negligible magnitude.
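The dilution argument can be sketched numerically. In this toy example (all numbers invented), a handful of positive studies is joined by a larger set of null replications of equal precision, and the inverse-variance weighted pooled effect shrinks accordingly:

```python
# Toy illustration (invented numbers): null results dilute a pooled effect.
def pooled_effect(effects, ses):
    """Inverse-variance weighted mean of study effect sizes."""
    weights = [1 / se**2 for se in ses]
    return sum(w * e for w, e in zip(weights, effects)) / sum(weights)

# Five studies reporting a positive effect, all equally precise.
hits = [0.25] * 5
ses  = [0.10] * 5
print(round(pooled_effect(hits, ses), 4))                    # 0.25

# Add fifteen null replications of the same precision: the combined
# effect size falls to a quarter of its original magnitude.
mixed = hits + [0.0] * 15
print(round(pooled_effect(mixed, ses + [0.10] * 15), 4))     # 0.0625
```

With equal weights the pooled effect is simply the mean, so a preponderance of null outcomes drags it toward zero, which is exactly why a nontrivial combined effect size is evidence against mere non-replication.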

As random errors will cancel out with conglomeration, meta-analysis also overcomes the problem of noise and pseudofailure to replicate at the primary research level when studies are underpowered (Bayarri & Berger, 1991; Broughton, 1991; Rosenthal, 1986; Storm, 2006). …
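The point about underpowered studies can be illustrated with Stouffer's method for combining z scores (a standard combination technique, chosen here for illustration rather than specified by the authors; the z scores are invented). Each study alone falls short of significance, yet the combined result does not:

```python
# Hypothetical illustration: Stouffer's z-score combination.
# Each study is individually nonsignificant (z < 1.96), as in
# underpowered primary research, yet the combined z is significant.
import math

study_z = [1.1, 0.9, 1.3, 0.8, 1.2, 1.0]  # invented per-study z scores

combined_z = sum(study_z) / math.sqrt(len(study_z))
print(f"combined z = {combined_z:.2f}")   # ≈ 2.57, exceeding 1.96
```

Because independent random errors partially cancel while a genuine signal accumulates, pooling rescues real effects from the "pseudofailures to replicate" that low-powered single studies produce.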
