Academic journal article Cognitive, Affective and Behavioral Neuroscience

Could Millisecond Timing Errors in Commonly Used Equipment Be a Cause of Replication Failure in Some Neuroscience Studies?


Abstract Neuroscience is a rapidly expanding field in which complex studies and equipment setups are the norm. Often these push boundaries in terms of what technology can offer, and increasingly they make use of a wide range of stimulus materials and interconnected equipment (e.g., magnetic resonance imaging, electroencephalography, magnetoencephalography, eyetrackers, biofeedback, etc.). The software that bonds the various constituent parts together itself allows for ever more elaborate investigations to be carried out with apparent ease. However, research over the last decade has suggested a growing, yet underacknowledged, problem with obtaining millisecond-accurate timing in some computer-based studies. Crucially, timing inaccuracies can affect not just response time measurements, but also stimulus presentation and the synchronization between equipment. This is not a new problem, but rather one that researchers may have assumed had been solved with the advent of faster computers, state-of-the-art equipment, and more advanced software. In this article, we highlight the potential sources of error, their causes, and their likely impact on replication. Unfortunately, in many applications, inaccurate timing is not easily resolved by utilizing ever-faster computers, newer equipment, or post-hoc statistical manipulation. To ensure consistency across the field, we advocate that researchers self-validate the timing accuracy of their own equipment whilst running the actual paradigm in situ.

Keywords Replication · Millisecond timing accuracy · Millisecond timing error · Experiment generators · Equipment error

There appears to be a growing unease within the field of neuroscience, and across psychology in general, that some findings may not be as stable, repeatable, or valid as the academic literature describing them might indicate. Some have suggested that a proportion of studies might not be repeatable at all (Pashler & Wagenmakers, 2012). Given that published studies typically represent only positive findings, due to the infamous "file drawer problem," this could be a cause for concern (Rosenthal, 1979).

We acknowledge that replication failure is a multifaceted issue with many potential causes and a myriad of solutions, especially in a field as complex as neuroscience. Factors including choice of equipment, software tools, statistical analysis, overstated effect sizes, and inferences that go beyond the available data can have a compounding effect.

However, on the basis of our own research carried out over the last decade, we feel that we can account for at least a proportion of the problem. We believe that millisecond timing errors residing within researchers' equipment, closely followed by what might be termed "human error," could account for some unstable findings.

Given continual advances in the available hardware and software, combined with the complexity of many paradigms and their equipment setups (in which computers are used to present stimuli, synchronize with other equipment, and record responses), some degree of unquantified timing error will undoubtedly affect the data. This can mean that stimuli are not presented when requested, that different pieces of equipment are not synchronized correctly, that event markers are not temporally aligned, and that recorded responses may be longer than the true response times. As a consequence, the conclusions drawn by researchers may not be valid. An extremely troubling thought is that such problems may go unnoticed, because there is no obvious indication that anything has gone or is going wrong. On these grounds, we suggest that the scale of such timing errors is considerably greater than might be suspected.
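The gap between a requested duration and the duration actually delivered by the operating system can be demonstrated directly. The following is a minimal, illustrative sketch (not any particular experiment generator's method) that repeatedly requests a 10 ms delay, of the sort used for stimulus timing, and measures the overshoot with a high-resolution clock; the trial count and requested duration are arbitrary choices for illustration:

```python
# Sketch: quantify how far a requested 10 ms wait deviates from what the
# OS scheduler actually delivers. The overshoot per trial is the kind of
# unquantified millisecond error discussed in the text.
import time

REQUESTED_MS = 10.0   # nominal "stimulus duration" requested from the OS
TRIALS = 200          # arbitrary number of measurement trials

errors_ms = []
for _ in range(TRIALS):
    start = time.perf_counter()
    time.sleep(REQUESTED_MS / 1000.0)          # ask the OS for a 10 ms delay
    elapsed_ms = (time.perf_counter() - start) * 1000.0
    errors_ms.append(elapsed_ms - REQUESTED_MS)  # overshoot in milliseconds

print(f"mean overshoot:  {sum(errors_ms) / TRIALS:.3f} ms")
print(f"worst overshoot: {max(errors_ms):.3f} ms")
```

On a typical desktop operating system the worst-case overshoot can reach several milliseconds, and, crucially, nothing in the program's normal output would reveal this unless it is explicitly measured, which is precisely why in-situ self-validation is advocated below.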

As regards researcher fallibility, we believe that some of the current generation may lack in-depth knowledge of all of the equipment that they use on a daily basis, as studies can now be constructed with relative ease. This is especially true when using some of the more advanced software packages and newer techniques in the context of functional brain imaging (fMRI). …
