Vanderbilt Law Review

You Get What You Pay For: An Empirical Examination of the Use of MTurk in Legal Scholarship


Introduction

Legal scholars have long been criticized for their propensity to navel-gaze.1 The degree to which legal scholarship is useful or even relevant to anyone outside a narrow slice of legal academia has been questioned by both academics2 and judges, including Chief Justice Roberts of the U.S. Supreme Court.3 Perhaps partly in response to these concerns, there has been a move in recent years to inject some "real-world" grounding into legal scholarship. One common example of this is the rise of the use of Amazon's Mechanical Turk ("MTurk") platform as a means of tethering academic ideas to the lives, beliefs, and reactions of ordinary individuals.

Over the last ten years, law reviews and other scholarly legal publications have published dozens of articles that rely on data gathered using MTurk.4 Some of these applications are purely survey based, where the primary objective is to collect information about the thoughts, perceptions, and beliefs of ordinary individuals, while others are more experimental in nature.5 There are compelling reasons for MTurk's popularity: it is both faster and cheaper than most survey or experimental techniques, often allowing researchers to obtain hundreds of responses in a few hours for only a few hundred dollars.6 Despite its relatively recent origins,7 MTurk has increasingly gained acceptance among legal scholars, and articles relying on MTurk data have been published in some of the leading law reviews.8

Questions remain, however, about the quality of the data obtained through MTurk. We believe that two concerns are particularly acute. The first relates to the compensation offered to MTurk participants. While part of what makes MTurk an attractive platform is precisely the fact that it is far cheaper than other available options, this advantage may come with its own nonpecuniary costs. For example, individuals who are being paid substantially below minimum wage may not be particularly invested in the questions and may provide answers that do not reflect their true preferences or beliefs. Moreover, given the extensive literature on the sensitivity of individual behavior to incentives,9 the structure of any compensation offered, in addition to its level, is likely to have important implications for participant behavior. In particular, we distinguish between questions and tasks that require individuals to exert effort (that is, to think hard about their answers) and those that require them to pay attention (that is, to read the text of the question carefully). We contend that the optimal compensation structure may vary across these question types.

Our second, and more subtle, concern relates to the way in which the findings of studies relying on MTurk are often presented in law reviews. At present, there appear to be no widely accepted norms regarding what information authors are expected to provide about their empirical methodologies. While the amount of disclosure varies widely across articles, the vast majority provide very little discussion of how the survey or experiment was actually conducted. There may be good reasons why authors choose to limit this discussion, including a desire not to clutter the body of the article with details that many readers may view as extraneous to its main argument. But this opacity makes it very difficult for other scholars to interpret or evaluate the results of these studies, limiting their potential impact.

Many law review articles use MTurk to ask questions that are subjective in nature: questions about respondents' opinions, their feelings about a particular topic, or questions for which there is no obviously correct answer. Even in this context, to the extent that the researcher cares about collecting responses from participants who have paid attention to the questions, the level of compensation may matter. Even if an answer is not wrong per se, answers from inattentive participants may introduce noise or bias into the data. …
