The Stakes Matter: Empirical Evidence of Hypothetical Bias in Case Evaluation and the Curative Power of Economic Incentives

INTRODUCTION

Jury research plays a critical role in the modern legal environment. In the private dispute context, trial consulting companies commonly run mock trial simulations to gauge the effect of facts or issues particular to a client's case.1 Additionally, a growing number of courts employ a technique known as a summary jury trial, which uses a surrogate jury to provide information on the relative strength of each party's case and thereby motivate settlement.2 In the academic context, experimental studies of jury behavior are undertaken to uncover the biases, preconceptions, and emotional triggers that influence juries.3 Such studies are critically important to the judicial system for determining the best ways to mitigate (or at least anticipate) the effects of racism, sexism, and economic bias.4

However, the use of simulated or "mock" juries has serious limitations and disadvantages. Most significantly, researchers in the field acknowledge the possibility that the hypothetical nature of jury simulation studies leads to subject behavior that differs from that of real jurors.5 This so-called "consequentiality" or "hypothetical bias" is a barrier to eliciting reliable responses from participants in a laboratory setting.6 It occurs because the real-world decision-making incentives that flow from the impact of the decision are lacking. Simply put, a participant may make choices other than those he or she would make if the study conditions were real; the stakes can matter, and the failure to account for them can be very problematic. Indeed, the potential for skewed results in jury studies is concern enough that even the U.S. Supreme Court has commented on the problem.7 While the actual impact of hypothetical bias on the reliability of mock jury studies is open to question,8 it is clear that research into the issue is necessary and relevant.

The phenomenon of hypothetical bias is well characterized in the experimental literature of many social science fields.9 Not surprisingly, various methods of reducing its effects have been developed.10 Generally, the standard model involves a reward or compensation that is directly linked to a participant's response.11 Unfortunately, existing economic models for such amelioration are not useful in the context of jury studies because juries have a unique incentive structure: jurors12 are not motivated by the potential for personal gain, but instead are called upon to make just decisions for others. A novel approach to capture this incentive is necessary.

This paper proposes a remedy for the problem of hypothetical bias in jury studies that is translatable to existing modalities. We begin in Part I by providing background from the relevant literature, which has exposed hypothetical bias in other, limited contexts. In Part II, the paper describes a mechanism specifically designed to align incentives in jury studies, articulating its economic basis and components. Finally, in Part III, the paper presents the results of two experiments that demonstrate the utility of the incentive structure and provides a roadmap for future work in this area.

I. DESCRIPTIONS OF HYPOTHETICAL BIAS IN THE SOCIAL SCIENCE LITERATURE AS A PARTIAL ROADMAP FOR A REMEDY

Whenever an individual is called upon to essentially predict what he or she would do in a given circumstance, there is a potential for hypothetical bias. In the context of jury simulation research, the phenomenon is certainly acknowledged, but infrequently studied.13 According to a recent survey of the literature, only five studies have directly addressed the issue, and the results are now at least twenty years old.14 Unfortunately, this limited body of research is ultimately inconclusive.15 Four of the five studies found a direct or interacting effect of role-playing and consequences on jury behavior,16 one found no effect,17 and all suffer from some methodological shortcomings that limit the objective reliability of the results. …