Within the field of psychometrics, it is widely acknowledged that test-taking speed and reasoning ability are separate abilities with little or no correlation to each other. The LSAT is a univariate test designed to measure reasoning ability. Test-taking speed is assumed to be an ancillary variable with a negligible effect on candidate scores. This Article explores the possibility that test-taking speed is a variable common to both the LSAT and actual law school exams. This commonality is important because it may serve to increase the predictive validity of the LSAT. The author obtained data from a national and a regional law school and followed the methodology of a typical LSAT validity study, with one important exception: student performance was disaggregated into three distinct testing methods with varying degrees of time pressure, namely (1) in-class exams, (2) take-home exams, and (3) papers. Consistent with the hypothesis, the data showed that the LSAT was a relatively robust predictor of in-class exams and a relatively weak predictor of take-home exams and papers. In contrast, undergraduate GPA (UGPA) was a relatively stable predictor of all three testing methods.
The major implication of this study is that the current emphasis on time-pressured law school exams increases the relative importance of the LSAT as an admission criterion. Further, because the performance gap between white and minority students tends to be larger on the LSAT than on UGPA (the other important numerical admissions criterion), heavy reliance on time-pressured law school exams is likely to have the indirect effect of making it more difficult for minority students to be admitted through the regular admissions process. The findings of this study also suggest that when speed is used as a variable on law school exams, the type of testing method, independent of knowledge and preparation, can change the ordering (i.e., relative grades) of individual test-takers. The current emphasis on time-pressured law school exams, therefore, may skew measures of merit in ways that have little theoretical connection to the actual practice of law. Finally, this study found some preliminary evidence that the performance gap between white and minority students may be smaller on less time-pressured testing methods, including blind-graded, take-home exams. Definitive evidence on this issue will require a larger sample size.
The Law School Admission Test (LSAT) is a cultural lightning rod. While some prominent scholars attack the test as a poor predictor of law school success that is biased in favor of the privileged,1 others praise it as a valuable tool for social mobility.2 With each admissions season, the LSAT also creates a raft of winners and losers, as acceptance letters3 and scholarship money4 often turn on relatively small differences in test scores. Integrally related to this process is the ranking of law schools by U.S. News & World Report.5 Despite a methodology that attempts to consider a variety of substantive factors, including faculty reputation, library resources, faculty-student ratios, and bar passage, these rankings move in virtual lockstep with a school's median LSAT score.6 Because students, legal employers, and alumni are often swayed by these rankings, competition between law schools has become an LSAT "arms race."7 Although many within the legal academy lament the "overreliance on the LSAT,"8 law faculties have generally been unwilling to bear the consequences of taking a different path, at least by themselves.9 As one law school dean aptly noted, the situation has become a "classic 'prisoner's dilemma.'"10
The LSAT also presents a special set of problems for minority students, who have historically posted significantly lower scores than their white counterparts.11 If the Supreme Court's recent decision in Grutter v. Bollinger12 had struck down the use of racial preferences in law school admissions, it is at least plausible that the legal academy would have finally mustered the collective will to confront its own admissions practices. …