Academic journal article College and University

Employing Quantitative Models of a Qualitative Admissions Process: Uncovering Hidden Rules, Saving Time, and Reducing Bias

Article excerpt


Admission into highly selective institutions of higher education most often relies on the judgments of panelists who first independently assign applicants to ordinal categorical or numerical scales. These initial ratings then feed a multi-step winnowing process of discussion and group decision. Our study quantitatively models this inherently subjective process using five years of graduate admissions data encompassing 592 candidates and 72 raters. Logistic regression models prove well-fitting and parsimonious. They allow for an analysis of the contribution of each stage of the process, of which the initial blind rating of candidates accounts for roughly 40 percent of the variance in admissions decisions. Our models also reveal the tacit rules governing admissions, including the relative power of each classification category and implicit thresholds. We find that extended discussion and deliberation phases requiring the entire committee can be of limited productivity when inter-rater agreement is high. Both rater bias and the use of linear scaling are explored as threats to fairness, and remedies are examined to make admissions decisions more equitable and efficient.
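The abstract's approach can be illustrated with a minimal sketch: regress a binary admit decision on an initial ordinal rater score, then compare the fitted log-likelihood against an intercept-only model to get a rough "share of variance explained" (McFadden's pseudo-R²). This is not the authors' code; the data are synthetic, the 1-to-5 rating scale and all coefficients are illustrative assumptions, and a real analysis would include predictors for each stage of the process.

```python
# Hypothetical sketch (not the study's actual model): logistic regression of
# an admit decision on one ordinal rater score, fit by gradient ascent,
# with McFadden's pseudo-R^2 as a rough measure of variance explained.
import math
import random

random.seed(0)

# Synthetic data: ratings on an assumed 1-5 ordinal scale; higher ratings
# make admission more likely. All numbers here are illustrative.
n = 500
ratings = [random.randint(1, 5) for _ in range(n)]
admit = [1 if random.random() < 1 / (1 + math.exp(-(1.2 * r - 4.0))) else 0
         for r in ratings]

def fit_logistic(x, y, lr=0.05, steps=4000):
    """Fit y ~ sigmoid(b0 + b1*x) by gradient ascent on the log-likelihood."""
    b0 = b1 = 0.0
    for _ in range(steps):
        g0 = g1 = 0.0
        for xi, yi in zip(x, y):
            p = 1 / (1 + math.exp(-(b0 + b1 * xi)))
            g0 += yi - p          # gradient w.r.t. intercept
            g1 += (yi - p) * xi   # gradient w.r.t. slope
        b0 += lr * g0 / len(x)
        b1 += lr * g1 / len(x)
    return b0, b1

def log_likelihood(x, y, b0, b1):
    ll = 0.0
    for xi, yi in zip(x, y):
        p = 1 / (1 + math.exp(-(b0 + b1 * xi)))
        ll += yi * math.log(p) + (1 - yi) * math.log(1 - p)
    return ll

b0, b1 = fit_logistic(ratings, admit)
ll_model = log_likelihood(ratings, admit, b0, b1)

# Null model: intercept only, i.e. every applicant gets the base admit rate.
p_bar = sum(admit) / n
ll_null = sum(yi * math.log(p_bar) + (1 - yi) * math.log(1 - p_bar)
              for yi in admit)

# McFadden's pseudo-R^2: 1 - LL(model) / LL(null).
pseudo_r2 = 1 - ll_model / ll_null
print(f"slope={b1:.2f}  McFadden pseudo-R^2={pseudo_r2:.2f}")
```

In a multi-stage version, each stage's ratings would enter as additional predictors, and the drop in pseudo-R² when a stage is removed would indicate that stage's contribution, in the spirit of the roughly 40 percent attributed above to the initial blind rating.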


Since the first statutes governing admissions were drawn up by Harvard College, admission into American higher education has only become more complicated. In 1642, knowledge of Latin and Greek ensured entry into this struggling college. By 1734, examinations were required. In 1807, mathematics, geography, and proof of "good moral character" were added. By 1875, knowledge of a foreign language, English literature, physical science, geometry, and algebra boosted the requirements further (Broome 1903). Conditions now are certainly quite different from 350 years ago, when Harvard was constantly under the threat of financial collapse. Competition has grown to the extent that at many highly selective schools, entrance is offered to only 10 percent of applicants. A similar situation is mirrored by many graduate programs.

While many colleges and universities use a formulaic approach in combining standardized test scores and academic achievement to select an entering cohort, others use a mix of measures, often relying on evidence not prone to exact quantitative encoding. Ivy League and other highly competitive schools generally employ a multi-stage process for undergraduate admissions in which applications are individually rated by admissions officers and the pool winnowed by committee (Hernandez 1997).

Graduate admissions can follow a very different process than that for undergraduates. Undergraduate admission is most commonly carried out by admissions counselors with little involvement by faculty. When an interview is part of the admissions process, alumni often conduct it far from campus. By contrast, graduate admissions committees are often made up of junior and senior faculty members, an admissions office representative, and, in some schools, graduate students (Sacks 1978). Because selecting doctoral students is a long-term investment, often viewed as an opportunity to match the resources and needs of a school with an applicant's interests and talents, impressions and intuitions must substitute for the comfort of numerical scores. We have studied the Harvard University Graduate School of Education's Doctoral Program in Learning and Teaching over a five-year period to characterize a three-stage admissions process that relies heavily on judgments of quality based in complex data.

Prior work using a researcher's systematic tools to examine admissions has taken the form of examining the effective use of quantitative selection criteria in predicting later performance in graduate school. Academic success in graduate programs has been examined utilizing measures of undergraduate college caliber, college transcripts, and standardized test scores (MAT, GRE, MCAT, GMAT). Medical (Hall 1992), nursing (Rhodes 1994), business (Fisher 1990; Graham 1991; Zwick 1993), physical therapy (Seymour and Gramet 1995), and veterinary schools (Stuck 1990) have used such data to predict graduate school grades and, in addition, scores on post-graduate professional examinations (Fletcher 1989; Mitchell 1990). …
