Academic journal article Higher Education Studies

Using Student Ability and Item Difficulty for Making Defensible Pass/Fail Decisions for Borderline Grades

Article excerpt

Abstract

The determination of Pass/Fail decisions for Borderline grades (i.e., grades that do not clearly distinguish between competent and incompetent examinees) has been an ongoing challenge for academic institutions. This study utilises the Objective Borderline Method (OBM) to determine examinee ability and item difficulty, and from these to reclassify each Borderline grade as a Pass or a Fail. Using the OBM, examinees' Borderline grades from a clinical examination were reclassified as Pass or Fail. The predictive validity of this method was estimated by comparing the examination's original and reclassified grades to each other and to subsequent clinical examination results. The new model appeared more stringent (p<.0001) than the original decisions. Implications for educators and policy makers are discussed. The OBM2 is found to provide a plausible solution for decision making over borderline grades in non-compensatory assessment systems.

Keywords: borderline grades, board of examiners, examinations, decision making

1. Introduction

1.1 Standard Setting and Decision Making in Higher Education

One of the most challenging tasks in clinical assessment is the Pass/Fail decision for borderline performance (Kramer et al., 2003; Patrício et al., 2009; Schoonheim-Klein et al., 2009; Shulruf, Turner, Poole, & Wilkinson, 2013; Wood, Humphrey-Murto, & Norman, 2006). This challenge is particularly difficult since many types of clinical examination include a "Borderline performance" category in their marking sheets (Boursicot, Roberts, & Pell, 2007; Roberts, Newble, Jolly, Reed, & Hampton, 2006; Schoonheim-Klein et al., 2009; Wilkinson, Newble, & Frampton, 2001). As many clinical examinations are high stakes (Shumway & Harden, 2003), it is essential to make an accurate Pass/Fail decision that neither fails a competent student nor passes an incompetent one, particularly given evidence that borderline students tend to remain underachieving throughout their studies (Pell, Fuller, Homer, & Roberts, 2012). The General Medical Council (2009a, 2009b) has also expressed concerns about assessment and standard setting practices in medical programmes in the United Kingdom. To address this critical issue, a plethora of standard setting methods have been introduced and implemented in a range of clinical examinations (Boulet, De Champlain, & McKinley, 2003; Jalili, Hejri, & Norcini, 2011; Shulruf et al., 2013; Wass, Vleuten, Shatzer, & Jones, 2001; Wilkinson et al., 2001). Nonetheless, despite this range of methods, concerns about reliability, validity and acceptability remain (Ben-David, 2000; Brannick, Erol-Korkmaz, & Prewett, 2011), particularly within the context of clinical assessment, where clinical examiners tend to avoid failing students and trainees (Cleland, Knight, Rees, Tracey, & Bond, 2008; Dudek, Marks, & Regehr, 2005; Morton, Cumming, & Cameron, 2007; Rees, Knight, & Cleland, 2009).

Most standard setting methods determine a Pass/Fail decision for Borderline grades by identifying a cutoff score within the borderline range through statistical/mathematical calculations deemed to be objective (Ben-David, 2000; Cizek, 2012; Cizek & Bunch, 2007). Among the most commonly used methods are the Nedelsky, Ebel, Angoff, Hofstee, Borderline Group, and Regression methods (Ben-David, 2000; Cizek, 2012; Cizek & Bunch, 2007). The Nedelsky, Ebel, Angoff and Hofstee methods use expert panels to estimate what a cutoff score should be (Cusimano & Rothman, 2003; Geisinger & McCormick, 2010; Hurtz & Auerbach, 2003; Kaufman, Mann, Muijtjens, & Vleuten, 2000; Kramer et al., 2003; Verheggen, Muijtjens, Van Os, & Schuwirth, 2008; Wass et al., 2001; Wayne et al., 2005), whereas the Borderline Group and Regression methods use only the test scores, without any additional post-examination judgment (Boursicot et al., 2007; Shulruf et al., 2013; Smee, 2001; Wilkinson, Frampton, Thompson-Fawcett, & Egan, 2003). …
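To illustrate how a regression-based method can derive a cutoff score from test data alone, without post-examination expert judgment, the sketch below applies the commonly described borderline regression approach: examinees' checklist scores are regressed on examiners' global ratings, and the score predicted at the borderline rating is taken as the cutoff. The data and the 1-5 rating scale here are invented for illustration and are not taken from the study.

```python
import numpy as np

# Hypothetical examiner global ratings for ten examinees on a 1-5 scale
# (1 = clear fail, 2 = borderline, 3 = pass, 4 = good, 5 = excellent).
global_ratings = np.array([1, 2, 2, 3, 3, 3, 4, 4, 5, 5])

# The same examinees' total checklist scores (0-100), invented data.
checklist_scores = np.array([42, 55, 58, 63, 66, 70, 74, 78, 85, 90])

# Fit a simple linear regression: score = slope * rating + intercept.
slope, intercept = np.polyfit(global_ratings, checklist_scores, 1)

# The cutoff is the checklist score predicted at the borderline rating.
BORDERLINE_RATING = 2
cutoff = slope * BORDERLINE_RATING + intercept
print(round(cutoff, 1))
```

With these example data the fitted line predicts a cutoff of about 55 points, so examinees scoring below that value would fail. Note that this sketches only the generic borderline regression idea; the OBM studied in this article is a different procedure, based on examinee ability and item difficulty.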
