Journal of Geoscience Education

The Oceanography Concept Inventory: A Semicustomizable Assessment for Measuring Student Understanding of Oceanography

Article excerpt

INTRODUCTION

Concept inventories test for conceptual understanding rather than factual recall (Hestenes et al., 1992). That is, they test individuals' abilities to apply fundamental first principles to new questions or problems they have not previously encountered. Although geoscience concept inventory development can be traced back several years (e.g., Dodick and Orion, 2003; Libarkin and Anderson, 2006; Parham et al., 2010), to our knowledge, no such instrument exists for oceanography courses. Such an instrument would facilitate systematic analysis of students' prior knowledge (National Research Council, 2000) and of individuals' conceptual change over the instructional period (Driver and Oldham, 1986; Boyle and Monarch, 1992; Pearsall et al., 1997; Savinainen et al., 2005; Vosniadou, 2007; Lewis and Baker, 2010). Administered as a preinstruction test, a concept inventory can provide instructors with feedback about students' preexisting knowledge and inform instructional decisions about how much time to dedicate to particular concepts and how to teach them. Administered as both a preinstruction and a postinstruction test, a concept inventory can measure learning gains (Hake, 1998; Thomson and Douglass, 2009).
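For reference, the learning gain reported by Hake (1998) is the average normalized gain, the ratio of the actual average gain to the maximum possible average gain:

\langle g \rangle = \frac{\langle \mathrm{post} \rangle - \langle \mathrm{pre} \rangle}{100\% - \langle \mathrm{pre} \rangle}

where \langle \mathrm{pre} \rangle and \langle \mathrm{post} \rangle are the class-average pretest and posttest scores expressed as percentages.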

Concept inventory instruments are multiple-choice tests that target a particular construct. A construct is "the concept or characteristic that a test is designed to measure" (American Educational Research Association et al., 1999, p. 5). Two examples of such constructs for cognitive instruments are "the understanding of astronomical concepts" and "the ability to design a scientific instrument" (Briggs et al., 2006, p. 38). These tests are developed from student thinking and language rather than solely from predetermined content (Hestenes et al., 1992). For example, incorrect answer options are based not on instructor speculation, assumptions, or anecdotal experience but on research into students' alternate conceptions or misconceptions (Arnaudin and Mintzes, 1985; Thijs, 1992; Arthurs, 2011). The goal in crafting the incorrect answers in a concept inventory is thus to produce plausible "distractors" (Libarkin, 2008).

Concept inventories are currently available for a number of disciplines, such as astronomy (e.g., Lindell and Sommer, 2004; Lindell, 2005), biology (e.g., Odom and Barrow, 1995; Anderson et al., 2002; Knudson et al., 2003; Garvin-Doxas et al., 2007), chemistry (e.g., Tan et al., 2008), geology (e.g., Dodick and Orion, 2003; Libarkin and Anderson, 2006; Parham et al., 2010), and physics (e.g., Hestenes et al., 1992; Chabay and Sherwood, 2006). Some of these concept inventories are "conceptually extensive," such as the Geoscience Concept Inventory (Libarkin and Anderson, 2006), and others, such as the Geological Time Aptitude Test (Dodick and Orion, 2003), are "conceptually intensive" (Parham et al., 2010). In other words, conceptually extensive inventories address a broad range of concepts within a discipline, whereas conceptually intensive inventories focus more deeply on a limited number of concepts. In developing a concept inventory for oceanography, we took the conceptually extensive approach.

Although there are well-established theories and methods from the field of psychometrics to inform test construction in general, there exists no single prescribed approach for developing concept inventories. Thus, we used both Classical Test Theory (CTT) and Item Response Theory (IRT) to develop and evaluate the Oceanography Concept Inventory (OCI).
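As a brief illustration of the two frameworks (these are standard formulations, not the OCI-specific analyses): CTT summarizes each item with sample statistics such as its difficulty (the proportion of examinees answering the item correctly) and its discrimination (e.g., the point-biserial correlation between the item score and the total score), whereas IRT models the probability of a correct response as a function of a latent ability \theta. Under the two-parameter logistic (2PL) model, for example,

P_i(\theta) = \frac{1}{1 + e^{-a_i(\theta - b_i)}}

where b_i is the difficulty and a_i the discrimination parameter of item i.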

The purpose of this study was to evaluate the OCI, which was developed to measure student understanding of oceanographic concepts (Arthurs and Marchitto, 2011). As part of this process, we asked two research questions: (1) To what extent is the instrument valid and reliable? and (2) What potential does the instrument possess for generalizability to oceanography courses taught elsewhere? …
