Targeted Assessment Rubric: An Empirically Grounded Rubric for Interdisciplinary Writing
Veronica Boix Mansilla, Elizabeth Dawes Duraisingh, Christopher R. Wolfe, and Carolyn Haynes, Journal of Higher Education
At the dawn of the twenty-first century, the American academy is marked by renewed interest in interdisciplinary research and education. Multiple drivers propel the growth. Socio-environmental challenges such as mitigating climate change or eliminating poverty demand interdisciplinary solutions. Technologies have ignited interdisciplinary innovations, from unprecedented information sharing to systemic accounts of gene regulation. Recent analyses of the future of industry and labor call for individuals who can understand, employ, and integrate knowledge, methods, and approaches, as well as collaborate across industry sectors, cultures, and disciplinary teams (Levy & Murnane, 2004; National Academies, 2005).
Recognizing this state of affairs, American colleges and universities have increased their interdisciplinary course offerings. In the 2006 US News & World Report college and university rankings, 61.71% of liberal arts institutions reported offering interdisciplinary studies majors. In a recent Social Science Research Council survey of 109 American Baccalaureate College-Liberal Arts institutions, 99.07% report being either very or somewhat oriented toward interdisciplinary instruction. In this sample, 65.42% expect to increase their offerings over the next five years (Rhoten, Boix Mansilla, Chun, & Klein, 2006). The Association of American Colleges and Universities (AAC&U) has called for a renewal of liberal education competencies reminiscent of interdisciplinary learning, such as "integrating knowledge of various types and understanding complex systems; resolving difficult issues creatively by employing multiple sources and tools; [and] working well in teams, including those of diverse composition" (National Leadership Council for Liberal Education and America's Promise, 2007). Among federal funding agencies, the National Institutes of Health (NIH) Roadmap Initiative promotes "interdisciplinary research teams of the future" (NIH, 2006), and the National Science Foundation (NSF) advocates "investigations that cross disciplinary boundaries and require a systems approach to address complex problems" (NSF, 2006, p. 6).
Yet the ongoing growth of interdisciplinary programs and courses comes with deep uncertainty about how to structure interdisciplinary learning experiences and measure their success. Overwhelmingly, interdisciplinary programs rely on student grades and opinion surveys (Rhoten et al., 2006). An analysis of four well-regarded interdisciplinary programs (Boix Mansilla, 2005; Boix Mansilla & Dawes Duraisingh, 2007) showed that innovative methods to assess learning outcomes (e.g., real-life problems, portfolios) are informed by generic criteria (e.g., logic of argument or effort and commitment). Such criteria sidestep the question of what, if any, are the defining qualities that characterize interdisciplinary achievement (ibid.). In an era of increased accountability, reliable approaches for assessing interdisciplinary learning are necessary to ensure not only the effectiveness of interdisciplinary courses and programs but also their survival (Astin, 1993; Banta, 2002; National Academies, 2005).
A growing body of research on assessment has yielded a plethora of principles and artifacts to monitor and support student learning. Performance-based rubrics, protocols, and portfolios suggest how to make the learning contract between faculty and students clear and student learning visible. Yet with few exceptions (e.g. Wolfe & Haynes, 2003; Boix Mansilla & Dawes Duraisingh, 2007), the question of what exactly to assess when student work is interdisciplinary remains unanswered. What constitutes quality interdisciplinary student work and how can faculty validly and reliably distinguish between higher and lower achievements? How can administrators discern whether students are developing competencies of interdisciplinary inquiry and communication?
Here, we introduce the Targeted Assessment Rubric for Interdisciplinary Writing (Appendix A), an empirically tested instrument designed to assess interdisciplinary writing at the collegiate level. Interdisciplinary writing presents unique challenges to students, calling upon them to mediate the rhetorical, theoretical, and methodological differences inherent in multiple disciplinary discourses. The rubric proposes four distinct dimensions to be examined: a paper's purposefulness, disciplinary grounding, integration, and critical awareness. For each criterion, four qualitatively distinct levels of student achievement are described: naive, novice, apprentice, and master. The rubric builds on a clear definition of interdisciplinary work, a related assessment framework, and recent scholarship on interdisciplinary writing (Boix Mansilla, 2005; Boix Mansilla & Dawes Duraisingh, 2007; Boix Mansilla, Miller, & Gardner, 2000; Wolfe & Haynes, 2003).
Systematic analysis of student work enabled us to test the rubric's reliability at capturing differences in performance at three stages of collegiate training (freshmen, sophomores, and seniors). These students were enrolled in the Interdisciplinary Studies major at Miami University, following a sequence of interdisciplinary courses in preparation for an individualized interdisciplinary specialization. The rubric is designed as a dynamic tool that researchers and faculty can adapt to examine student work at various disciplinary crossroads, from first-year essays to senior coursework and theses. Aside from grading, the rubric can be used to identify qualities of students' interdisciplinary understanding and support their further development.
Below, we review the assessment literature and the rubric's conceptual foundations. We introduce the rubric through an example of student work and describe the methods by which we developed and tested it. We conclude with concrete recommendations for practice.
Assessing Learning Outcomes
A key marker of institutional effectiveness--despite being difficult to measure--is the quality of individual student learning (Chun, 2002). The drive to advance valid measures of such learning has yielded a range of approaches toward assessment (Chun, 2002; Ewell, 1991; Hutchings, 1990; Schneider, 2002), and assessment experts have advocated the use of rubrics in pre-collegiate and higher education contexts. First, grading is seen to be fairer and more consistent when assessment criteria are made explicit and instructors describe different levels of performance. Second, self-assessment is valued as a means to help students reflect on their work; rubrics allow students to judge the current quality of their work and the ways in which they could develop it further (Brough & Pool, 2005; Huber & Hutchings, 2004; Walvoord & Anderson, 1998).
Some critics charge that rubrics promote shallow learning and are incongruous with student-centered teaching practices because they promote conformity and standardization (Kohn, 2006; Wilson, 2006). Wilson believes rubrics "violate" the complexity of a piece of written work by dividing it into separate, quantifiable parts that do not capture a piece's overall impact or quality. However, as Goodrich-Andrade (2006) points out, some of the perceived shortcomings of rubrics stem from a narrow interpretation of rubrics as tools for grading rather than supports for understanding. She and others (e.g., Huba & Freed, 2000) caution that in a well-designed rubric, it should be impossible to score highly on every criterion without having done the task well. In other words, the power of a rubric rests on the degree to which it captures meaningful dimensions of the work without which a quality product could not be achieved. As suggested earlier, while the authentic assessment movement has broadened the ways in which students are assessed, determining what to assess has proven more …
Publication information: Boix Mansilla, V., Dawes Duraisingh, E., Wolfe, C. R., & Haynes, C. (2009). Targeted Assessment Rubric: An Empirically Grounded Rubric for Interdisciplinary Writing. Journal of Higher Education, 80(3), 334+. © Ohio State University Press.