Targeted Assessment Rubric: An Empirically Grounded Rubric for Interdisciplinary Writing

Article excerpt

At the dawn of the twenty-first century, the American academy is marked by renewed interest in interdisciplinary research and education. Multiple drivers propel this growth. Socio-environmental challenges such as mitigating climate change or eliminating poverty demand interdisciplinary solutions. New technologies have ignited interdisciplinary innovations, from unprecedented information sharing to systemic accounts of gene regulation. Recent analyses of the future of industry and labor call for individuals who can understand, employ, and integrate knowledge, methods, and approaches, as well as collaborate across industry sectors, cultures, and disciplinary teams (Levy & Murnane, 2004; National Academies, 2005).

Recognizing this state of affairs, American colleges and universities have increased their interdisciplinary course offerings. In the 2006 US News & World Report college and university rankings, 61.71% of liberal arts institutions reported offering interdisciplinary studies majors. In a recent Social Science Research Council survey of 109 American Baccalaureate College-Liberal Arts institutions, 99.07% reported being either very or somewhat oriented to interdisciplinary instruction. In this sample, 65.42% expected to increase their offerings over the next five years (Rhoten, Boix Mansilla, Chun, & Klein, 2006). The Association of American Colleges and Universities (AAC&U) has called for a renewal of liberal education competencies reminiscent of interdisciplinary learning, such as "integrating knowledge of various types and understanding complex systems; resolving difficult issues creatively by employing multiple sources and tools; [and] working well in teams, including those of diverse composition" (National Leadership Council for Liberal Education and America's Promise, 2007). Among federal funding agencies, the National Institutes of Health (NIH) Roadmap Initiative promotes "interdisciplinary research teams of the future" (NIH, 2006), and the National Science Foundation (NSF) advocates "investigations that cross disciplinary boundaries and require a systems approach to address complex problems" (NSF, 2006, p. 6).

Yet the ongoing growth of interdisciplinary programs and courses comes with deep uncertainty about how to structure interdisciplinary learning experiences and measure their success. Overwhelmingly, interdisciplinary programs rely on student grades and opinion surveys (Rhoten et al., 2006). An analysis of four well-regarded interdisciplinary programs (Boix Mansilla, 2005; Boix Mansilla & Dawes Duraisingh, 2007) showed that innovative methods to assess learning outcomes (e.g., real-life problems, portfolios) are informed by generic criteria (e.g., logic of argument or effort and commitment). Such criteria sidestep the question of what, if any, are the defining qualities that characterize interdisciplinary achievement (ibid.). In an era of increased accountability, reliable approaches for assessing interdisciplinary learning are necessary to ensure not only the effectiveness of interdisciplinary courses and programs but also their survival (Astin, 1993; Banta, 2002; National Academies, 2005).

A growing body of research on assessment has yielded a plethora of principles and artifacts to monitor and support student learning. Performance-based rubrics, protocols, and portfolios suggest how to make the learning contract between faculty and students clear and student learning visible. Yet with few exceptions (e.g., Wolfe & Haynes, 2003; Boix Mansilla & Dawes Duraisingh, 2007), the question of what exactly to assess when student work is interdisciplinary remains unanswered. What constitutes quality interdisciplinary student work, and how can faculty validly and reliably distinguish between higher and lower achievements? How can administrators discern whether students are developing competencies of interdisciplinary inquiry and communication?

Here, we introduce the Targeted Assessment Rubric for Interdisciplinary Writing (Appendix A), an empirically-tested instrument designed to assess interdisciplinary writing at the collegiate level. …