Interweaving Rubrics in Information Systems Program Assessments: Experiences from Action Research at Two Universities

Introduction

Educational assessment plays an important role in promoting improvement in student learning and greater accountability in higher education. Its role is also emphasized by accrediting organizations at the professional and regional levels. According to Ewell (2002:9), the term assessment refers to the processes used to determine an individual's mastery of complex activities, generally through observed performance. Various academic program assessment methods are discussed in Palomba and Banta (1999).

Over the last few years, assessment has become a major topic in Information Systems (IS) education research. Earlier publications on IS program assessment dealt predominantly with indirect methods based on student surveys (Pick & Kim, 2000; Williams & Price, 2000). A well-defined agenda using exams as direct assessment measures related to the IS2002 model curriculum (Gorgone et al., 2002) was initiated at the Center for Computing Education Research (CCER) (Landry et al., 2006; McKell, Reynolds, Longenecker, Landry, & Pardue, 2006; Reynolds, Longenecker, Landry, Pardue, & Applegate, 2004). The first account of a comprehensive effort on assessment at the level of an IS program was presented at ISECON 2004 by Petkova and Jarmoszko (2004) and was later expanded in Petkova, Jarmoszko, and D'Onofrio (2006). A comprehensive theoretical review of how various learning outcomes can be promoted in an IS program is presented in Todorova and Mills (2004). Stemler and Chamblin (2005) provide an account of how a well-designed assessment process played a role in the accreditation of a Management Information Systems (MIS) program at a small liberal arts university. They used a set of common rubrics for assessing student performance on various artifacts. Another illustrative case study, on implementing program assessment using the standardized tests by CCER at a private university, is presented in White and McCarthy (2007). Such an approach is rigorous because it is tied strongly to IS2002, the standard curriculum for the IS discipline.

Several other research efforts address narrower aspects of assessment in IS education. For example, assessment at the course level in Systems Analysis is discussed in Hoopes (2000). Amoroso (2004) explored the use of online tools for assessing student learning in large classes. O'Neil (2005) and Robinson and Thoms (2001) focused on the assessment of computer literacy knowledge. The use of multi-year projects as an assessment instrument in an IS program is discussed in Cooper and Heinze (2007).

A growing use of portfolios as an assessment instrument in other academic disciplines indicates their potential for IS education. Akar (2001) presents experiences with the use of portfolios for assessment in education. Portfolios appear to be a well-established assessment method in education, but to the best of the authors' knowledge there are limited accounts of their use in an IS or CS program apart from Higgs and Sabin (2005), which reports on research into the design of systems supporting portfolios. Love and Cooper (2004) explored the design criteria for information systems supporting assessment portfolios. Sweat-Guy and Buzzetto-More (2007) provide an analysis of common e-portfolio features and existing platforms.

Projects are a typical artifact with a strong presence in IS education. If projects are to be evaluated using rubrics as part of assessing student learning, the rubrics must be rigorously designed, and they need to be standardized across courses in the program (Petkov & Petkova, 2006). The use of unified rubrics in interrelated subjects is one way to address program-level assessment, as the sketch below illustrates. Projects from specific courses at key stages of a student's studies within a program can then serve as evidence of student progress with respect to the overall program goals (Petkov, Petkova, Jarmoszko, & D'Onofrio, 2007). …
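To make the idea of unified rubrics concrete, the following is a minimal sketch of how a rubric shared across courses might be represented and aggregated. The criterion names, the 1-4 scale, the course labels, and all function names are illustrative assumptions for this sketch; they are not the instruments used in the studies cited above.

```python
from dataclasses import dataclass

# A shared rubric applied to project artifacts from different courses.
# Criteria and scale are assumed for illustration only.
RUBRIC = ["problem definition", "design quality", "documentation"]
SCALE = (1, 4)  # 1 = beginning, 4 = exemplary


@dataclass
class ProjectScore:
    course: str             # course in which the artifact was produced
    scores: dict[str, int]  # criterion -> level on the shared scale

    def validate(self) -> None:
        low, high = SCALE
        for criterion in RUBRIC:
            level = self.scores[criterion]
            if not low <= level <= high:
                raise ValueError(f"{criterion}: {level} is outside the scale")


def program_profile(artifacts: list[ProjectScore]) -> dict[str, float]:
    """Average each criterion across artifacts for a program-level view."""
    for artifact in artifacts:
        artifact.validate()
    return {
        criterion: sum(a.scores[criterion] for a in artifacts) / len(artifacts)
        for criterion in RUBRIC
    }


# Example: the same rubric applied at an early and a late stage of study.
early = ProjectScore("Systems Analysis", {"problem definition": 2,
                                          "design quality": 2,
                                          "documentation": 3})
late = ProjectScore("Capstone Project", {"problem definition": 4,
                                         "design quality": 3,
                                         "documentation": 4})
print(program_profile([early, late]))
```

Because every course scores the same criteria on the same scale, differences between the early and late artifacts can be read as evidence of progress toward program goals, which is the essence of the interweaving approach described above.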