Academic journal article Journal of Social Work Education

Direct and Indirect Measures of Learning Outcomes in an MSW Program: What Do We Actually Measure?

Article excerpt

Postsecondary education programs engage in learning outcomes assessment to improve curriculum content and delivery and to satisfy accreditation requirements. Price and Randall (2008) note that learning outcomes can be measured directly (i.e., by assessing students' mastery of content or skills) or indirectly (i.e., by assessing opinions or attitudes toward learning). Similarly, Nichols and Nichols (2005) distinguish between indicators of knowledge and skills attainment (direct learning outcome measures) and attitudinal indicators of perceived knowledge and skills attainment (indirect learning outcome measures).

Traditionally, social work education programs have used student-focused direct and indirect measures (e.g., tests, papers, and students' course evaluations) as assessment strategies that exist on a continuum and that assess the same construct: namely, students' learning (Holden, Barker, Meenaghan, & Rosenberg, 1999). More recently, however, experts have pointed out differences in the relative utility of direct and indirect measures of learning outcomes. For example, Suskie (2009) stated that direct measures of students' learning are "visible ... evidence of exactly what students have ... learned" (p. 20). In contrast, the construct validity of indirect measures as indicators of students' learning has been criticized as weak and inaccurate (Allen, 2004). Suskie (2009) noted that such measures merely provide "proxy signs that students are probably learning" (p. 20) and therefore are not convincing.

Allen (2004) distinguishes between direct measures that assess actual learning and indirect measures that assess perception of learning. The first category includes standardized and locally developed embedded assignments and course activities, portfolios of students' work, and competence interviews. The second includes surveys, interviews, reflective essays, and focus groups that target students' perceptions of their learning. Although both types of measurement have limitations, the consensus among experts in outcome assessment in higher education is that, despite the resource cost involved in developing direct measures (Allen, 2004), such strategies offer high construct and content validity for assessing students' learning.

The field of social work education has been considering issues of assessing learning outcomes, leading to recent shifts in the policies of its accrediting agency, the Council on Social Work Education (CSWE). Attainment of learning has been redefined in terms of actual practice behaviors (CSWE, 2008) that presumably require direct measures of actual learning while possibly diminishing the value of indirect measures of perceived learning.

Within the field of social work, several direct assessment strategies have been used successfully. Adams (2004, p. 121) described the use of Classroom Assessment Techniques (CATs) in assessing ongoing learning in a social welfare policy class. These brief assessment tools, which may include anonymous polls or brief responses to open-ended questions, help instructors identify the level of content mastery and comprehension that students attain as the course progresses. Adams found that data gathered through CATs are useful for identifying and ultimately minimizing barriers to learning knowledge, values, and skills in a social welfare policy class.

Regehr, Bogo, Regehr, and Power (2007) developed an evaluation system that helped field instructors better assess students' performances in the field practicum and identify students who were experiencing difficulties. Regehr et al. discovered that rating scales for assessing students' behaviors in the field did not effectively evaluate the competency level of students' performances. Instead, they developed an instrument composed of vignettes that described contextual practice behaviors. The participating field instructors were asked to match the practice behavior patterns of their second-year master of social work (MSW) students with the scenarios described in the vignettes. …
