Academic journal article Canadian Journal of Education

A Forest of Forests: Constructing a Centre-Usage Profile as a Source of Outcomes Assessment

Article excerpt

Introduction

Most writing-centre administrators collect centre-usage information because it can generate one of the most basic forms of assessment. Such assessment can and often does determine resources in the institutional-funding process. Language-support units within all academic institutions have resource constraints. As Reardon (2010) stated, "No writing center administrator can ever rest too comfortably in regards to his or her centre's continued support or funding, especially during recessions."

Since the 1980s, both the research and practice-oriented communities in the field of writing-centre research have called for rigorous scientific assessment (Hawthorne, 2006; Henson & Stephenson, 2009; Lerner, 2003; Neuleib, 1980, 1982, 1984). In addition to answering that call, assessment activities have become necessary for accreditation, budget, and educational-accountability purposes at both institutional and programmatic levels. Assessment not only helps identify a unit's strengths and weaknesses at different levels; as many have pointed out, it is also critical to moving the field forward (e.g., Lerner, 2003).

This paper reports on a usage-profile analysis conducted as part of an outcomes-assessment (or, more accurately, progress-assessment) project in the context of a newly established language-support unit. The usage-profile analysis is one component of a multi-component assessment project that gathered and analyzed both direct and indirect evidence to evaluate the effectiveness of new academic English-language support and services provided by a writing centre at a Canadian university. Beyond the findings' implications for writing-centre research and practice, the description of how this component was implemented may be useful to administrators, researchers, and practitioners in academic language-support units across institutions of higher education.

Background

Although academic English-language-support units sometimes resist assessment, for reasons both immediately practical and tied to its longer-term implications, assessment has proven beneficial on two fronts: it allows units to evaluate the effectiveness of their services in order to plan and improve, and it helps answer the age-old question: Does what we do matter? (Henson & Stephenson, 2009; Niller, 2003, 2005).

Researchers in the field of writing-centre research have emphasized the need for an evidence-based approach to outcomes assessment (e.g., Bell, 2000; Hawthorne, 2006; Henson & Stephenson, 2009; Pemberton, 2003). They have also cited challenges such as time and resource constraints, the need for expertise in assessment research methods, (mis)conceptions about the purposes of assessment (e.g., Lerner, 2003; Schuh & Upcraft, 2001), and the difficulty of substantiating the link between the support students receive and any improvement in their writing (Enders, 2005; Jones, 2001; Lerner, 1997, 2001; Pemberton & Kinkead, 2003). The field has developed a rich body of qualitative work and has, in recent years, also seen efforts to apply quantitative methodologies, but to date such evaluation studies are still lacking (Hawthorne, 2006; Jones, 2001).

According to Allen (2004), assessment may involve asking questions about "students' satisfaction with their educational experience," "the amount of their engagement or participation," and/or "what they actually gained from that experience" (p. 96). In Allen's framework, as in Schuh and Upcraft's (2001) student-services assessment model, the first component is to "keep track of who participates," which most centre directors do. This paper analyzes the amount of user engagement or participation, which indicates how the centre is being utilized over a period of time.

Although usage reports are the most commonly implemented component of assessment (and, for many centres, may be the extent of regular assessment attempts), most such reports involve tabulating and reporting simple usage counts: the total number of users, their year of studies, and the number of repeat users. …
