Academic journal article: Exceptional Children

Progress Monitoring with Objective Measures of Writing Performance for Students with Mild Disabilities

Article excerpt

The neglect of writing in the public schools has been well documented in both regular and special education (Bridge & Hiebert, 1985; Roit & McKenzie, 1985). This neglect is reflected both in the small amount of dedicated instructional time (Leinhardt, Zigmond, & Cooley, 1980) and in the small number of individualized education program (IEP) objectives for writing (Schenck, 1981), yet the prevalence and degree of writing deficiencies among students with mild handicaps are considerable. These writing deficiencies include mechanical errors (Thomas, Englert, & Gregg, 1987), inability to conform to a topic (Englert & Thomas, 1987), inability to produce a cohesive story (Barenbaum, Newcomer, & Nodine, 1987), inability to use organizing strategies (Englert, Raphael, Fear, & Anderson, 1988), and low productivity (Nodine, Barenbaum, & Newcomer, 1985). Phelps-Gunn and Phelps-Terasaki (1982) state, "Not until comparatively recently have professionals who work with language and learning disabled children realized their [writing] needs, and how little diagnostic ... material existed" (p. 2).

In the current reappraisal of writing in schools (Scardamalia & Bereiter, 1986; Stewart, 1985), emphasis is given to the need both for effective instruction and for adequate tests of writing proficiency, since remediation of writing deficiencies implies accurate assessment (Isaacson, 1985). Writing assessment in special education should provide both evaluation and formative adjustment of instruction through progress monitoring (Moran, 1987). Therefore, writing tests are needed that are sensitive to small increments of skill growth across short and medium periods of time (Tindal, 1989), a requirement that places high demands on the technical adequacy of the tests.
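To make this requirement concrete, progress monitoring typically charts repeated scores over time and summarizes growth as a trend slope. The following is a minimal sketch in Python; the weekly scores are invented for illustration, and ordinary least squares is only one of several slope estimators used with curriculum-based measurement data.

```python
# Illustrative sketch only: estimate a growth slope from repeated
# writing scores collected over successive weeks. The scores below
# are hypothetical, not data from this study.

def growth_slope(weeks, scores):
    """Ordinary least squares slope: score units gained per week."""
    n = len(weeks)
    mean_w = sum(weeks) / n
    mean_s = sum(scores) / n
    num = sum((w - mean_w) * (s - mean_s) for w, s in zip(weeks, scores))
    den = sum((w - mean_w) ** 2 for w in weeks)
    return num / den

weeks = [1, 2, 3, 4, 5, 6]
scores = [18, 20, 19, 23, 24, 26]  # e.g., correct word sequences per sample
print(f"growth: {growth_slope(weeks, scores):.2f} score units per week")
```

A test sensitive to small increments of growth is one for which such a slope reflects real skill change rather than measurement error.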

Direct writing assessment methods--which examine actual student writing samples--have received considerable attention (Stiggins, 1982), because they are considered to have high face and content validity (Charney, 1984; Moran, 1987; Stiggins, 1982). In this study we investigated the technical adequacy of a direct assessment methodology for progress monitoring.

The two primary methods for directly scoring writing samples are "holistic," in which subjective judgments are used to rank or rate papers, and "atomistic," in which discrete, countable components of the written product are tallied (Isaacson, 1985). The holistic evaluations carried out in this study resulted in a single, global judgment of writing quality (Conlan, 1978; Spandel, 1981), while the atomistic indexes, including "number of correct word sequences" and "number of correctly spelled words," yielded counts, averages, and proportions or percentages (Deno, Marston, & Mirkin, 1982; Hunt, 1965). While direct holistic evaluations may be more suited to a "writing process" instructional approach (Lynch & Jones, 1989), the atomistic indexes are suited to an instructional approach based on building mastery of subskills.
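As an illustration of how atomistic indexes yield counts and proportions, the sketch below computes three of them for a single sample. It is a simplification under stated assumptions: a small word list stands in for a real spelling check, and "correct word sequences" are approximated as adjacent pairs of correctly spelled words, whereas the full metric as scored by trained raters also requires each pair to be syntactically acceptable in context.

```python
# Illustrative sketch only. KNOWN_WORDS is a stand-in lexicon, and the
# pair-based count below approximates "correct word sequences"; real
# scoring also judges syntax and mechanics, which this code does not.

KNOWN_WORDS = {"the", "big", "dog", "ran", "fast", "down", "a", "hill"}

def atomistic_indexes(sample: str) -> dict:
    words = [w.strip(".,!?;:") for w in sample.lower().split()]
    total = len(words)
    spelled_ok = [w in KNOWN_WORDS for w in words]
    n_spelled = sum(spelled_ok)
    # Adjacent pairs in which both words are correctly spelled.
    n_cws = sum(1 for a, b in zip(spelled_ok, spelled_ok[1:]) if a and b)
    return {
        "total_words": total,
        "correctly_spelled_words": n_spelled,
        "percent_correctly_spelled": 100.0 * n_spelled / total if total else 0.0,
        "correct_word_sequences": n_cws,
    }

print(atomistic_indexes("The big dog ran fsat down the hill."))
```

The same counts can be charted over repeated samples, which is what makes the atomistic indexes candidates for the progress-monitoring role described above.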

TECHNICAL ADEQUACY OF HOLISTIC JUDGMENTS

With prior training and the use of "anchor" papers (Charney, 1984; McColly, 1970), it is possible to attain moderate to strong intrascorer and interscorer reliability levels (r = .75-.85) with holistic judgments (Mishler & Hogan, 1982; White, 1984). Furthermore, these judgments are often moderately related to criterion measures such as standardized writing tests (Veal & Hudson, 1983), handwriting or neatness (McColly, 1970), spelling errors, and length of writing sample (Nold & Freedman, 1977).
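For reference, interscorer reliability coefficients of this kind are ordinarily Pearson correlations between two scorers' ratings of the same set of papers. A minimal sketch follows; the ratings are invented for illustration.

```python
# Illustrative sketch only: Pearson r between two scorers' holistic
# ratings of the same ten papers. The ratings are hypothetical.
import math

def pearson_r(x, y):
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

scorer_1 = [4, 3, 5, 2, 4, 3, 5, 1, 2, 4]
scorer_2 = [4, 3, 4, 2, 5, 3, 5, 2, 2, 3]
print(f"interscorer r = {pearson_r(scorer_1, scorer_2):.2f}")  # ~= .87
```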

Holistic judgment scoring methods may be adequate for eligibility or program entry/exit decisions (Spandel & Stiggins, 1980), but not for planning individualized programs, since they are not referenced to any specific instructional features (Moran, 1987; Spandel & Stiggins, 1980). Such judgments also may lack appropriate scale properties for use in frequent, repeated assessments. …
