The Behavior Analyst Today

Building the Case for Large Scale Behavioral Education Adoptions

Working with school districts requires the ability to adapt quickly to the needs of each district and to provide as much useful information as possible to support their decision making. Districts feel most comfortable when they can review evidence of program effectiveness obtained from their own schools. Accordingly, one must be able to respond quickly to requests and to provide data that may not meet "gold standard" requirements, but that nonetheless provide a basis for sound decision making. This article provides a case study of such an effort, and may help to elucidate how in-district data can be quickly generated and combined with other evidence to strengthen the case for program adoption.

The program being considered for a district-wide summer school adoption was Headsprout[R] Reading Comprehension (HRC). HRC is an online program designed to directly teach children how to comprehend text. It provides instruction in literal, inferential, main idea, and derived word meaning comprehension, as well as vocabulary instruction using stimulus-equivalence-like procedures. Learners interact with the 50-episode (lesson) program for about 30 minutes a day. As with all Headsprout programs, HRC underwent extensive user testing during the course of its development: more than 120 learners, one at a time, interacted with the various program segments, providing data on their effectiveness and occasions for revision. HRC, first its components and then the entire program, was revised and retested until nearly all learners met criterion. As of this writing, more than 35,000 online learners have provided, and continue to provide, data for further evaluation and revision. An overview of HRC and its development can be found in Leon et al. (2011). Detailed descriptions are provided of the contingency-analytic foundations of HRC by Layng, Sota, and Leon (2011), of the analysis that determined its content by Sota, Leon, and Layng (2011), and of the programming and testing of the repertoires and relations involved by Leon, Layng, and Sota (2011). The methods Headsprout employs in the design, development, and testing of all its programs have been described by Layng, Twyman, and Stikeleather (2003), Twyman et al. (2004), and Layng, Stikeleather, and Twyman (2006).

Program evaluation can be of two kinds: formative and summative. In formative evaluation, criteria are established for learner performance. The program, or a component thereof, is tested to determine whether learners reach the specified criteria. If learner behavior does not meet those criteria, the program is revised and retested until nearly all criteria are met (Layng et al., 2006; Markle, 1967; Twyman et al., 2004). In summative evaluation, evidence is gathered after the program has been developed in an attempt to determine program effectiveness. Summative evaluation often employs pretest versus posttest comparisons or group comparisons of various types.
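
To make the formative logic concrete, the sketch below (in Python) models the test-revise-retest cycle just described. It is illustrative only: the 0.90 "nearly all learners" cutoff, the simulated tryout, and all names are assumptions introduced here, not part of the published Headsprout procedure.

   import random
   from dataclasses import dataclass

   @dataclass
   class TryoutResult:
       learner_id: int
       met_criterion: bool

   def run_tryout(revision: int, learners: range) -> list[TryoutResult]:
       # Stand-in for a one-learner-at-a-time tryout of a program segment.
       # For illustration, each revision is assumed to raise the odds that
       # a learner reaches criterion; real data would come from sessions.
       p = min(0.95, 0.50 + 0.10 * revision)
       return [TryoutResult(i, random.random() < p) for i in learners]

   def formative_cycle(learners: range, criterion: float = 0.90,
                       max_revisions: int = 10) -> int:
       # Test, revise, and retest until nearly all learners meet criterion.
       # The 0.90 cutoff operationalizes "nearly all"; the article gives no
       # numeric threshold, so this value is an assumption.
       for revision in range(max_revisions):
           results = run_tryout(revision, learners)
           pass_rate = sum(r.met_criterion for r in results) / len(results)
           if pass_rate >= criterion:
               return revision  # criteria met; stop revising
       raise RuntimeError("Criterion not met within the revision budget.")

   if __name__ == "__main__":
       # 120 learners echoes the user-testing scale described above.
       print("revisions needed:", formative_cycle(range(120)))

Summative evaluation, by contrast, would examine aggregate outcomes (for example, pretest versus posttest scores) only after development is complete.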

As noted by Layng et al. (2006):

   Whereas group designs, typically the basis for summative
   evaluation, are readily accepted as providing scientific evidence
   for program effectiveness, single subject designs typically form
   the basis for formative evaluation. While both group and single
   subject designs are descended from highly successful scientific
   traditions, and both may provide equally rigorous and informative
   results, single subject design is relatively less understood. Both
   do, however, differ in the questions asked; one asks about the
   behavior of groups, the other asks about the behavior of
   individuals.

Layng et al. (2006) go on to describe a 3 × 3 matrix that illustrates the relation between the evidence types found in formative and summative evaluation and the results of their intersection (see Table). The columns comprise types of summative evaluation, and the rows types of formative evaluation. For summative evaluation, the columns are: A. Experiential Assessment, B. …
