Academic journal article
By Miramontes, Nancy Y.; Marchant, Michelle; Heath, Melissa Allen; Fischer, Lane
Education & Treatment of Children, Vol. 34, No. 4
As more schools turn to positive behavior interventions and supports (PBIS) to address students' academic and behavioral problems, there is an increased need to adequately evaluate these programs for social relevance. The present study used social validation measures to evaluate a statewide PBIS initiative. Active consumers of the program were polled regarding their perceptions of the program's social relevance, including the acceptability of its treatment goals, procedures, and outcomes. Based on participants' feedback, several areas were identified for improvement, including the amount of paperwork required for successful implementation and the practicality of implementing and adhering to program procedures. As evidenced by the findings of this study, social validity is an important consideration when evaluating school-wide programs.
Key Words: social validity, positive behavior support, social validation, contextual fit.
The criteria for evaluating behavioral support programs are changing. In addition to supplying behavioral intervention strategies that are theoretically and technically sound, programs must now be matched specifically to the people and environment affected by implementation (Albin, Lucyshyn, Horner, & Flannery, 1996). With complex local and societal needs, public educators may feel overwhelmed by plans and strategies that promise results they do not deliver. The purported effectiveness of educational programs, however, does not guarantee that each program will be equally effective in every setting (Reimers, Wacker, & Koeppl, 1987). In selecting programs, educators must evaluate each on its applicability as well as the reliability and validity of its content and measures. They must also consider the program's potential value for the specific group of consumers it will serve.
Social validity was first described by Wolf in 1978 as the value society places on a product. To legitimately analyze a program, Wolf proposed that society must evaluate its effectiveness based on goals, procedures, and outcomes. This information could then be used to tailor the program to better meet the needs of the consumer. With its roots in applied behavior analysis, social validity "attempts to go beyond 'clinical judgment' to derive information from the broader social environment of the individual(s) whose behavior is being changed" (Kennedy, 1992, p. 147). This focus not only makes social validity an important concept to consider when evaluating programs but also challenges the field to look beyond typical "clinical judgments" and recognize the value of and need for assessing consumer reaction.
A program with high social validity is responsive to consumer needs, an aspect integral to evaluating program effectiveness because social validity promotes increased fidelity and sustainability (Albin et al., 1996). When researchers consider the concept of social validity and respond to consumers' concerns, these consumers become invested in making informed choices and are more likely to offer support. Additionally, Schwartz (1991) noted that consumers who made informed choices reported increased satisfaction, and that satisfied consumers improved a program's viability.
Improving a program's viability begins by considering the dynamics between research and practice, which in the case of social validity includes a disconnect between published research and applied research as it is actually carried out in the field. Educators have valid reasons for concern regarding the quality and applicability of current educational research (Carnine, 1997). Because of the increasing gap between educational research and practice (Kern & Manz, 2004) and the top-down manner in which research-based programs are typically introduced (Child Trends, 2008), social validity becomes even more critical.
Purpose of Social Validity Research
The purpose of social validity is not to gather false praise for a proposed program, but to gather useful information about potential pitfalls, implementation barriers, and varying perceptions regarding the program's potential impact (Schwartz & Baer, 1991). …