I was favorably impressed with the breadth and quality of the articles in this miniseries, which dealt with various aspects and correlates of social behavioral functioning as well as assessment and intervention considerations. Each article addressed a unique aspect of social behavioral functioning in children and youth, and each emphasized a different facet of this important topic for school psychologists. My commentary is divided into two main categories: assessment considerations and intervention considerations. I will comment on each article within each of these categories.
Three articles in this miniseries dealt with various aspects of social behavioral assessment (Christ, Riley-Tillman, Chafouleas, & Jaffery, 2011; McConaughy, Volpe, Antshel, Gordon, & Eiraldi, 2011; Merrell, Cohn, & Tom, 2011). The Christ et al. article investigated the psychometric properties of Direct Behavior Ratings (DBRs) of social behavior, the McConaughy et al. article reported on cognitive and social behavioral characteristics of children with attention deficit hyperactivity disorder (ADHD), and the Merrell et al. article reported on the factor structure of a newly developed teacher rating scale of students' social-emotional assets. I will comment on each of these articles in the order just presented.
Christ et al.
This article investigated the inter-rater reliability of DBRs and two types of accuracy, using systematic direct observations (SDOs) as the criterion measure. Inter-rater reliability and criterion-related validity were high for the global behaviors of academic engagement and disruptive behavior. The correlation between SDO and DBR measures of academic engagement was .75 for both positively and negatively worded items, and the correlations between SDO and DBR measures of disruptive behavior were .81 and .79 for positively and negatively worded items, respectively. Magnitudes of rater bias were substantial across positive and negative wording conditions: relative to the SDO criterion, raters underestimated appropriate behavior and overestimated disruptive behavior. In fact, this rater bias effect was very large, accounting for about 45% of the variance in DBRs. Appropriately, the authors note that this finding has important implications for school psychologists: behavior ratings of positive and negative behaviors may be compromised by teacher rater bias, which could affect anything from referral rates to the measurement of response to intervention.
A useful body of literature to draw upon in interpreting the effects of teacher rater bias is the informant discrepancy or optimal informant literature (Achenbach, 2011; De Los Reyes & Kazdin, 2005; Kraemer, Measelle, Ablow, Essex, Boyce, & Kupfer, 2003; Youngstrom, Loeber, & Stouthamer-Loeber, 2000). This literature suggests that the measurement of social behavioral functioning in children and youth is complicated by the absence of an incontrovertible criterion for combining and interpreting assessment data from multiple sources. In assessment practice, the recommended strategy is to collect data from multiple sources (teachers, students, parents, direct observation). However, there is no scientifically defensible way of integrating this information so that decisions can be based on a meaningful synthesis of it. In educational practice and research, informant discrepancies influence how one draws conclusions in that: (a) multiple sources of information are often used to assess students' social behavior without guidance as to which source to trust or weight most heavily; (b) use of a single source of information necessarily restricts the conclusions and recommendations that can be drawn; and (c) the choice of single versus multiple sources of information in research studies often significantly changes the conclusions that might be drawn about an individual (De Los Reyes & Kazdin, 2005; Weisz, Jensen-Doss, & Hawley, 2005). …