School Psychology Review

Improving Decision-Making: Procedural Recommendations for Evidence-Based Assessment: Introduction to the Special Issue

The role of the school psychologist is constantly evolving to meet the changing and expanding needs of schools, families, and children (Armistead & Smallwood, 2014). Now, more than ever, school psychologists are asked to work across tiers of service with involvement in universal, secondary, and tertiary supports (Jimerson, Burns, & VanDerHeyden, 2007). Within each of these tiers, the school psychologist is seen as an expert in data-based decision-making who can guide school teams in the collection of assessment evidence to inform the delivery of evidence-based services for students. The primacy of this role is reflected in professional standards of training and practice. For example, the National Association of School Psychologists (NASP) Model for Comprehensive and Integrated School Psychological Services (i.e., NASP Practice Model; NASP, 2010) delineates this role within Domain 1: Data-Based Decision-Making and Accountability.

Assessment to inform applied decision-making was, is, and always will be an essential role of the school psychologist, and it currently comprises over 50% of typical job-related responsibilities (Castillo, Curtis, & Gelley, 2013). A traditional role and function of the school psychologist was that of "gatekeeper of special education," which relied heavily on a test-and-place model of assessment and decision-making (Fagan & Wise, 2007). The training embedded within many school psychology graduate preparation programs reflected this role and emphasized the use, interpretation, and evidence base of individual assessments (Rossen & von der Embse, 2014). As such, there is a relatively large (albeit somewhat controversial; see McGill & Busse, 2017) empirical base for the assessments commonly used within special education evaluation, including those related to intellectual or achievement testing.

ASSESSMENT RESEARCH

Over the past two decades, school psychology practice has become increasingly embedded within multitiered systems of support. Accordingly, school psychology assessment research has largely shifted to examine the types of tools used to support decision-making across the tiers, including universal screeners and progress monitors. The majority of this research has been psychometric in nature, pertaining to the reliability and validity of scores derived from these tools. Advances in complex statistical analyses (e.g., item response theory, structural equation modeling) have allowed for more robust psychometric research than ever before. For instance, through item response theory, researchers are developing psychometrically sound tools that are more efficient than many of their predecessors. Such efficiency is founded upon item response theory-based optimization of scales via the elimination of unnecessary items (Anthony, DiPerna, & Lei, 2016) or the use of computer-adaptive testing methodology (Shapiro, Dennis, & Fu, 2015). Similarly, through structural equation modeling, researchers can simultaneously confirm the internal structure of their tools, validate scores relative to external criteria, and even examine the impact of method variance on resulting scores (Miller et al., 2018). As a result of this new and exciting research, school psychologists now have access to a wide range of evidence-based assessments across academic, behavioral, and social-emotional domains.
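To make the efficiency argument concrete, consider a minimal sketch of computer-adaptive item selection under a two-parameter logistic (2PL) item response theory model. This is a hypothetical illustration (the item bank, parameter values, and estimation routine are invented for this example, not drawn from the studies cited above): because each item's Fisher information peaks near a particular ability level, an adaptive test that always administers the most informative remaining item can recover a stable ability estimate with far fewer items than a fixed-length form.

```python
import numpy as np

def p_correct(theta, a, b):
    """Probability of a correct response under the 2PL model."""
    return 1.0 / (1.0 + np.exp(-a * (theta - b)))

def item_info(theta, a, b):
    """Fisher information of a 2PL item at ability theta: a^2 * p * (1 - p)."""
    p = p_correct(theta, a, b)
    return a ** 2 * p * (1.0 - p)

rng = np.random.default_rng(1)
n_items = 100
a = rng.uniform(0.8, 2.0, n_items)    # hypothetical discrimination parameters
b = rng.uniform(-2.5, 2.5, n_items)   # hypothetical difficulty parameters

true_theta = 0.7                      # simulated examinee ability
theta_grid = np.linspace(-4, 4, 161)  # grid for maximum-likelihood estimation
log_lik = np.zeros_like(theta_grid)
theta_hat = 0.0                       # provisional ability estimate
used = np.zeros(n_items, dtype=bool)

for _ in range(12):                   # a short, 12-item adaptive test
    info = item_info(theta_hat, a, b)
    info[used] = -np.inf              # exclude items already administered
    j = int(np.argmax(info))          # most informative remaining item
    used[j] = True
    correct = rng.random() < p_correct(true_theta, a[j], b[j])
    # Update the grid-based log-likelihood and re-estimate ability
    p_grid = p_correct(theta_grid, a[j], b[j])
    log_lik += np.log(p_grid if correct else 1.0 - p_grid)
    theta_hat = theta_grid[np.argmax(log_lik)]

print(f"true theta = {true_theta:.2f}, estimate after 12 items = {theta_hat:.2f}")
```

Published computer-adaptive measures rely on larger calibrated item banks, more sophisticated estimators, and formal stopping rules, but the selection logic is the same: efficiency comes from matching item difficulty to the examinee rather than administering every item to every student.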

Compared with this psychometric work on assessment tools, far less research has examined the procedures through which assessment data are collected and analyzed. This is unfortunate, as the procedures surrounding the application and use of assessment tools can have a large impact on their utility, defensibility, and social consequences. Consider the following example scenarios. First, a teacher administers a test of math computation in a nonstandardized manner across students. Such variability in administration increases measurement error, thereby reducing the reliability of scores (Christ, Zopluoglu, Monaghen, & Van Norman, 2013). …
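The reliability cost of nonstandardized administration can be illustrated with a small classical test theory simulation. This is a hypothetical sketch (the variance components are invented for illustration, not a reanalysis of Christ et al., 2013): treating each observed score as a true score plus error, any administration-related error variance added on top of ordinary measurement error shrinks the share of observed-score variance attributable to true differences among students.

```python
import numpy as np

rng = np.random.default_rng(42)
n_students = 500

# Classical test theory: observed = true + error, and
# reliability = true-score variance / observed-score variance.
true_scores = rng.normal(50, 10, n_students)      # true math computation ability
measurement_error = rng.normal(0, 4, n_students)  # error under standardized administration

# Hypothetical extra error from inconsistent directions, timing, or scoring
administration_error = rng.normal(0, 6, n_students)

standardized = true_scores + measurement_error
nonstandardized = true_scores + measurement_error + administration_error

for label, observed in (("standardized", standardized),
                        ("nonstandardized", nonstandardized)):
    reliability = np.var(true_scores) / np.var(observed)
    print(f"{label:>15}: reliability ≈ {reliability:.2f}")
```

With these illustrative values, expected reliability drops from roughly .86 (100 / 116) to roughly .66 (100 / 152), underscoring that procedural fidelity is a precondition for trustworthy scores, no matter how psychometrically sound the tool itself.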
