I was asked to address the future of the field of learning disabilities (LD) from the perspective of a director of one of the five Institutes for Research on Learning Disabilities (IRLD) funded by the Office of Special Education Programs more than 25 years ago. The theme of each of the Institutes differed; the Minnesota IRLD focused on assessment and decision making. We conducted research on how school personnel sorted students who were achieving poorly in school into those who were and those who were not LD, and worked to develop improved ways to use assessment information to plan and adapt instructional interventions. We pointed to the considerable variability at that time (late 1970s and early '80s) in the kinds and numbers of students receiving LD services in different settings, and to the tremendous variability in criteria used by state and local education agencies. On many days in many ways, we asked this question: Who are students with learning disabilities and what is being done for them in special education?
For five years we addressed a set of issues in assessment and decision making. We produced 144 research reports, and in 1983 (20 years ago) we stated 14 generalizations based on the studies we had conducted (Ysseldyke et al., 1983). An abridged version of these generalizations is found in Table 1.
In the conclusions to the 1983 paper, we indicated that an alternative to then-current practice was to intervene at the point of referral and to use data on student performance to make eligibility decisions (an early call for a prereferral intervention or "response-to-intervention" approach). On many occasions since that writing, I have argued that we spend far too much of our professional time making predictions about students' lives, and far too little time making a difference in their lives. Documenting the then-prevalent assessment and decision-making practices, we argued that there was much to be gained by abandoning much of what we were doing (Ysseldyke et al., 1983).
Since our work more than 20 years ago, many have challenged our findings and the direction that they set. Yet, little has changed since we did our work or they did theirs. Certainly, there have been calls for change--some louder than others. Professional associations, advocacy groups, and government agencies have formed task forces and task forces on the task forces to study identification of students with LD. We have had mega-analyses of meta-analyses and syntheses of syntheses. Nearly all groups have reached the same conclusion: There is little empirical support for test-based discrepancy models in identification of students as LD.
Most task forces have called for a response-to-intervention model (variously called problem solving, intervention support team, or intervention-based assessment) (Burns, Appleton, & Stehouwer, 2004; Fuchs, Mock, et al., 2003), even though "The RTIs differ in terms of the number of levels in the process; who delivers the interventions, and whether the process is viewed as a precursor to a formal evaluation for eligibility, or if RTI is itself the eligibility evaluation" (Fuchs, Mock, et al., p. 159). Clearly there is a political push and political will to change (International Dyslexia Association, 2002; Learning Disabilities Summit Majority Report, 2002; National Center for Learning Disabilities, 2002; National Research Council, 2002; President's Commission on Excellence in Special Education, 2002).
Current legislation (IDEA 2004) gives states the option to move away from a discrepancy model in identifying students as LD and permits them to move toward a response-to-intervention or problem-solving model. A remnant of test authors continues to argue for cognitive assessment, process assessment, and discrepancy-based identification (American Academy of School Psychology, 2004). Those who advocate for a response-to-intervention model (Gresham et al., 2004) argue for "Direct measurement of achievement, behavior and the instructional environment in relevant domains as the core loci of a comprehensive evaluation in SLD" (p. …