Academic journal article Learning Disability Quarterly

Assessment and Decision Making for Students with Learning Disabilities: What If This Is as Good as It Gets?

Article excerpt

I was asked to address the future of the field of learning disabilities (LD) from the perspective of a director of one of the five Institutes for Research on Learning Disabilities (IRLD) funded by the Office of Special Education Programs more than 25 years ago. The theme of each of the Institutes differed; the Minnesota IRLD focused on assessment and decision making. We conducted research on how school personnel sorted students who were achieving poorly in school into those who were and those who were not LD, and worked to develop improved ways to use assessment information to plan and adapt instructional interventions. We pointed to the considerable variability at that time (late 1970s and early '80s) in the kinds and numbers of students receiving LD services in different settings, and to the tremendous variability in criteria used by state and local education agencies. On many days in many ways, we asked this question: Who are students with learning disabilities and what is being done for them in special education?

For five years we addressed a set of issues in assessment and decision making. We produced 144 research reports, and in 1983 (20 years ago) we (Ysseldyke et al., 1983) stated 14 generalizations based on the studies we conducted. An abridged version of these generalizations is found in Table 1.

In the conclusions to the 1983 paper we indicated that an alternative to current practice was to intervene at the point of referral and use data on student performance to make eligibility decisions (an early call for a prereferral intervention or "response-to-intervention" approach). On many occasions since that writing, I have argued that we spend far too much of our professional time making predictions about students' lives, and far too little time making a difference in their lives. Documenting the then-prevalent assessment and decision-making practices, we argued that there was much to be gained by abandoning much of what we were doing (Ysseldyke et al., 1983).

Since our work more than 20 years ago, many have challenged our findings and the direction that they set. Yet, little has changed since we did our work or they did theirs. Certainly, there have been calls for change--some louder than others. Professional associations, advocacy groups, and government agencies have formed task forces and task forces on the task forces to study identification of students with LD. We have had mega-analyses of meta-analyses and syntheses of syntheses. Nearly all groups have reached the same conclusion: There is little empirical support for test-based discrepancy models in identification of students as LD.

Most task forces have called for a response-to-intervention (variously called problem solving, intervention support team, intervention-based assessment) model (Burns, Appleton, & Stehouwer, 2004; Fuchs, Mock, et al., 2003), even though "The RTIs differ in terms of the number of levels in the process; who delivers the interventions, and whether the process is viewed as a precursor to a formal evaluation for eligibility, or if RTI is itself the eligibility evaluation" (Fuchs, Mock, et al., 2003, p. 159). Clearly there is a political push and political will to change (International Dyslexia Association, 2002; National Center for Learning Disabilities, 2002; National Research Council, 2002; Learning Disabilities Summit Majority Report, 2002; President's Commission on Excellence in Special Education, 2002).

Current legislation (IDEA 2004) gives states the option to move away from a discrepancy model for identifying students as LD and toward a response-to-intervention or problem-solving model. A remnant of test authors continues to argue for cognitive assessment, process assessment, and discrepancy-based identification (American Academy of School Psychology, 2004). Those who advocate for a response-to-intervention model (Gresham et al., 2004) argue for "Direct measurement of achievement, behavior and the instructional environment in relevant domains as the core loci of a comprehensive evaluation in SLD" (p. 34). They contend that the focus of assessment needs to be on measurable and changeable aspects of the instructional environment.

It used to surprise me that there was so much consistency between the assessment practices of the 1970s and current practices. It was a reliability coefficient that was both high and difficult to accept. I am troubled by the virtual absence of change over time in predominant assessment practices. Yet, I am no longer surprised. As I have argued elsewhere (Ysseldyke, 2001), "Change is difficult and more political than data-based ... and while change is difficult, change requiring extra work is next to impossible" (p. 300). We continue to do what we did more than 25 years ago, and the outcomes remain the same as well--little satisfaction that we have identified the "right children," too many children, and a lack of results. One of the most important premises I learned in graduate school was that the best predictor of future behavior or performance is past behavior or performance. Therein lies the distressing part. While we complain about how our identification practices need to change, we continue to engage in much of the same old thing. And, importantly, change involves both hard work and support from a system that does not reward such change.

THE FUTURE

I was asked to make a set of predictions and recommendations. My recommendations are consistent with the recommendations of the major task forces and the assessment provisions of the most recent rendition of IDEA. I strongly support application of a response-to-intervention/problem-solving model. I strongly advocate for a results-based orientation. I strongly advocate for continuous and/or periodic assessment and the use of information derived from it for developing and implementing evidence-based interventions designed to enhance student competence and build the capacity of systems to meet student needs. I believe my recommendations are consistent with the current knowledge base and the current rhetoric in school psychology and special education.

So where does this leave us? Professionals have demonstrated the nonsense of a discrepancy approach (Stuebing, Fletcher, & LeDeux, 2002). Many of us believe that performance-based (curriculum-based, standards-referenced) measures are easily interpretable and the best way to go. Some of us know that traditional norm-referenced achievement tests do not match the curriculum, and are therefore ill-suited for measuring a student's actual achievement (and also produce scores that are unrelated to instruction). However, we should also recognize that most curricula are so ill-defined and ill-structured that they defy analysis--they cannot meet the curriculum-based criterion (Salvia, personal communication, 2004). A significant number of teachers do not use the components of effective instruction (Spicuzza et al., 2001; Ysseldyke & Christenson, 2002). And, we know that in spite of evidence for the effectiveness of problem-solving approaches, they are tough to sustain (Meyers, Meyers, & Deno, 1980). Our recent research illustrates that using technology-enhanced progress monitoring and instructional management systems like Accelerated Math™, Yearly Progress Pro, Aimsweb, Standards Master, and Renaissance Place increases the likelihood that teachers will actually use an RTI methodology (without such help it is too much work).

Over time, I envision an emerging condition called "RTI resistance." While RTI is doable (especially with the aid of technology-enhanced progress monitoring systems), we will probably identify more students as LD and those students will be more variable than those identified using the discrepancy model. We will also run into some serious conceptual/communication problems. Even if we had good curriculum and effective instruction, and good fidelity of implementation of these, RTI is not a dichotomous variable; there is a range of responses to instruction (it's normally distributed!) (VanderHeyden, Witt, & Barnett, 2004), and there is a range of instruction (as evidenced by our failure to find treatment/intervention integrity) (Ysseldyke & Tardrew, 2002). How bad does the response have to be to qualify as LD? Is RTI stable over time? I applaud those who are addressing these questions (Burns & Senesac, 2004; McMaster, Fuchs, Fuchs, & Compton, in press).

"RTI resistance" is here now, and will get serious when too many students are identified as LD using RTI approaches. And, then we will do what we always have done and what government agencies like welfare agencies and departments of natural resources always have done: We will put upper limits and/or "slot limits" on conditions to control eligibility. Departments of natural resources use "slot limits" (e.g., one may keep only fish between 16-24 inches) to define "keepers." When the harvest gets too high, they modify the slot (say from 1624 to 20-24) and redefine "keepers." As in the classification decisions we make, definitions and numbers are more politically than scientifically determined.

Dan Reschly and I entitled a recent chapter "The Past Is Not the Future" (Reschly & Ysseldyke, 2002). We envisioned a bright future in which school personnel monitored the progress of all students and used the information they obtained either to put in place evidence-based interventions designed to enhance the competence of individual students, or to build the capacity of systems to meet student needs (Ysseldyke et al., 1997). Of course, if this were true, there might be few learning disabilities. Over the past 25 years there has been very little change in assessment practices. I see too much evidence that this may be as good as it gets, and therein lies the most distressing truth.

Table 1

Generalizations from 20 Years Ago

* The special education team decision-making process is at best inconsistent. In most instances the process operated to verify problems first cited by teachers, and team efforts usually were directed toward what Sarason and Doris (1979) call a "search for pathology."

* Eligibility decisions were more a function of naturally occurring pupil characteristics than they were data-based.

* Many children without disabilities were being declared eligible for LD services.

* There was no defensible system for declaring students eligible for LD services.

* Over three-fourths of low-achieving students could be identified as LD by at least one definition then in current use. Many LD students did not meet any of the then-current criteria.

* There are no reliable psychometric differences on norm-referenced tests between students with LD and their low-achieving peers.

* There are technically adequate norm-referenced tests, but no technically adequate measures of the psychological processes and abilities that assessors were required to use to identify deficiencies.

* Curriculum-based measurement is a technically adequate alternative to lengthy assessments currently administered.

* Student results are better when teachers gather data on student performance and use the data to adapt instruction. It is difficult to get them to do so.

* Clear and consistent differences exist in the performance of LD resource room students and regular class students on one-minute samples using simple measures of reading, spelling, and written expression. Given that these measures reliably differentiate students, they also are useful for referral and assessment (eligibility) decisions.

Abridged from "Generalizations from Five Years of Research on Assessment and Decision Making: The University of Minnesota Institute," by J. Ysseldyke, M. Thurlow, J. Graden, C. Wesson, B. Algozzine, & S. Deno, 1983, Exceptional Education Quarterly, 4(1), 75-93.

REFERENCES

American Academy of School Psychology. (2004). Recommendation on comprehensive evaluation for learning disabilities. Communique, 32(7), 12.

Burns, M., Appleton, J., & Stehouwer, J. D. (2004). Empirical review of response-to-intervention research. Manuscript submitted for publication, University of Minnesota.

Burns, M., & Senesac, B. (2004). Comparison of dual discrepancy criteria for diagnosis of unresponsiveness to intervention. Unpublished manuscript, University of Minnesota.

Gresham, F., Reschly, D., Tilly, D., Fletcher, J., Burns, M., Crist, T., Prasse, D., Vanderwood, M., & Shinn, M. (2004). A comprehensive evaluation of learning disabilities: A response-to-intervention perspective. Communique, 33(4), 34-35.

McMaster, K., Fuchs, D., Fuchs, L., & Compton, D. (in press). Responding to nonresponders: An experimental field trial of identification and intervention methods. Exceptional Children.

Meyers, B., Meyers, J., & Deno, S. L. (1980). Formative evaluation and teacher decision making: A follow-up (Research Report No. 41). Minneapolis: University of Minnesota, Institute for Research on Learning Disabilities.

Reschly, D., & Ysseldyke, J. E. (2002). Paradigm shift: The past is not the future. In A. Thomas & J. Grimes (Eds.), Best practices in school psychology IV (pp. 3-20). Bethesda, MD: National Association of School Psychologists.

Sarason, S., & Doris, J. (1979). Educational handicap, public policy, and social history. New York: Free Press.

Spicuzza, R., Ysseldyke, J., Lemkuil, A., Koscioleck, S., Boys, C., & Teelucksingh, E. (2001). Effects of using a curriculum-based monitoring system on the classroom instructional environment and math achievement. Journal of School Psychology, 39(6), 521-542.

Stuebing, K., Fletcher, J., & LeDeux, J. (2002). Validity of IQ-discrepancy classification of learning disabilities: A meta-analysis. American Educational Research Journal, 469-518.

VanderHeyden, A. M., Witt, J. C., & Barnett, D. A. (2004). The emergence and possible futures of response to intervention. Manuscript submitted for publication, Louisiana State University.

Ysseldyke, J. (2001). Reflections on a career: 25 years of research on assessment and instructional decision-making. Exceptional Children, 67(3), 295-310.

Ysseldyke, J. E., & Christenson, S. L. (2002). Functional Assessment of Academic Environment Scale. Longmont, CO: Sopris West.

Ysseldyke, J. E., Dawson, M., Lehr, C. A., Reschly, D., Reynolds, M., & Telzrow, C. (1997). School psychology: A blueprint for the future of training and practice II. Bethesda, MD: National Association of School Psychologists.

Ysseldyke, J., & Tardrew, S. (2002). Differentiating math instruction. Wisconsin Rapids, WI: Renaissance Learning.

Ysseldyke, J., Thurlow, M., Graden, J., Wesson, C., Algozzine, B., & Deno, S. (1983). Generalizations from five years of research on assessment and decision making: The University of Minnesota Institute. Exceptional Education Quarterly, 4(1), 75-93.

JIM YSSELDYKE, Ph.D., NCSP, is Birkmaier Professor of Educational Leadership and co-director, University of Minnesota Center for Reading Research.
