Increasing the Validity of Outcomes Assessment

Increasing accountability in higher education has prompted institutions to develop methods to meet growing expectations that they will implement and demonstrate a commitment to the assessment of student learning outcomes. These responses range from ensuring institutional compliance with the mandatory requirements of accrediting bodies to the adoption of institution-specific assessment regimens designed to align more closely with local conditions and cultures. One such regimen is provided by the Voluntary System of Accountability (VSA), and this article explores how the University of Delaware (UD) implemented the Association of American Colleges and Universities' VALUE rubrics as part of its ongoing initiative to create a campus-wide culture of assessment. As UD implemented the VALUE system for assessment, both the value of local adaptation and the limitations of adopting a national testing regimen have become increasingly apparent.

In 2007, the VSA was initiated by public four-year universities and two higher education associations - the Association of Public and Land-grant Universities and the American Association of State Colleges and Universities - to supply comparable information on the undergraduate student experience through the use of three standardized tests to assess core general education (Gen Ed) skills of critical thinking, reading, writing, and mathematics (quantitative reasoning). The argument for standardized tests is that they provide the best way to assess student learning through universal, unbiased measures of student and school performance. Critics claim, however, that these tests fail to accurately assess students' knowledge and school performance, may introduce a bias against underserved populations, and are not reliable measures of institutional quality (Beaupré, Nathan, and Kaplan 2002). The VSA advocates the use of one of three standardized tests - UD chose to use the Educational Proficiency Profile (EPP). After an initial experience with this test, UD decided to work with the VALUE project and its defined core learning goals and modifiable assessment rubrics. Choosing this path, institution leaders believed, would allow UD to examine student learning with greater sensitivity to local conditions and obtain more useful information on the quality and type(s) of student learning outcomes that are most challenging to evaluate via standardized tests.

VALIDITY OF EPP AND VALUE RESULTS

UD's Office of Educational Assessment (OEA) administered the abbreviated EPP to 196 first-year students and 121 seniors in fall 2010. ETS subsequently provided results in the form of scaled scores on the four core skill areas (reading, critical thinking, writing, and mathematics) as well as an aggregated individual score for each test taker. ETS stresses that institutions should not focus on individual results on the abbreviated EPP but should instead concentrate on the aggregate scores. UD administered the abbreviated version of the EPP, however, while ETS provides comparative mean scores only for the long version of its Proficiency Profile Total Test. Because ETS did not provide mean comparative scores for the actual test UD students had taken, there was no precise basis for comparison, notwithstanding ETS's claim that the abbreviated test provides equivalent results. Thus, the scores obtained from ETS could offer little guidance in understanding how UD students performed in comparison with test populations at other universities. Additionally, the smaller number of questions in the abbreviated EPP raised concerns about content validity compared with results from the longer version; there were, for example, only nine questions designed to examine student proficiency in quantitative reasoning.

In terms of the local testing environment, the methods required to obtain first-year students' and seniors' participation in this Gen Ed study may well have affected the validity of the results obtained with the EPP. …