Academic journal article College Student Journal

Standardized Testing for Outcomes Assessment: Reanalysis of the Major Field Test for the MBA (MFT-MBA), with Corrections and Clarifications: (Rejoinder to R. Wright, Standardized Testing for Outcome Assessment: Analysis of the Educational Testing Service MBA Tests)

Article excerpt

Wright (2010) questioned the reliability and validity of the Major Field Test for MBA (MFT-MBA) and made a series of claims against the use of the test as an outcomes assessment for MBA programs. These claims, including an incorrect interpretation of the mean percent correct scores of schools (also called the assessment indicators, or AIs), are summarized and corrected in this paper. The paper ends with a conclusion that the MFT-MBA is a reliable and valid tool that can be used as an outcomes measure by MBA programs and accreditation organizations.

**********

The Major Field Test for the MBA (MFT-MBA) is a test that assesses the mastery of concepts, principles, and knowledge of MBA students nearing completion of their studies. The test is developed by a panel of subject matter experts, including a group of MBA faculty members (ETS, 2010a). The test includes 124 items covering five subject areas and skills that are common to most MBA programs. Schools can add an optional section of 50 locally authored items and administer it together with the standard MFT-MBA.

The MFT-MBA reports several different types of scores for individuals and institutions. For an individual, a scale score is reported. For an institution, the mean scale score of the group is reported, as well as scores on the Assessment Indicators (AI). The AIs measure the performance of the group on questions in five different content areas. They are expressed as the mean percent correct score across all questions in a content area. In addition, the MFT-MBA provides percentile ranks for these individual and institutional scores, based on comparative data from all participating MBA programs. An optional item information report can also be obtained to evaluate the item level performance for a program or cohort. ETS strongly recommends that these scores and comparative information be used in conjunction with other information when making decisions about programs or individuals, and cautions test users against the practice of using a cut score or percentile on the MFT-MBA as a condition for a student's graduation (ETS, 2010b).

In his paper titled, Standardized testing for outcome assessment: Analysis of the educational testing systems MBA tests, Wright (2010) evaluated the MFT-MBA using information from the ETS/MFT website, and concluded that the ETS outcomes assessment methodology may not be optimal. He made a series of incorrect claims about the MFT-MBA, which are summarized, clarified, and corrected in the following section.

Wright (2010) claimed, "Faculty members are not involved in the construction of these tests" (pp. 144-145). The fact is that the MFT-MBA is developed by content experts in the field, including a committee of current business faculty members who determined the test specifications, test questions, and types of scores to be reported (ETS, 2010b). The test is based on a core curriculum identified in a national survey of MBA programs. The industry standards for educational testing (e.g., AERA, APA, & NCME, 1999; ETS, 2002) are strictly adhered to by the program.

Wright believed that the test was inappropriately difficult. Using the 95th percentile of AI-1 (Marketing) scores as an example, he reasoned that although students take the MFT-MBA before completing their MBA programs, only 5% of them could answer 69% or more of the questions correctly. From this he claimed the test cannot be a valid measure of the business knowledge and skills that MBA programs intend to teach.

However, the test is not as difficult as Wright claimed. Wright interpreted the mean percent correct scores of an institutional assessment indicator as if they were the percentiles of individual scores. As mentioned earlier in this paper, the AI is a group-level mean score on a specific content area and can be used to compare schools, but not individual students. In the example Wright used, a school with an AI-1 (Marketing) score of 69% (corresponding to a percentile rank of 95) only means that students in this school answered 69% of the questions correctly, on average, and this school is ranked higher than 95% of the schools in the comparative data group. …
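The distinction drawn above can be made concrete with a small numeric sketch. The code below uses invented numbers (all student scores and comparison-school scores are hypothetical, not ETS data) to show why a school-level AI of 69% at the 95th percentile describes the school's average performance relative to other schools, not the share of individual students scoring 69% or higher.

```python
# Hypothetical illustration of the group-level vs. individual-level
# distinction: an Assessment Indicator (AI) is a school-level mean
# percent correct, and its percentile rank compares schools, not students.
# All numbers below are invented for illustration only.

def ai_score(pct_correct_per_student):
    """School-level AI: mean percent correct across the cohort."""
    return sum(pct_correct_per_student) / len(pct_correct_per_student)

def percentile_rank(score, comparison_scores):
    """Percent of comparison schools scoring below this school."""
    below = sum(1 for s in comparison_scores if s < score)
    return 100.0 * below / len(comparison_scores)

# One school's students answered these percentages of items correctly.
school_students = [72, 65, 70, 69, 69]
school_ai = ai_score(school_students)  # mean percent correct = 69.0

# AI-1 scores of 20 hypothetical comparison schools.
other_schools = [55, 58, 60, 61, 62, 62, 63, 63, 64, 64,
                 65, 65, 66, 66, 67, 67, 68, 68, 68, 70]
rank = percentile_rank(school_ai, other_schools)  # 95.0

# An AI-1 of 69% at the 95th percentile means the school's students
# averaged 69% correct and the school outranked 95% of comparison
# schools -- not that only 5% of individual students scored 69%+.
```

The key point the sketch makes is that the percentile is computed over the distribution of school means, a far narrower distribution than that of individual student scores.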
