Value-Added Assessment Using the Major Field Test in Business
Rook, Sarah P., Tanyel, Faruk I., Academy of Educational Leadership Journal
This research uses the Educational Testing Service (ETS) Major Field Test in Business (MFTB) to assess the value-added by an AACSB-International accredited undergraduate business program. The matched-pair design shows that on average, MFTB scores rise approximately 14 points, student score percentiles improve 31 points, and student z-scores increase one standard deviation. Upper level business core grade point average is a slightly better predictor of MFTB improvement than the upper level business grade point average. There is some evidence that the business concentration affects the change in MFTB score. Difficulties of value-added assessment are highlighted.
The Major Field Test in Business (MFTB) is sponsored by Higher Education Assessment of the Educational Testing Service (ETS) and covers foundation course content taught in a typical undergraduate business program. The MFTB multiple-choice questions cover topics in eight areas: accounting, economics, management, quantitative business analysis and information systems, finance, marketing, legal and social environment, and international issues. The ETS reports (Judy Bennett, personal communication, August 17, 2007) that 143,349 seniors at 553 different schools have taken the MFTB from 2003 to 2006. ETS (2000) does not claim that its sample proportionally represents the various types of higher education institutions. Approximately thirty percent of the participating schools are accredited by AACSB International - The Association to Advance Collegiate Schools of Business. Approximately thirty-five percent of all U.S. AACSB accredited schools use the MFTB. Rotondo (2005) presents a thorough summary of advantages and disadvantages of the use of the MFTB for assessment.
At our university, the MFTB has been used as an assessment tool in the fall and spring semesters since 1998. The MFTB is included for assessment purposes because it allows comparison of student performance at the university with that of students at other universities, as well as comparison of our students over time. Other assessment methods are used to measure specific learning goals.
Assessment can be performance based (Does the student meet program standards?) or value-added based (Has the student gained knowledge?). Value-added assessment demonstrates the improvement in learning due to the student's program of study and generally utilizes a pre/post design to measure the change in student learning. This study focuses on the change in MFTB scores for students who took the exam as sophomores and then again as seniors. Thus, this is a matched-pair value-added study based on 68 students.
In the fall of 2000, as an additional form of assessment of the business program, we began administering the MFTB to a small sample of sophomores. Several references cited below indicate there are factors other than curriculum, such as student ability, experience, and gender, which affect MFTB scores. We observed a few sophomores (who had not completed the business curriculum) scoring better than some seniors (who had completed the business curriculum). To reduce the influence of individual student characteristics and help isolate the impact of the curriculum on test performance, we matched the senior score with the sophomore score for each student to create a change in score variable (matched-pair or dependent sample design). The intent was to measure the MFTB score improvement for each student and attribute the score improvement to the curriculum. By looking at the change in score for each student, we reduce the impact of student variability and are better able to isolate the impact of the curriculum. Of course, we expected student scores to improve, reflecting the value-added nature of our business curriculum. We further expected the performance improvement to be related to business program performance as measured by business course grades.
There are numerous examples in the literature of business schools using the MFTB for assessment purposes. Several studies look at possible factors affecting MFTB performance, such as grade point average (GPA), specific course grade point average, SAT scores, age, gender, race, and student major. Bycio and Allen (2007) find significant correlations between MFTB scores and business GPA, university GPA, SAT-Verbal, SAT-Math, and student motivation. Bagamery, Lasik, and Nixon (2005) report gender, whether the student took the SAT, and grades as significant predictors of MFTB performance. Wathen and Nale (2003) report relatively high correlations between MFTB performance and courses in accounting, statistics, macroeconomics, finance, business law, operations, and strategy. Rook, Lancaster, Tanyel, and Word (2002) found MFTB performance was significantly related to SAT, college GPA, and gender. Novin, Arjomand, and Finlay (2004) found the highest course-specific MFTB correlations with Global Business, Strategic Management, and Principles of Marketing. They also reported high correlations with SAT-Verbal scores and college GPA. These examples of MFTB research focus on the factors which may predict MFTB performance for a variety of students.
The value-added assessment approach involves measuring student improvement over time. Mahoney (2004) argues that an accountability system for schools should be a value-added system which is based on what a student learns in one year. Thus a school system focuses on improvement, not level of performance. Conversely, Miller (1999) argues that universities should be concerned with the level of knowledge achieved, not the knowledge the university adds to the individual. Miller advocates the use of course grades, which only measure knowledge level at the end of the course. Osigweh (1985) describes an elaborate value-added education model for the business school at Northeast Missouri State University in the 1970s. Students were tested as freshmen, sophomores, and seniors using national tests. (The MFTB was not available at that time.) Institutional survey data were collected from students, alumni, and employers. The information was used to help students improve general knowledge, specific fields of knowledge, and personal development (such as self-confidence and self-image). Greene and Zimmer (2003) present findings on the self-assessed improvement of a student's global perspective and internet research skills using a global internet research assignment in an introductory marketing course. Students answered a questionnaire to estimate the degree of value added in seven areas, such as familiarity with electronic information sources or knowledge of how to conduct business in a foreign market. Students also reported increased interest in further study or a possible career in international business. Jonas and Weimer (1999) report on a two-year assessment research project involving six different colleges and universities. Using the MFTB, the research compared performance of traditional and nontraditional (accelerated) undergraduate business students.
In the study, the schools collected matched-pair data (same student completing the MFTB twice - pre-test and post-test) for 173 students. Traditional and nontraditional students in all colleges demonstrated a significantly higher score on the post-test compared to the pre-test. The average for the pre-test score was 145 and 155 for the post-test score. Furthermore, Jonas and Weimer report a significant positive correlation between post-test scores and overall GPA. Jonas, Weimer, and Herzer (2001) compare the MFTB performance of traditional students (n = 24) and nontraditional students (n = 36) using a pre- and post-assessment design. Both traditional and nontraditional student scores improve, but the mean for the nontraditional students increases more (15.6 points) than the mean for the traditional students (5.9 points).
In this matched-pair study, we measure the change in MFTB scores between the sophomore testing and the senior testing experiences. It is expected that in general, the measured change would reflect an improvement (a positive change) and demonstrate the value-added by the business program. Thus we test the null hypothesis that the mean score change is less than or equal to zero using a t-test for a dependent sample (matched-pair t test). Measuring this change is complicated by different forms of the MFTB used during the testing time span. Therefore, three different measures of student score performance are included in this study: the change in the raw score (RAWSCOREΔ), the change in the percentile score (PERCENTILEΔ), and the change in the z-score (Z-SCOREΔ). For each of the three measures, we expect the mean difference to be significantly different from zero.
The form or version of the test available from ETS changed during the time span of the study. There were three different versions of the MFTB taken by students in our sample. Thus not all students had the same form of the test for the sophomore and senior tests. Half of the students did take the same version of the test for both tests. For those students, a comparison of raw scores (RAWSCOREΔ = senior raw score - sophomore raw score) on the two exams would be appropriate. For the other half of the students, a comparison of raw scores would not be as meaningful because the means and standard deviations of the different tests are not the same. Thus two additional forms of comparison were developed: the percentile change and the z-score change. The percentile change (PERCENTILEΔ) is the measure of the percentile score improvement (senior percentile score - sophomore percentile score). The percentile is based on data provided by ETS. The z-score change (Z-SCOREΔ) is the measure of the z-score improvement (senior z-score - sophomore z-score). The z-score is the difference between the student's raw score and the mean raw score divided by the standard deviation. The test means and standard deviations are available from ETS (2000, 2004, and Judy Bennett, personal communication, August 17, 2007).
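The three change measures can be sketched as follows. The form names and their means and standard deviations here are hypothetical placeholders, not the actual ETS values:

```python
# Hypothetical form statistics; the real means/SDs come from ETS data.
FORM_STATS = {"form_A": (150.0, 15.0), "form_B": (152.0, 14.0)}

def score_changes(soph, senior):
    """Return (RAWSCOREΔ, PERCENTILEΔ, Z-SCOREΔ) for one matched pair.

    soph/senior: dicts with 'form', 'raw', and 'percentile' keys.
    """
    raw_delta = senior["raw"] - soph["raw"]
    pct_delta = senior["percentile"] - soph["percentile"]

    def z(rec):
        # z-score: (raw score - form mean) / form standard deviation
        mean, sd = FORM_STATS[rec["form"]]
        return (rec["raw"] - mean) / sd

    return raw_delta, pct_delta, z(senior) - z(soph)

# A student tested on two different forms: the raw-score change is less
# meaningful here, so the percentile and z-score changes carry the comparison.
raw_d, pct_d, z_d = score_changes(
    {"form": "form_A", "raw": 148, "percentile": 40},
    {"form": "form_B", "raw": 162, "percentile": 71})
```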
Once it is established that there is a significant difference in performance, we are interested in trying to explain the magnitude of the difference. Many variables that might be expected to affect test performance (such as SAT) are held constant because we are measuring the change in score for the same person (matched-pair). Students' scores are expected to be affected by the education experience between the two tests. Sophomores taking the test would have completed a few 200-level business core courses, but no upper-level business courses. Seniors taking the exam would have completed the required lower-level and upper-level business core courses and would have taken a number of business courses in their chosen concentration (accounting, economics/finance, general business, management, or marketing). One would expect the MFTB improvement to be related to the student's performance in business courses, especially upper-level business courses completed after taking the exam as a sophomore. Two variables are used to measure the upper-level business course performance: the ULCOREGPA and the ULGPA. The ULCOREGPA is the grade point average for the six upper-level business core courses which all business majors complete (Legal Environment of Business, Principles of Marketing, Business Finance, Organizational Management and Behavior, Operations Management, and Business Policy). The ULGPA is the grade point average for all upper-level business courses, which includes the core and concentration courses. We use regression analysis to look at the relationship between score improvement and business GPA, with the null hypothesis that the regression coefficient is less than or equal to zero.
The concentration courses differ among the students, so there may be a relationship between test improvement and concentration. We use analysis of variance to explore this relationship, with the null hypothesis that all concentrations show the same score improvement.
To summarize, it is anticipated that the change in test performance, on average, will improve and that the improvement is related to upper level business course performance as measured by GPA. Business concentration choice may also influence test performance.
DESCRIPTION OF DATA AND METHOD
Each fall and spring semester the test is administered in the required senior-level Business Policy class. The test score is counted as 20 percent of the student's course grade. All seniors taking the MFTB have completed all of the 200-level business core courses (two semesters of accounting, two semesters of economics, two semesters of statistics, and one semester of business information systems) and all of the 300-level business core courses (Legal Environment of Business, Principles of Marketing, Business Finance, Organizational Management and Behavior, Operations Management). From fall 2000 through spring 2005, 154 students in 200-level economics or statistics courses took the MFTB. To entice sophomore students to take a two-hour exam not scheduled during the regular class time, an incentive of a guaranteed 100 on 5% of the course grade was offered. Students volunteered to take the test and then were screened to ensure that the student was a business major and had not completed any 300-level business courses. The students were currently enrolled in at least one 200-level business core course and often had completed some of the other 200-level business core courses. Therefore, the students had been exposed to some of the business concepts tested on the MFTB. Due to the extreme difficulty of obtaining a random sample, these results are based on a convenience sample with the inherent problems of non-random sampling.
There are 68 students included in this matched-pair study. The average age of the students when they took the senior exam is 23.7 years. There are 21 males, 47 females, 30 minority students (20 African-Americans, 8 Asians, and 2 Hispanics), 16 accounting students, 7 economics/finance students, 19 general business students, 8 management students, and 18 marketing students. Table 1 shows the descriptive statistics for each variable used in this study.
On average, the raw scores improved about 14 points. The raw score for four students actually decreased (-1, -1, -2, -4) and for four students the raw score improved 29 or 30 points. On average, this score improvement caused the student's percentile to improve about 31 points. For three students the percentile decreased (-1, -3, -4) and for two students the percentile improved 72 points. On average, the z-score improved about one point; that is, students raised their scores by one standard deviation. The z-score for three students dropped (-0.15, -0.07, -0.02) while for three students the z-score improved at least 2.24 standard deviations. There are high correlation coefficients among the three measures of performance improvement.
To test the hypothesis that test performance improves from the sophomore testing to the senior testing, we used a matched-pair t-test. The null hypothesis is that the difference is less than or equal to zero (H^sub 0^: mean difference ≤ 0). We tested the performance improvement using each of the measures of test improvement.
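A minimal sketch of this one-sided matched-pair t-test, using hypothetical pre/post scores rather than the study's data:

```python
import math

def paired_t(pre, post):
    """One-sided matched-pair t statistic for H0: mean(post - pre) <= 0."""
    diffs = [b - a for a, b in zip(pre, post)]
    n = len(diffs)
    mean = sum(diffs) / n
    var = sum((d - mean) ** 2 for d in diffs) / (n - 1)  # sample variance
    return mean / math.sqrt(var / n), n - 1  # (t statistic, degrees of freedom)

# Hypothetical sophomore (pre) and senior (post) raw scores for five students
pre  = [140, 138, 152, 145, 150]
post = [155, 150, 160, 158, 166]
t_stat, df = paired_t(pre, post)
```

A large positive t statistic relative to the t distribution with n - 1 degrees of freedom would reject the null hypothesis, as in the study's results.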
For these 68 students, the average upper level core GPA is 2.991 (from 2.04 to 4.0) and the average upper level business GPA is 2.995 (2.05 to 4.0). There is a high correlation between the two GPA measures.
To explore the test score improvement, ordinary least squares is used to estimate the following regression models:
RAWSCOREΔ = β^sub 0^ + β^sub 1^ ULCOREGPA + ∈^sub i^
RAWSCOREΔ = β^sub 0^ + β^sub 1^ ULGPA + ∈^sub i^
PERCENTILEΔ = β^sub 0^ + β^sub 1^ ULCOREGPA + ∈^sub i^
PERCENTILEΔ = β^sub 0^ + β^sub 1^ ULGPA + ∈^sub i^
Z-SCOREΔ = β^sub 0^ + β^sub 1^ ULCOREGPA + ∈^sub i^ and
Z-SCOREΔ = β^sub 0^ + β^sub 1^ ULGPA + ∈^sub i^.
We expect each grade point average coefficient (β^sub 1^) to be positive (H^sub 0^: β^sub 1^ ≤ 0).
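Because each regression above has a single predictor, the least-squares estimates have a simple closed form. A sketch with hypothetical ULCOREGPA and RAWSCOREΔ values (not the study's data):

```python
def ols_fit(x, y):
    """Least-squares fit of y = b0 + b1*x; returns (b0, b1)."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    # Slope: covariance of x and y divided by variance of x
    b1 = (sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
          / sum((xi - mx) ** 2 for xi in x))
    b0 = my - b1 * mx  # intercept passes through the means
    return b0, b1

# Hypothetical upper-level core GPAs and raw-score changes for six students
gpa   = [2.1, 2.5, 2.9, 3.2, 3.6, 4.0]
delta = [8, 10, 13, 15, 18, 20]
b0, b1 = ols_fit(gpa, delta)  # b1 > 0 would be evidence against H0
```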
To test the relevance of business concentration to MFTB performance improvement, a single factor analysis of variance (ANOVA) with the concentrations as the treatments is used. The null hypothesis is there is no difference in test performance due to concentration.
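The single-factor ANOVA reduces to comparing between-group and within-group variation. A sketch with hypothetical Z-SCOREΔ values for three illustrative concentrations:

```python
def one_way_anova(groups):
    """F statistic for a single-factor ANOVA across the given treatment groups."""
    k = len(groups)                               # number of treatments
    n = sum(len(g) for g in groups)               # total observations
    grand_mean = sum(sum(g) for g in groups) / n
    means = [sum(g) / len(g) for g in groups]
    # Between-group and within-group sums of squares
    ss_between = sum(len(g) * (m - grand_mean) ** 2
                     for g, m in zip(groups, means))
    ss_within = sum(sum((x - m) ** 2 for x in g)
                    for g, m in zip(groups, means))
    return (ss_between / (k - 1)) / (ss_within / (n - k))

# Hypothetical Z-SCOREΔ values for three of the concentrations
f_stat = one_way_anova([
    [0.8, 1.0, 1.2],   # e.g., accounting
    [1.4, 1.5, 1.6],   # e.g., management
    [0.9, 1.1, 1.0],   # e.g., marketing
])
```

An F statistic exceeding the critical value for (k - 1, n - k) degrees of freedom would reject the null hypothesis of equal concentration means.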
A t-test for matched-pair sampling is used to measure the significance of the change in test scores. The t statistic for the RAWSCOREΔ mean is 13.14; for the PERCENTILEΔ mean, 12.29; and for the Z-SCOREΔ mean, 13.80. Each of the t-tests is significant at less than the 0.001 level of significance. For each of the measures of test improvement, the average difference between sophomores and seniors is positive and thus shows a significant improvement in performance.
The regression results for each of the three measures of test improvement are included in Table 2. The R-square values range from 0.111 to 0.158; therefore, none of the models explain a substantial portion of the variation in the dependent variable. For each of the regression equations, the upper level core GPA (ULCOREGPA) coefficient is significant at the 0.001 level of significance and the upper level GPA (ULGPA) is significant at the 0.01 level. Thus the upper level core GPA yields better results than the upper level business GPA. This result is consistent with the claim that the MFTB tests the typical business core content.
Using the upper level core GPA models to predict test improvement, a student whose upper level business core grade point average is one point higher than another student's would be expected to improve test performance by about 6 more points, 15 more percentile points, or about one-half of a standard deviation more than the student with the lower business core GPA.
The analysis of variance test of the equality of the means of Z-SCOREΔ for each of the five concentrations is significant at the 0.05 level of significance (H^sub 0^: Z-SCOREΔ mean of each concentration is equal). The F test statistic is 2.66. We reject the null hypothesis that the concentrations show the same test score improvement. Based on this sample, upper-level concentration courses may influence the MFTB improvement.
We found significant improvement in MFTB performance from the testing experience as a sophomore to the testing experience as a senior, using each of the three measures of test score change. Jonas and Weimer (1999) reported an average 10-point raw score gain compared to the 14-point gain in this sample. We found that business core GPA was a slightly better predictor of test improvement than upper-level business GPA; however, both measures were highly significant. Several earlier studies report a linkage between grade point average and test performance. With this relatively small sample, there appears to be a relationship between concentration (and thus the upper level business courses taken) and the MFTB performance improvement.
Value-added assessment measurement is conceptually appealing (What did the student learn?), but difficult to execute, as shown by this experiment. The first difficulty was that the form of the MFTB changed during the study period. Thus, the raw score change was not an entirely appropriate measure of the improvement. This forced the creation of two alternative measures of test score improvement, percentile change and z-score change. While seniors are required to take the test as part of the business policy course, securing the serious cooperation of sophomores to take a two-hour test is another difficulty encountered. In addition, many of the sophomores tested did not retake the exam as a senior. This adds to the expense of the study because each test costs approximately 25 dollars. The length of time required for the study is also problematic: fall 2000 to fall 2006. Curriculum and instruction are not static for this length of time, and therefore it would be difficult to know what particular change contributed to the improvement. For value-added assessment using the MFTB to be a viable method for business schools, these difficulties would need to be addressed.
Continued research in this area should include a larger sample where the sophomore participants could be randomly selected. Ideally, students would be initially tested before completing any business courses so the contribution of 200-level core courses could be measured. Furthermore, the investigation of the contribution of particular courses to the MFTB improvement could be an interesting line of research. However, due to the expense, time, and sampling problems of this type of study, a large scale MFTB value-added experiment is unlikely to occur.
Bycio, P. & J.S. Allen (2007). Factors Associated With Performance on the Educational Testing Service (ETS) Major Field Achievement Test in Business (MFT-B). Journal of Education for Business, 82 (4), 196-201.
Bagamery, B.D., J.J. Lasik, & D.R. Nixon (2005). Determinants of Success on the ETS Business Major Field Exam for Students in an Undergraduate Multisite Regional University Business Program. Journal of Education for Business, 81 (1), 55-63.
Educational Testing Service. (2000). Major Field Tests - Comparative Data Guide and Descriptions of Reports includes Academic Year 1999-2000 Data. Princeton, NJ: Author.
Educational Testing Service. (2004). Comparative Data 2001-2002 Table 2A. Retrieved May 24, 2004 from http://www.ets.org/hea/mft/compare_data.html.
Greene, C.S. & R. Zimmer (2003). An International Internet Research Assignment - Assessment of Value Added. Journal of Education for Business, 78 (3), 158-163.
Jonas, P.M. & D. Weimer (1999). Non-traditional vs. Traditional Academic Delivery Systems: Comparing ETS Scores for Undergraduate Students in Business Programs, 1996-1999. ERIC Collection of AIR Forum Papers, 4-23.
Jonas, P.M., D. Weimer & K. Herzer (2001). Comparison of Traditional and Nontraditional (Adult Education) Undergraduate Business Programs. Journal of Instructional Psychology, 28 (3), 161-170.
Mahoney, J. W. (2004). Why Add Value in Assessment? School Administrator, 61(11), 16-18.
Miller, M.S. (1999). Classroom Assessment and University Accountability. Journal of Education for Business, (Nov/Dec), 94-98.
Novin, A.M., L.H. Arjomand & N. Finlay (October, 2004) Investigation of Factors Affecting Students Success in ETS Major Field Test for Business. Presented to the Southeastern Chapter of the Institute for Operations Research and the Management Sciences, Myrtle Beach, SC.
Osigweh, C.A.B. (1985). Measuring Performance in a Business School. Delta Pi Epsilon Journal, 28 (2), 130-141.
Rook, S.P., L.M. Lancaster, F.I. Tanyel, & W.R. Word (2002). Relationships Between Student Characteristics and Performance on the Major Field Test in Business. Journal of Business and Training Education, 11 (fall), 41-51.
Rotondo, D.M. (2005). Assessing Business Knowledge. In Kathryn Martell and Thomas Calderon (Eds.), Assessment of Student Learning in Business Schools: Best Practices Each Step of the Way (pp. 82-102). Tallahassee, FL: AIR and AACSB International.
Wathen, S.A. & R.D. Nale (2003). On-Going Experiences and Issues Involved in Curricular Assessment using the ETS Business Major Field Test. Proceedings of the 39th Southeast Institute for Operations Research and the Management Sciences (SEInfORMS), 147-151.
Sarah P. Rook, University of South Carolina Upstate
Faruk I. Tanyel, University of South Carolina Upstate…
Publication information: Article title: Value-Added Assessment Using the Major Field Test in Business. Contributors: Rook, Sarah P. - Author, Tanyel, Faruk I. - Author. Journal title: Academy of Educational Leadership Journal. Volume: 13. Issue: 3 Publication date: September 1, 2009. Page number: 87+. © The DreamCatchers Group, LLC 2008. Provided by ProQuest LLC. All Rights Reserved.