Comparison of Traditional and Nontraditional (Adult Education) Undergraduate Business Programs

With the increase in the number of nontraditional academic programs, debate has raged over the quality of these programs as compared to the traditional format. To address this concern, Cardinal Stritch University (CSU) conducted a two-year assessment research project comparing the academic achievement of students in similar traditional and nontraditional (adult education) undergraduate programs in business. The main goal of the research project was to compare and contrast academic achievement through pre- and post-assessment using the Educational Testing Service (ETS) Major Field Achievement Test (MFAT) in business.

Nature of the Problem

Over the past decade the number of nontraditional, adult education programs has increased dramatically in higher education. With this influx of new programs has come a barrage of criticism and questions regarding the academic quality of such programs. Moreover, accreditation organizations are emphasizing assessment as a major component for re-accreditation.

This assessment evidence must demonstrate that significant and favorable learning has occurred between a student's enrollment and graduation, and beyond. One of the major concerns of the University is to provide assessment evidence that satisfies the guidelines set by accrediting agencies. Assessment also benefits the University by supporting continuous improvement. The assessment documentation needs to include outcomes information (for example, test scores) for both traditional and nontraditional undergraduate students.

The "nontraditional student" is a student who attends CSU's College of Business and Management (CBM) accelerated program developed for the working adult. The delivery system is accelerated; courses are five to ten weeks in length. The program is designed around the cohort model: fourteen to twenty-two students make up a cohort group. Students attend class as a group one night a week for four hours, taking one course at a time and following a preset program schedule. The curriculum and instruction are designed specifically for the adult learner. The "traditional student" is a student attending in the traditional fourteen-week format, meeting three to five times per week for fifty to ninety minutes per class. These students attend class independently, not as a cohort group.

Previous Research Studies

Scott and Conrad (1991, pp. 6-66) examined previous research comparing traditional and intensive course formats, ranging from nontraditional courses developed during World War II to the present. Based on the research they reviewed, no significant differences in student learning over time were found to favor one learning format over another. Consequently, Scott and Conrad concluded that, "based on the evidence, intensive courses seem to be effective alternatives to traditional-length classes regardless of format, degree of intensity, discipline or field of study--although the research seems to suggest that certain fields of study may benefit more than others" (p. 67).

Lord (1997) compared constructivist teaching to traditional teaching in a year-long study. Lord explains that constructivist educators believe that learners assess new knowledge by associating it with prior experiences, and favor student-centered group activities and the presentation of only necessary content in a lesson. His study compared two populations of General Biology students taught by the same instructor using two different teaching methods. The same unit exams were given to each group. The constructivist group scored significantly higher on each of the unit exams than their traditional counterparts. In conclusion, Lord explained the results by stating that the constructivist group was able to discuss and formulate their own understandings, which helped them integrate and actually apply the knowledge, whereas the traditional learning experience relied strictly on memorization of content (pp. 197-215).

Gleason's (1986, as cited in Scott & Conrad, 1991) study compared three macroeconomics courses taught in either a 3- or 5-week format with four sections taught during a traditional semester. The nationally normed Revised Test of Understanding in College Economics (TUCE) was administered as a pre-test to measure students' aptitude and as a post-test to measure students' achievement. Findings indicated that students in the 3-week macroeconomics course scored higher on the post-test than students in the traditional-length semester course, although she noted that these two groups were not statistically equivalent. Gleason found no difference between the 5-week and traditional semester students. She concluded that the length of the learning period had no impact on achievement in the economics course and that intensive courses were as effective as traditional semester-length courses taken concurrently with other subjects.

Miller and Groccia (1997) also compared teaching formats: a traditional lecture-based biology course versus a cooperatively taught biology course. The traditional course was team-taught by two instructors in a traditional teaching format. The cooperative course was taught by Miller and encouraged active engagement: students discussed what they were learning with others, wrote about what they were learning, and related this new learning to past experiences in order to apply it to their daily lives. The Watson-Glaser Critical Thinking Appraisal was administered to both groups. The cooperative class scored higher than the traditional class on section tests assessing inference, recognition of assumptions, deduction, and interpretation; the traditional class scored higher on the test of evaluation of arguments. None of these differences were significant. On the post-Biology I assessment of factual knowledge, both groups scored identically. On the post-Biology II assessment, the cooperative class scored significantly higher than the traditional class. Miller and Groccia reported that students in the cooperative class rated their learning of effective teamwork skills higher than those in the traditional class. They concluded that the use of cooperative learning strategies would support the professional scientific world's need for teamwork skills (pp. 253-273).

Scott (1994) completed a comparison study of students' learning experiences in intensive and semester-length courses and of the attributes of high-quality intensive and semester course learning experiences. In her study, faculty and students felt that intensive courses resulted in a continuous learning experience that allowed students to synthesize and connect ideas better. Students reported that they were able to plan their schedules better and concentrate exclusively on a small number of classes. Factors identified in this study that may shape the quality and power of an intensive course experience include the instructor's skills, the length of the intensive course, the students' intellectual development and age, students' other responsibilities, the course subject, the time of year, and students' related classroom experience (pp. 1-42).

The previous research studies indicate that, regardless of the delivery method experienced, students' learning is usually equivalent; some studies indicated that learning was greater in an accelerated format. Instructional delivery systems have both positive and negative effects on students and the learning process. Despite academe's desire to remain unchanged, students have signaled through enrollment trends what their needs and society's needs are regarding educational scheduling and instruction. Based on students' needs and, most importantly, on higher education's responsibility, institutions must still continue to measure student learning.


Higher education institutions are being asked by leaders in the academic community, legislatures, parents, potential employers, and the wider public to provide evidence that students have learned at their institutions. An overriding theme in higher education is assessment of student learning (Ewell & Lisensky, 1988, p. 13). Borden and Bottrill (1994) explain that outcomes assessment of students began from external pressures for documentation of accountability. Therefore, methods of assessment concentrated on the outcomes and outputs of the educational process (p. 16). Brookfield (1986) describes assessment as "a value-free ascertainment of the extent to which objectives determined at the outset of a program have been attained by participants" (p. 264).

Seymour (1992) contends that assessment, despite its benefits, continues to be viewed with deep suspicion by many on campus because of the threat of accountability they may feel is implied in assessment data. Measurement is improvement feedback; according to Seymour, measurement is the basis of a knowledge-driven organization (pp. 19-20). Accountability and assessment may be viewed differently by higher education in the future. Assessment may become the major tool for gaining new institutional resources. By documenting the worth of higher education, assessment evidence may help institutions prove to others that they are helping society and are valuable (Erwin, 1991, p. 161).

Ewell and Lisensky (1988) explain assessment as it pertains to evidence demonstrating curriculum distinctiveness. "Assessment provides a critical opportunity to raise such questions of overall curricular coherence. Even if concrete evidence of the attainment of cross-cutting objectives is fragmentary, it can at least be determined if the design and delivery of individual courses in fact match more curricular intent" (p. 46).

While both external and internal pressures and incentives for assessment should be acknowledged, the main purpose of this activity should be to enhance institutional and program effectiveness through improved learning for the student (Johnson, McCormick, Prus, & Rogers, 1993, p. 164). A good assessment, according to Garfield (1994, p. 3), goes beyond student grading and testing. Assessment becomes a component of instruction. Multiple methods of assessment yield complementary sources of student learning information. A more complete analysis of what happened in a particular course is provided to both the student and the instructor.

Evaluation of Assessment Methods and Results

Pretest-posttest designs are called objective-based studies. They assess the amount of learning students have gained from completing instruction. The limitation of these designs is that they do not provide enough useful information for making curriculum or program improvements (Kemp, Morrison, & Ross, 1998, p. 260). Ewell and Lisensky (1988, p. 69) mention that commercially available standardized examinations may be used as evidence of curriculum effectiveness, but care must be taken in evaluating gains, since high initial scores and compound measurement error may skew them.

According to Caine and Caine (1997), teaching for pre-specified information may result in the student learning only what is required. Testing and grading limit questioning and creative thinking by the student; testing does not encourage genuine variance in questioning and thinking from learners. Students' learning is established and judged by an outside, authoritative source (p. 39). McKeachie (1994) describes several methodological problems associated with research studies comparing different teaching methods. He explains the "Hawthorne effect," whereby one group of students knows that their scores will be compared with those of another group experiencing a routine teaching method; the emotional reaction alone can result in different scores. A second problem explained by McKeachie is finding a suitable control group, since instructors' personalities and skills influence students' outcomes. Other problems included biased sampling, controlling condition factors, and differences in the statistical methods used to analyze results (pp. 340-342).

Banta (1997) concedes that assessment tools and methods have serious limitations. No single tool is perfectly valid and reliable, and most are seriously flawed when it comes to measuring concepts. However, schools should not stop assessing because a given method is imperfect. Schools should continually improve their assessment tools while using a multiplicity of measures and evaluating the commonalities they yield.

"In a simplistic manner, educational research may be defined as a systematic approach to (a) identifying relationships of variables representing concepts (constructs) and/or (b) determining differences between or among groups in their standing on one or more variables of interest" (Isaac & Michael, 1995, p. 2). Angelo and Cross (1993) state that classroom research is expected to advance knowledge about learning. A good classroom assessor should be able to explain whether a certain teaching technique works, and to what degree, and for which students. What classroom assessment does not explain to us is why it works. The overall goal of research is that practitioners will use the results to improve practice (pp. 384-385).

Methodology and Procedures

This inferential study was conducted using several main procedures. First, the researchers conducted a literature review to determine what research was available that addressed delivery systems and assessment. The literature reviewed and conversations with faculty who taught for the CBM supported the need for this research study. Second, the research design for this study is quasi-experimental ex post facto because it utilized existing data collected from business students from September 1996 to December 1998.

The recently collated MFAT assessment data were utilized in a joint assessment project CSU conducted with eleven other colleges and universities associated with the Consortium for the Advancement of Adults in Higher Education (CAAHE). The project was initiated because accreditation criteria require institutions to verify student learning outcomes and to improve continually. The MFAT was selected by the group because of its national reputation, as well as its validity and reliability. The researchers utilized the MFAT results from CSU students only.

The MFAT was administered throughout the 1996-1997 academic year as a pre-assessment instrument to both traditional and nontraditional undergraduate business students. These business students were all at the beginning of their major business course work. The daytime students were administered the pre-assessment during their orientation session. The evening students were administered the pre-assessment on their first night of their first business major course. The post-assessment instrument was administered at the conclusion of the 1998-1999 school year to daytime students during their last days of their last business major course. The evening students were administered the post-assessment on their last evening session of their business major coursework.

Description of Population and Sample

The population included in this study was all the business administration students enrolled in the Fall semester of 1996. This included 289 students: 74 traditional full-time students and 215 nontraditional full-time students.

The sample was made up of CSU's traditional and nontraditional undergraduate business administration students who completed their major courses in December, 1998. This included a total of 60 students, 24 traditional students and 36 nontraditional students. The MFAT was administered during the business students' orientation classes and at the end of the students' last business major class session.

Random assignment procedures could not be used for this research treatment because the students were members of intact groups: students majoring in business attending day school beginning their major courses, and evening students already assigned to intact groups beginning their business major coursework as a group. According to Borg and Gall (1989, p. 670), "it is possible to design an experiment in which the limitations of nonrandom assignment are partially or wholly overcome. Experimental designs of this type have been designated `quasi experiments' by Campbell and Stanley to distinguish them from `true' experiments, that is, experiments having random assignment."

During 1996-1997, CSU administered the ETS MFAT at student orientations as a pre-assessment instrument to both traditional and nontraditional undergraduate business administration students. At the conclusion of these same students' business coursework (December 1998), the ETS MFAT was administered again as a post-assessment instrument. The delivery system for the traditional business students is daytime, traditional semester-length coursework instructed by full-time CSU faculty. The delivery system for the nontraditional business students is evening, accelerated, computer-enhanced coursework in cohort learning groups, instructed by adjunct CSU faculty.

Data Analysis

The data presented are interval. The pre-test and post-test total scores for each individual student from the two sampling groups are used. The pre-test scores were used to confirm that both student groups were of equal ability. Students' grade point averages are presented for informational purposes only; they were not used in the statistical comparison of student groups because grades do not truly measure learning outcomes. The null hypothesis is that there would be no statistically significant difference at the .05 level in the MFAT scores obtained from traditional and nontraditional undergraduate business students at CSU. The alternative hypothesis is that there would be a statistically significant difference at the .05 level in the MFAT scores obtained from traditional and nontraditional undergraduate business students at CSU. The region of rejection was set at alpha = .05, two-tailed, non-directional.

Statistical Test

A t-test for independent means analysis was used to analyze the data. The t-test is "the most common statistical procedure for determining the level of significance when two means are compared" (McMillan & Schumacher, 1993, p. 345). Borg and Gall (1989, p. 549) state that the t-test for independent means is used for determining the significance of differences between sample means when the two groups analyzed are unrelated to each other.


This study was designed to investigate the relationship between the independent variable (the delivery system experienced by CSU students) and the dependent variable (the students' post-assessment ETS MFAT scores). The research hypothesis to be tested was that there would be a significant difference between the traditional and nontraditional undergraduate business students' scores on the MFAT at CSU. Data were collected on a total of 60 students. Minitab software was used for statistical analysis.

The null hypothesis for this research study was that there would be no statistically significant difference at the .05 level in the MFAT scores obtained from traditional and nontraditional undergraduate business students at CSU. Post-test MFAT scores were analyzed using the t-test for independent means because the groups were independent of each other and given the size of the sample (24 traditional and 36 nontraditional students). A two-tailed test of significance was used to detect differences between the two means in either direction. Pre-test scores were not analyzed using the t-test because the means and standard deviations were very similar (as shown in Table 1).

Table 1
Totals for Pre-assessment ETS MFAT and GPA

           Traditional   Nontraditional   National
           Students      Students         Mean

Mean/GPA   143.1/2.7     143.3/3.0        154.8
Std Dev      9.8          11.5             13.8
Number      24            36              10,830

Descriptive statistics for the pre-test and post-test scores were calculated and are presented in Table 1 and Table 2. Traditional and nontraditional students' mean scores on the pre-assessment were similar: 143.1 for the traditional students compared to 143.3 for the nontraditional students. The national mean according to ETS for Spring 1998 was 154.8. Standard deviations were 9.8 for the traditional business students and 11.5 for the nontraditional business students; both were lower than the national standard deviation of 13.8. (Note that the national mean and standard deviation are for the post-assessment only.) Traditional students' GPA was 2.7 versus 3.0 for nontraditional students at the beginning of their business coursework. On the post-assessment, nontraditional students scored higher than the traditional students (158.9 vs. 149.0); the CSU nontraditional students also scored higher than the national mean of 154.8. Standard deviations were 9.5 for the nontraditional students and 11.3 for the traditional students, both again less than the national standard deviation of 13.8. Nontraditional students' GPA was 3.6 as compared to traditional students' GPA of 3.5 at the conclusion of their coursework.

Table 2
Totals for Post-assessment ETS MFAT and GPA

           Traditional   Nontraditional   National
           Students      Students         Mean

Mean/GPA   149.0/3.5     158.9/3.6        154.8
Std Dev     11.3           9.5             13.8
Number      24            36              10,830
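As a quick back-of-the-envelope check (not part of the published analysis), the Table 1 summary statistics can be plugged into the standard formula for a t statistic comparing two independent means, without assuming equal variances. The result confirms the groups' initial equivalence:

```python
import math

# Summary statistics from Table 1 (pre-assessment ETS MFAT).
n1, m1, s1 = 24, 143.1, 9.8    # traditional students
n2, m2, s2 = 36, 143.3, 11.5   # nontraditional students

# t statistic for the difference between two independent means,
# without assuming equal variances.
se = math.sqrt(s1**2 / n1 + s2**2 / n2)
t = (m1 - m2) / se

print(round(t, 2))  # → -0.07
```

A |t| of about 0.07 is far below any conventional two-tailed critical value (roughly 2.0 at the .05 level), consistent with the claim that the two groups started at essentially equal ability.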

Table 3 shows the results of the t-test analysis for the post-assessment scores. The t-test for independent means yielded a t of -3.53. Since the p value for this test is .0010, the null hypothesis can be rejected at the 5% level of significance. This test provides significant evidence that the nontraditional students scored higher than the traditional business students.

Table 3
Two Sample T-Test and Confidence Interval

Students         Number   Mean    StDeviation   Se Mean

Traditional          24   149.0   11.3          2.3
Nontraditional       36   158.9    9.5          1.6

95% CI for mu Traditional Students - mu Nontraditional Students
(-15.5, -4.2) T-Test mu Traditional Students = mu Nontraditional
Students (vs not =): T = -3.53 P = 0.0010 DF = 43
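The reported degrees of freedom (43) suggest that Minitab applied the unequal-variance (Welch) form of the test rather than the pooled-variance form; under that assumption, the Table 3 summary statistics reproduce the reported results almost exactly. The following is an illustrative recomputation, not the original analysis (the 2.017 critical value is an approximation for 43 degrees of freedom):

```python
import math

# Summary statistics from Table 3 (post-assessment ETS MFAT).
n1, m1, s1 = 24, 149.0, 11.3   # traditional students
n2, m2, s2 = 36, 158.9, 9.5    # nontraditional students

# Standard error of the difference between independent means,
# unequal variances assumed (Welch's procedure).
v1, v2 = s1**2 / n1, s2**2 / n2
se = math.sqrt(v1 + v2)

t = (m1 - m2) / se                                    # t statistic
df = (v1 + v2)**2 / (v1**2/(n1-1) + v2**2/(n2-1))     # Welch-Satterthwaite df

# 95% confidence interval for the difference between means;
# 2.017 approximates the two-tailed .05 critical value at 43 df.
half_width = 2.017 * se
ci = ((m1 - m2) - half_width, (m1 - m2) + half_width)

print(round(t, 2), int(df))  # close to the reported T = -3.53, DF = 43
print(ci)                    # close to the reported (-15.5, -4.2)
```

Small discrepancies in the last decimal place are expected, since the recomputation starts from rounded summary statistics rather than the raw scores Minitab analyzed.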

The confidence interval, set at 95% probability, yielded an interval of approximately -15.5 to -4.2. This means that, if the treatment were repeated, the difference between means could be expected to again favor the same group. This obtained difference is statistically significant.

Summary of Results

Study results showed that there was a significant difference between the post-assessment MFAT scores from the nontraditional and traditional business students at CSU. This study rejected the null hypothesis of there being no significant difference at the .05 level in the scores obtained from the MFAT from traditional and nontraditional undergraduate business students at CSU.

Discussion, Conclusions, Implications, and Recommendations

Conclusions derived from this study provide assessment verification that the CSU nontraditional business students scored higher on a national norm-referenced test than the CSU traditional business students. The null hypothesis of there being no significant difference at the .05 level in the MFAT scores obtained from traditional and nontraditional undergraduate business students at CSU was rejected. Moreover, this study indicated that the nontraditional business students' post-assessment mean score on the MFAT was higher than the national norm (158.9 versus 154.8). Finally, these students' standard deviation was also less than the national norm (9.5 versus 13.8).

A single test at one point in time, which is what this study represents, is not a decisive measure by which to validate or dismiss the traditional and nontraditional business programs at CSU. Factors addressed in this study, as stated in the literature review, include the maturity level of students, instructors' skills, students' life experiences, and students' related classroom experience. These factors need to be taken into account when analyzing this type of research study. Nevertheless, this is an important and valid study documenting that nontraditional learning is as effective as, and in this case more effective than, a traditional delivery system.

Conclusions presented by this study have implications for the nontraditional delivery system. Evidence from this research study found that the nontraditional business students at CSU scored higher than the traditional business students on the ETS MFAT assessment, although the related research literature has reported mixed results regarding nontraditional learning.

Assessment, as addressed in the literature review, is here to stay in the educational community. Because of the federal funds made available for education, as well as accrediting agencies' need for effective and reliable standards of measurement, assessment provides one of several tools necessary for regulation. Assessment is most useful, though, as addressed in the review of literature, when it provides useful feedback to the students, not just outcome measurements for the school.

In addition, various assessment measures need to be demonstrated and implemented by educational institutions. The commonalities found across these various measures could then be utilized by institutions for curriculum and program improvement. Cardinal Stritch University will be combining the various studies that do exist with the additional research requested (i.e., this research study) to aid in obtaining accreditation from the Association of Collegiate Business Schools and Programs (ACBSP), although additional measures may still be needed to satisfy the outcomes criteria required by the accrediting agency. The University is aware that outcome measurements are a necessary component of a learning institution.

The nontraditional learning track is an alternative that students may choose when pursuing a college education. It is the students' choice. It is the educational institution's choice and responsibility to provide various learning avenues to assist students in completing their degrees, and, in turn, to report the learning outcomes derived from these avenues to the proper constituents (students, accrediting agencies, and the community).


References

Angelo, T. A., & Cross, K. P. (1993). Classroom assessment techniques: A handbook for college teachers (2nd ed.). San Francisco, CA: Jossey-Bass.

Banta, T. W. (1997). Moving assessment forward: Enabling conditions and stumbling blocks. In M. Kramer (Series Ed.), & P. J. Gray & T. W. Banta (Vol. Eds.), New directions for higher education: No. 100. The campus-level impact of assessment: Progress, problems, and possibilities (pp. 79-91). San Francisco, CA: Jossey-Bass.

Borden, V. M. H., & Bottrill, K. V. (1994). Performance indicators: History, definitions, and methods. In P. T. Terenzini (Series Ed.), & V. M. H. Borden & T. W. Banta (Vol. Eds.), New directions for institutional research: No. 82. Using performance indicators to guide strategic decision making (pp. 5-21). San Francisco, CA: Jossey-Bass.

Borg, W. R., & Gall, M. D. (1989). Educational research: An introduction (5th ed.). White Plains, NY: Longman.

Brookfield, S. D. (1986). Understanding and facilitating adult learning. San Francisco, CA: Jossey-Bass.

Caine, R. N., & Caine, G. (1997). Education on the edge of possibility. Alexandria, VA: Association for Supervision and Curriculum Development.

Charles, C. M. (1998). Introduction to educational research (3rd ed.). New York, NY: Addison Wesley Longman, Inc.

Educational Testing Service (1997). Major field tests: Comparative data guide and descriptions of reports. Princeton, NJ: Author.

Educational Testing Service (1998). Major field tests: Comparative data guide and descriptions of reports. Princeton, NJ: Author.

Erwin, T. D. (1991). Assessing student learning and development: A guide to the principles, goals, and methods of determining college outcomes. San Francisco, CA: Jossey-Bass.

Ewell, P. T., & Lisensky, R. P. (1988). Assessing institutional effectiveness: Redirecting the self-study process. Washington, DC: Consortium for the Advancement of Private Higher Education.

Garfield, J. B. (1994). Beyond testing and grading: Using assessment to improve student learning. Journal of Statistics Education [Online]. Available: jse/v2n1/garfield.html

Isaac, S., & Michael, W. B. (1995). Handbook in research and evaluation: For education and the behavioral sciences (3rd ed.). San Diego, CA: Educational and Industrial Testing Services.

Johnson, R., McCormick, R. D., Prus, J. S., & Rogers, J. S. (1993). Assessment options for the college major. In T. W. Banta (Ed.), Making a difference (pp. 151-167). San Francisco, CA: Jossey-Bass.

Jonas, P. M. (1997, August). Report of the learning outcomes assessment task force: Research project, 1996-1997. Milwaukee, WI: Cardinal Stritch University, College of Business and Management.

Jonas, P. M., & Weimer, D. (1999, May). Non-traditional vs. traditional academic delivery systems: Comparing ETS scores for undergraduate students in business programs, 1996-1999. Milwaukee, WI: Cardinal Stritch University, College of Business and Management.

Kemp, J. E., Morrison, G. R., & Ross, S. M. (1998). Designing effective instruction (2nd ed.). Upper Saddle River, NJ: Prentice-Hall, Inc.

Lopez, C. L. (1998, March). The commission's assessment initiative: A progress report. Paper presented at the annual meeting of the North Central Association of Colleges and Schools/Commission on Institutions of Higher Education, Chicago, IL.

Lord, T. R. (1997, Spring). A comparison between traditional and constructivist teaching in college biology. Innovative Higher Education, 21, 197-216.

McDonald, W. K. (1995, March). Comparison of performance of students in an accelerated baccalaureate nursing program for college graduates and a traditional nursing program. The Journal of Nursing Education, 34, 123-127.

McKeachie, W. J. (1994). Teaching tips: Strategies, research, and theory for college and university teachers (9th ed.). Lexington, MA: D.C. Heath and Company.

McMillan, J. H., & Schumacher, S. (1993). Research in education: A conceptual introduction (3rd ed.). New York: HarperCollins College Publishers.

Miller, J. E., & Groccia, J. E. (1997, Summer). Are four heads better than one? A comparison of cooperative and traditional teaching formats in an introductory biology course. Innovative Higher Education, 21, 253-273.

Scott, P. A. (1994, March). A comparative study of students' learning experiences in intensive and semester-length courses and of the attributes of high-quality intensive and semester course learning experiences. St. Louis, MO: North American Association of Summer Sessions.

Scott, P. A., & Conrad, C. F. (1991). A critique of intensive courses and an agenda for research. Madison, WI: University of Wisconsin-Madison. (ERIC Document Reproduction Service No. ED 337 087)

Seymour, D. T. (1992). On Q: Causing quality in higher education. New York: American Council On Education and Macmillan Publishing Company.

Warren, J. (1988). Cognitive measures in assessing learning. In P. T. Terenzini & M. W. Peterson (Series Eds.), & T. W. Banta (Vol. Ed.), New directions for institutional research: No. 59. Implementing outcomes assessment: Promise and perils (pp. 29-39). San Francisco, CA: Jossey-Bass.

Peter M. Jonas, Ph.D., Associate Professor, College of Education, Cardinal Stritch University, Milwaukee. Don Weimer, Institutional Research, Milwaukee Area Technical College. Kim Herzer, Admissions Representative, College of Business and Management, Cardinal Stritch University.

Correspondence concerning this article should be addressed to Dr. Peter M. Jonas, Associate Professor, College of Education, Cardinal Stritch University, 6801 Yates Road, Milwaukee, WI 53217.