Measuring and evaluating teaching effectiveness is a complex and difficult task. Nevertheless, the information derived from such measurement and evaluations can be extremely valuable to individual instructors in terms of their further development as teachers.
The Test of Understanding in College Economics (TUCE) has been used extensively as a means of measuring student learning in introductory economics courses. The purpose of this paper is to illustrate how information derived from the TUCE in introductory economics courses can also be used to provide feedback to instructors in these courses to help them improve their teaching effectiveness. The results can help instructors better understand a) what students are learning and how well they are achieving the learning goals for their courses; and b) specific areas of course content and cognitive categories in which students' performance is poor and/or lower than their overall performance.
Identification of content and cognitive categories for which students' improvement is low, or lower than their overall improvement, provides useful information that instructors can use to identify those topics on which they should concentrate additional time and emphasis when teaching the course in the future. In addition, instructors may want to consider changes in teaching strategies in order to better communicate information and engage students as a means of improving their understanding and achievement of course learning goals. Results obtained in this way can be used in conjunction with student evaluations of courses and instructors in an ongoing process of improving teaching effectiveness.
The focus of this study on using outcomes achieved in introductory economics courses as a means of evaluating teaching and providing feedback to instructors is consistent with the following definition of effective teaching: "Effective teaching can be defined, very simply, as activities that promote student learning. It encompasses all of those instructor behaviors that foster learning of the instructor's and/or of the institution's educational objectives." (UCLA Office of Instructional Development, https://www.oid.ucla.edu/publications/evalofinstruction/eval1#1, accessed on October 1, 2010)
Research on evaluating teaching effectiveness is extensive. Although teaching effectiveness and student learning are closely related, historically the focus of this research has been primarily on discovering and describing teacher characteristics that are associated with good teaching. This approach emphasizes the process of teaching, such as course organization, teaching behaviors (lecture, discussion, etc.), student learning activities, and evaluation procedures.
Summarizing the research on characteristics of good teachers, Eble (1988, pp. 21-22) notes:
Most studies stress knowledge and organization of subject matter, skills in instruction, and personal qualities and attributes useful to working with students.
Eisenberg (1996), cited in Seldin (1999, p. 3), analyzed 18 studies concerned with effective teachers and reported the following characteristics of such teachers: knowledge, organization/clarity, stimulation/enthusiasm, use of active learning, effective communication, and instructional openness.
Specifically in terms of economics instructors, Boex (2000) used responses from student evaluation-of-instructor surveys to ascertain the attributes of economics instructors that are associated with teaching effectiveness as perceived by their students. Six broad instructor attributes were identified: presentation ability, organization and clarity, grading and assignments, intellectual or scholarly capacities, instructor-student interaction, and student motivation. Results of this analysis indicated that students perceive the dominant attributes of an effective economics instructor to be organizational skills and clarity. This finding is consistent with those of other studies and indicates the far greater importance of this attribute, as compared with other instructor attributes, in determining an instructor's effectiveness as perceived by students.
More recently, research concerning evaluation of teaching effectiveness has moved away from a focus on teacher characteristics and toward increased use of student ratings, self-reviews, and peer evaluation. Seldin and Associates (1999) provide a useful review of several techniques for evaluating teaching including student ratings, self-evaluation, peer classroom observation, electronic classroom assessment, and portfolios, as well as consideration of the process of implementing teaching evaluation programs in educational institutions.
Student evaluations of teaching (SET) are widely used by economics departments, as reported by White (1995) and Becker and Watts (1999). Bosshardt and Watts (2001) investigated the relationship between instructors' assessments of their teaching and their students' assessments. Using the TUCE, the authors found that although student and instructor perceptions of how well the instructor teaches are positively correlated, there are also important differences. "Instructors who speak English as their native language viewed enthusiasm and the ability to speak English as most important in forming their overall self-evaluation. But students of these instructors formed their overall evaluations quite differently, weighting instructor preparation most heavily. The students viewed the instructors' ability to speak English as next in importance, followed closely by grading rigor and enthusiasm."
Still another approach to evaluating teaching is to focus on the amount of student learning in a course, that is, on assessment. Assessment of student learning has received increased attention from educational institutions in recent years. This greater attention partially reflects efforts by accreditation agencies to require educational institutions to better define specific learning outcomes, demonstrate learning, and use assessment results in a cycle of continuing educational improvement. The assessment process can be an essential element in any systematic, objective evaluation of individual students, individual courses, multiple sections of individual courses, programs, or institutions as a whole, and it provides an additional objective means of helping individual faculty members improve their teaching skills.
Walstad (2006, p. 193) notes:
[A]ttention to teaching methods is important because it shapes the presentation of course content and the nature of classroom contact with students. What is often overlooked, however, is the vital role that assessment can play in helping economics instructors do a better job of giving students a better chance to learn economics.
In assessing student learning, one important question is how student learning is to be measured. The relative benefits, as well as costs, associated with use of multiple-choice and essay questions in assessing understanding of economics have been addressed by Walstad and Becker (1994) and Walstad (2001). Advantages of multiple-choice questions include ease of grading (resulting in quick feedback to students on test performance), objectivity in scoring, and greater capacity to sample the content domain as compared with the few questions that can be asked on an essay test. Essay questions require substantially more time to grade and involve less objectivity in scoring. Their work supports the hypothesis that some essay questions add little information to results obtained from well-written multiple-choice questions.
Buckles and Siegfried (2006) conclude that multiple-choice questions can be used to measure some, but not all, elements of in-depth understanding of economics. Using Bloom's (1964) taxonomy of educational objectives, which consists of six levels of cognition: 1) knowledge, 2) comprehension, 3) application, 4) analysis, 5) synthesis, and 6) evaluation, the authors argue that multiple-choice questions can be used to test student achievement at levels one (knowledge) through four (analysis). However, they question whether multiple-choice questions can be used to test students' ability to synthesize and evaluate (Bloom's cognitive levels five and six). They do, however, support the notion that multiple-choice questions can test for more than simple recognition and understanding (corresponding to Bloom's first two levels of knowledge and comprehension). Further:
One additional use of multiple-choice questions, which permits assessment of even higher levels of understanding, is to ask students to choose the correct answer and then to explain why the correct answer is correct and why each incorrect answer is wrong. Finally, a series of questions that requires students to understand economics progressively more deeply can be used to inform instructors just how successful they have been in helping students learn how to think like an economist. A series of questions first assessing knowledge, then comprehension, next application, and finally analysis may permit instructors to see exactly where students' understanding has stopped and provide guidance as to what to emphasize in review.
Walstad (2006) explores advantages and disadvantages of essay tests and questions to assess higher levels of student understanding of economics and provides guidelines for essay testing and grading. Overall, his conclusion is that:
[E]ssay testing requires more work than is generally expected by economics instructors, but this commitment needs to be made if essay tests are to be used as an effective and reliable measure for depth of understanding in economics.
Simkins and Allen (2000) used pre- and post-tests in Principles of Macroeconomics courses in order to "evaluate teaching performance on a regular basis as a means of continually improving teaching effectiveness and increasing student learning in the classroom". The pretest used in this study consisted of nine multiple-choice questions, four mathematical problems, and a graphing exercise. Their primary objective was to use an instructor-developed pretest "as a diagnostic and developmental tool for instructors to assess and improve teaching effectiveness". Using the pretest results, instructors were able to modify their delivery of course content by providing more reinforcement of concepts for which the pretest indicated students' skills were lacking. The pretest results can also be used to "give students and instructors early feedback on the need for assistance while there is time to take corrective action through tutoring, extra homework assignments, improved note-taking skills, and other remedial help." Further, "post-testing students at the end of the course using the same questions provides valuable information that can measure student learning, suggest areas for teaching improvement, and improve course delivery." The authors also argue that the ability to use course-specific content in instructor-developed pre- and post-tests is a significant advantage compared to standardized tests.
The TUCE is a standardized test that was developed more than 40 years ago and has been used extensively by instructors and researchers in economics. The test is now in its fourth edition (TUCE-4). Use of the TUCE in economic education has been described by Becker (1997). Separate exams exist for microeconomics and macroeconomics. Each exam consists of 30 multiple-choice questions. According to Walstad and Rebeck (2008):
As with past editions, the TUCE-4 has two primary objectives: 1) to offer a reliable and valid assessment instrument for students in principles of economics courses; and 2) to provide norming data for a national sample of students in principles classes so instructors can compare the performance of their students on a pretest and a posttest with this national sample.
These authors conclude:
The TUCE-4 also should be valuable for advancing research in economics education because it provides a standardized test that can be used to assess student achievement in principles of economics across different institutions or classes.
This study extends the literature on using standardized tests to measure and evaluate teaching effectiveness in introductory economics courses by demonstrating how the results obtained in terms of the six content categories and three cognitive categories that are incorporated in the TUCE can be used as feedback to instructors as a diagnostic and developmental tool to assess and improve teaching effectiveness. In effect, assessment of student learning and evaluation of teaching effectiveness are linked as part of a cycle of continuing educational improvement. (1)
SCOPE OF STUDY AND PROCEDURES
The fourth edition of the Test of Understanding of College Economics (TUCE-4) was administered as both a pre- and post-test to students in the Principles of Microeconomics and Principles of Macroeconomics courses at Saint Mary's College of California (SMC) during the 2007/2008 academic year. The goal of this process was to provide a consistent mechanism across multiple sections of the Principles courses for assessing how well overall course objectives are being achieved. Traditional student grades and teaching evaluations may not provide sufficiently detailed and consistent information for assessing student learning and teaching effectiveness in multiple sections of courses or for evaluating courses in terms of achievement of course objectives.
Six broad content categories are incorporated into the TUCE as a means of ensuring "adequate coverage of the basic content of 'typical' college principles courses so that the total raw score can be deemed to measure general understanding of basic economic principles" and "discriminate among individual students on the basis of their ability to understand and apply selected concepts and principles" (Saunders, 1991, p. 2). The comparative effectiveness of courses in achieving the objectives measured by the TUCE can be ascertained by comparing the scores of students with the percentile distributions of the scores of students used to develop norming data for the TUCE. Topics included in each of the six microeconomic and macroeconomic content categories are shown in Appendix A and Appendix B, respectively.
Three cognitive categories also are incorporated into the TUCE. As noted by Saunders (1991, p. 3), "all editions of the TUCE have sought to emphasize the application of basic concepts and principles ... The three broad cognitive categories used to classify questions on the TUCE III are: Recognition and Understanding (RU); Explicit Application (EA); and Implicit Application (IA)." The three cognitive categories on the TUCE-4 are identical to those in the TUCE III. Characteristics of each of these cognitive categories are contained in Appendix C.
We believe that a general goal of economics education, even at the Principles of Economics level, is for students to understand, and more importantly, have the ability to apply economic terms and concepts in actual situations. Within the three cognitive categories, the Explicit Application cognitive grouping is concerned with students' ability to apply the correct economic concepts to solve a problem when the concepts are either given or explicitly mentioned as part of the problem statement and no unstated assumptions are involved. The Implicit Application category is concerned with students' ability to define or solve a problem when the relevant economic concepts are not explicitly mentioned.
During the first week of the fall 2007 and spring 2008 semesters, the TUCE microeconomics test was administered as a pre-test to students in eight sections of the Principles of Microeconomics course. The same test was administered as one portion of the final exam in each course. A total of 178 students in the Principles of Microeconomics courses took both the pre- and post-tests. Only data for these students are used in this analysis, since such "matched" data are consistent with the data selected to norm the TUCE using a national sample of students. Use of matched samples controls for differences in the composition of students taking the pre- and post-tests; using data for "unmatched" groups of students means that differences in the composition of the students taking the two tests may account for some of the differences in scores achieved. Overall, data were obtained for four different instructors teaching eight sections of the microeconomics course during the 2007/2008 academic year. Students were allowed nearly an entire 60-minute class period to complete the pre-test, and the post-test was incorporated into the two-hour final exam in each course.
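The construction of such a matched sample, pairing each student's pre-test score with the same student's post-test score and discarding anyone who took only one of the two tests, can be sketched as follows. The student IDs and raw scores here are hypothetical illustrations, not data from the study.

```python
# Build a matched sample: keep only students who took BOTH tests.
# Student IDs and raw scores (out of 30) below are hypothetical.
pre_scores = {"s01": 11, "s02": 9, "s03": 14, "s04": 8}
post_scores = {"s01": 19, "s03": 22, "s04": 15, "s05": 21}  # s02 missed the post-test

# Intersect the two ID sets, then pair each student's two scores.
matched_ids = sorted(pre_scores.keys() & post_scores.keys())
matched = [(sid, pre_scores[sid], post_scores[sid]) for sid in matched_ids]

for sid, pre, post in matched:
    print(f"{sid}: pre={pre} post={post} gain={post - pre}")
```

Only the matched pairs enter the analysis, which is what controls for differences in the composition of the pre-test and post-test groups.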
Data for students in the Principles of Macroeconomics courses were obtained during the spring 2008 semester. A total of 54 matched samples of students who took both the pre- and post-tests were used in this analysis. Two instructors each taught two sections of the Macroeconomics principles course, thus providing data for four sections of the course (2).
Principles of Microeconomics Courses
Table 1 shows the mean percentage of correct responses by SMC students and the national sample of students for questions in each of the six microeconomic content categories on both the pre- and post-test TUCE-4. For two of the content categories, the percentage of correct responses by SMC students on the pre-test was only slightly greater than 25 percent, which is the result expected from pure guessing on a four-option multiple-choice test. For each of the six microeconomic content categories, scores of SMC students on the post-test were statistically higher than scores on the pre-test at the 0.005 level of significance. Thus, there was a statistically significant improvement in student performance in each of the six content categories between the pre- and post-tests. However, whether the differences in scores between Saint Mary's students and the national sample of students were statistically significant on either the pre- or post-test could not be determined because the distribution of the percentage of correct responses for the national sample of students is not reported.
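The pre/post comparison reported here is a paired comparison: each student's post-test score is tested against that same student's pre-test score. A minimal sketch of the underlying paired t statistic, using hypothetical matched scores and only the Python standard library (the paper does not specify the exact test used, so this is an illustration of the standard approach):

```python
import math
import statistics

# Hypothetical matched pre-/post-test raw scores for the same eight students.
pre = [10, 8, 12, 9, 11, 7, 13, 10]
post = [18, 15, 20, 14, 19, 12, 21, 17]

diffs = [b - a for a, b in zip(pre, post)]  # per-student improvement
n = len(diffs)
mean_d = statistics.mean(diffs)
sd_d = statistics.stdev(diffs)              # sample standard deviation of the differences
t = mean_d / (sd_d / math.sqrt(n))          # paired t statistic, df = n - 1

print(f"mean improvement = {mean_d:.2f} points, t = {t:.2f} (df = {n - 1})")
```

The resulting t value is compared against the critical value for n - 1 degrees of freedom at the chosen significance level (0.005 in the paper); the same calculation applies to each content or cognitive category separately.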
The average percentage of correct responses to questions in each cognitive category on both the pre- and post-test, for SMC students and students in the national sample, is shown in Table 2. On the pre-test, the mean score of SMC students in the RU category was below 25 percent and only slightly above 25 percent for the IA category. For each of the three cognitive categories, scores of SMC students on the post-test were found to be statistically higher than scores on the pre-test at the 0.005 level of significance. In terms of percentage-point improvement between the pre- and post-tests, both SMC students and the national sample did best on the RU category and poorest on the IA category.
Principles of Macroeconomics Courses
Table 3 is similar to Table 1 but summarizes data for the 54 SMC students who completed both the pre- and post-test macroeconomics TUCE-4, along with results for the national sample of students.
For each of the six content categories, scores of SMC students on the post-test were found to be statistically higher than scores on the pre-test at the 0.005 level of significance, indicating that, as for the microeconomics principles course, the macroeconomics course contributed to a statistically significant improvement in student performance between the pre- and post-tests. However, as noted for the microeconomic results, no conclusion can be drawn as to whether results obtained for SMC students differ from results of the national sample of students because the distribution of scores for the national sample is not reported.
Table 4 reports the mean percentage of correct responses to questions in each cognitive category for SMC students and the national sample of students on the pre- and post-test. For each of the three cognitive categories, scores of SMC students on the post-test were found to be statistically higher than scores on the pre-test at the 0.005 level of significance.
In terms of percentage point improvement between the pre- and post-tests, SMC students did best on the RU category and poorest on the EA category. Students in the national sample also showed the greatest improvement in the RU category but did poorest on the IA category.
Interpretation of Results
For both the microeconomics and macroeconomics courses, scores of SMC students on the post-test in each of the six content categories were significantly higher than on the pre-test, implying a substantial increase in their knowledge of economic concepts.
In microeconomics, SMC student scores improved least in the Basic Economic Problem category, suggesting that instructors need to focus more on this category, perhaps by devoting more time and attention at the outset of the principles courses to fundamental economic concepts. This result is reinforced by the fact that the improvement of SMC students in this category was less than one-half of the percentage-point improvement by the national sample of students. In all other categories, with the exception of Theories of the Firm, SMC students showed greater improvement than did the national sample. However, it cannot be inferred that the improvement is statistically greater.
The results also suggest that instructors need to devote greater time and attention to theories of the firm, factor markets, and international economics because improvement of SMC students in these three categories was substantially less than in the two categories for which students showed the greatest improvement. The lower performance of SMC students in the Theories of the Firm category is likely of greatest concern. Anecdotal evidence suggests that factor markets and international economics are often given short shrift in many SMC Microeconomics Principles courses because they tend to be covered near the end of each term, when less time is available to devote to them. It seems likely that similar practices exist in many of the principles courses taken by the national sample of students.
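The diagnostic use described above, ranking content categories by percentage-point improvement so that the weakest categories stand out for added attention, can be sketched as follows. The category means below are hypothetical, not the values from Tables 1-4.

```python
# Hypothetical mean percent-correct by content category (pre vs. post).
pre = {"Basic Economic Problem": 38.0, "Markets and Prices": 27.0,
       "Theories of the Firm": 26.0, "Factor Markets": 29.0,
       "Role of Government": 31.0, "International": 28.0}
post = {"Basic Economic Problem": 46.0, "Markets and Prices": 55.0,
        "Theories of the Firm": 41.0, "Factor Markets": 44.0,
        "Role of Government": 58.0, "International": 43.0}

# Percentage-point improvement per category, ranked weakest to strongest.
gains = {cat: post[cat] - pre[cat] for cat in pre}
ranked = sorted(gains.items(), key=lambda kv: kv[1])

for cat, gain in ranked:
    print(f"{cat:25s} {gain:+.1f} pts")

weakest = ranked[0][0]  # first candidate for additional class time or a new strategy
```

The same ranking can be computed against the national-sample gains to flag categories where local improvement lags the benchmark.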
In macroeconomics, SMC students improved their scores between the pre- and post-test substantially more than did the national sample in each of the six content categories, implying a relatively high degree of learning by SMC students, because students in each group were about equally well prepared based on their performance on the pre-test. Differences in the extent of improvement across the six content categories offer instructors information as to subjects on which increased attention might be focused in the future.
With regard to all three of the cognitive categories, improvement of SMC students in both microeconomics and macroeconomics was greater than that of the national sample. Further, improvement in each of the three categories in macroeconomics was considerably higher than in microeconomics. This same result was observed for the national sample, although the disparity was not as great as for SMC students. The difference in results may merely reflect greater student awareness of topics in macroeconomics as compared with microeconomics, but it is a matter for departmental consideration, especially because performance on the pre-test in both microeconomics and macroeconomics was similar for the two groups.
Improvement of SMC students in microeconomics was greatest in the RU category and least in the IA category, implying that SMC students are learning the most fundamental concepts best but that, as the level of abstract thinking increases, the improvement is much less. This is a cause for concern and a matter to which instructors need to give more attention in preparing their courses. The greatest percentage-point improvement for macroeconomics courses was also in the RU category, again indicating that students are substantially improving their knowledge of basic macroeconomic terms and concepts. Improvement between the pre- and post-test was smallest for the EA category, indicating the need to strengthen instruction in problem solving using specific concepts. Still, in comparison with the national sample, the results indicate that SMC teaching is quite effective in all three cognitive categories, while providing information that instructors can use to further improve their teaching effectiveness.
Overall, the results in both microeconomics and macroeconomics are a cause for some concern about the effectiveness of teaching because they indicate that students gained most in terms of recognition and understanding of economic concepts but did not improve as much in terms of comprehension and application of concepts. The implication for economics instructors is that less emphasis should be given to memorization of terms and concepts and more emphasis to applying terms and concepts in problem solving. The results may also reflect the broad range of topics covered in introductory economics courses: students gain familiarity with many topics but are unable to use them effectively in either explicit or implicit applications. One solution may be to pursue the idea that "less is more" in introductory economics courses by covering a smaller number of topics in greater depth rather than covering a large number of topics that students are unable to apply and are likely soon to forget. The matter of depth versus breadth of coverage in economics principles courses is, of course, a matter of considerable debate within the economics profession at present.
CONCLUSIONS AND IMPLICATIONS
The TUCE provides instructors in introductory economics courses with an objective means of assessing student learning as well as a means of evaluating their teaching effectiveness. The effectiveness of teaching can be evaluated in two different ways: 1) improvement of student scores between the pre- and post-test; and 2) improvement of student scores relative to improvement by a national sample of students. In addition, by using the content and cognitive categories incorporated in the TUCE, instructors have a diagnostic and developmental tool to identify both specific course content and cognitive areas on which to focus, whether by devoting additional class time and attention or by altering teaching strategies, as a means of improving teaching effectiveness. The objective information gleaned from this process can be used in conjunction with student evaluations of teaching.
Using the results obtained from use of the TUCE in introductory economics courses, individual instructors can establish goals for improvement in students' mean scores between the pre-test and post-test in each of the six content areas and the three cognitive categories. Progress in meeting such goals can then be monitored as part of an ongoing assessment process that is increasingly being requested by accrediting agencies. Further, the results can be used by instructors as a guide to help them improve their teaching effectiveness.
Further refinement of the procedures outlined in this paper can be achieved by selecting a subset of questions on the TUCE to obtain a better match with the actual content included in a specific course. For example, if international economics is typically covered in the Macroeconomics principles course at an institution, questions on international economics can be eliminated from the microeconomics pre- and post-tests to provide results better tailored to that course.
Lastly, use of the TUCE pre- and post-test procedure requires a considerable amount of class time and resources for analysis so, while the procedure can provide useful results, it is not advocated for use in each principles course each term. Rather, the procedure might be thought of as a means of periodic assessment of courses and programs as well as comparison of results over time and across instructors. The procedure could also be used for specific courses in which specific problems have been identified and follow-up action is required.
A significant advantage of the TUCE is that it provides an objective evaluation of student performance and measures outcomes that can be used as a diagnostic and developmental device by instructors. Another advantage of the TUCE relative to other instruments for assessing student learning and teaching effectiveness is that results can be used to compare improvement of students in a given class with improvement of a national sample of students, providing a benchmark for comparison of student learning and teaching effectiveness. Improvement in student learning is likely to be a direct result of improved methods of evaluating teaching and of using the results in an ongoing process to improve teaching effectiveness.
Appendix A: Microeconomic Content Categories on the TUCE-4
A. The Basic Economic Problem (scarcity, opportunity cost, choice)
B. Markets and Price Determination (determinants of supply and demand, utility, elasticity, price ceilings and floors)
C. Theories of the Firm (revenues, costs, marginal analysis, market structures)
D. Factor Markets (wages, rents, interest, profits, income distribution)
E. The (Microeconomic) Role of Government in a Market Economy (public goods, maintaining competition, externalities, taxation, income redistribution, public choice)
F. International Economics (comparative advantage, trade barriers, exchange rates)
Source: Walstad, Watts, and Rebeck, 2007
Appendix B: Macroeconomic Content Categories on the TUCE-4
A. Measuring Aggregate Economic Performance (GDP and its components, real vs. nominal values, unemployment, inflation)
B. Aggregate Supply and Aggregate Demand (potential GDP, economic growth and productivity, determinants and components of AS and AD, income and expenditure approaches to GDP, the multiplier effect)
C. Money and Financial Markets (money, money creation, financial institutions)
D. Monetary and Fiscal Policies (tools of monetary policy, automatic and discretionary fiscal policies)
E. Policy Debates (policy lags and limitations, rules vs. discretion, long run vs. short run, expectations, sources of macroeconomic instability)
F. International Economics (balance of payments, exchange rate systems, open-economy macro)
Source: Walstad, Watts, and Rebeck, 2007
Appendix C: Definitions of Cognitive Categories Used to Classify Questions on the TUCE-4
(RU) Recognizes and Understands Basic Terms, Concepts, and Principles
1.1 Selects the best definition of a given economic term, concept, or principle
1.2 Selects the economic term, concept, or principle that best fits a given definition
1.3 Identifies or associates terms that have closely related meanings
1.4 Recalls or recognizes specific economic rules, e.g., an individual firm's profit is maximized at the level of output at which marginal cost equals marginal revenue
(EA) Explicit Application of Basic Terms, Concepts, and Principles
2.1 Applies economic concepts needed to define or solve a particular problem when the concepts are explicitly mentioned
2.2 Distinguishes between correct and incorrect application of economic concepts that are specifically given
2.3 Distinguishes between probable and improbable outcomes of specific economic actions or proposals involving no unstated assumptions
2.4 Judges the adequacy with which conclusions are supported by data or analysis involving no unstated assumptions
(IA) Implicit Application of Basic Terms, Concepts, and Principles
3.1 Applies economic concepts needed to define or solve a particular problem when the concepts are not explicitly mentioned
3.2 Distinguishes between correct and incorrect application of economic concepts that are not specifically given
3.3 Distinguishes between probable and improbable outcomes of specific economic actions or proposals involving unstated assumptions
3.4 Judges the adequacy with which conclusions are supported by data or analysis involving unstated assumptions
Source: Walstad, Watts, and Rebeck, 2007
Becker, W. E. 1997. Teaching economics to undergraduates. Journal of Economic Literature 35 (3): 1347-1373.
Becker, W. E., and M. Watts. 1999. How departments of economics evaluate teaching. American Economic Review 89 (2): 344-349.
Bloom, B. S., ed. 1964. Taxonomy of educational objectives. New York: David McKay.
Boex, L. F. J. 2000. Attributes of effective economics instructors: An analysis of student evaluations. Journal of Economic Education 31 (3): 211-227.
Bosshardt, W., and M. Watts. 2001. Comparing student and instructor evaluations of teaching. Journal of Economic Education 32 (1): 3-17.
Buckles, S., and J. J. Siegfried. 2006. Using multiple-choice questions to evaluate in-depth learning of economics. Journal of Economic Education 37 (1): 48-57.
Eble, K. E. 1988. The craft of teaching. 2nd ed. San Francisco: Jossey-Bass.
Eisenberg, R. 1996. Personal correspondence.
Saunders, P. 1991. Test of understanding in college economics: Examiners manual for third edition. New York: Joint Council on Economic Education.
Seldin, P., and Associates. 1999. Changing practices in evaluating teaching. Bolton, MA: Anker Publishing Company.
Simkins, S., and S. Allen. 2000. Pretesting students to improve teaching and learning. International Advances in Economic Research 6 (1): 100-112.
Walstad, W. B. 2001. Improving assessment in university economics. Journal of Economic Education 32 (3): 281-295.
Walstad, W. B. 2006. Assessment of student learning in economics. In Teaching economics: More alternatives to chalk and talk, ed. W. E. Becker et al., 193-212. Northampton, MA: Edward Elgar Publishing.
Walstad, W. B., and W. E. Becker. 1994. Achievement differences on multiple-choice and essay tests in economics. American Economic Review: Papers and Proceedings 84 (2): 193-196.
Walstad, W. B., and K. Rebeck. 2008. The test of understanding of college economics. American Economic Review: Papers and Proceedings 98 (2): 547-551.
Walstad, W. B., M. W. Watts, and K. Rebeck. 2007. Test of understanding in college economics: Examiners manual for fourth edition. New York: National Council on Economic Education.
White, L. J. 1995. Efforts by economics departments to assess teaching effectiveness: Results from an informal survey. Journal of Economic Education 26 (1): 81-85.
(1) We recognize that performance on the TUCE reflects student aptitude in addition to teaching effectiveness. However, we believe the test provides one means of evaluating teaching effectiveness which, in combination with others, can assist instructors in promoting student learning.
(2) We report the performance of SMC students across all sections in which the TUCE was administered. While our intent was to assess the effectiveness of the Economics Department as a whole, the test can also serve as a way to assist individual instructors in gauging their teaching effectiveness.
Richard H. Courtney, Saint Mary's College of California
William Lee, Saint Mary's College of California
Kara Boatman, Saint Mary's College of California
Table 1: Percentage of Correct Responses by Saint Mary's College of California Students and a National Sample of Students to Questions on the Microeconomics TUCE-4, Grouped by Content Category

                                                    Pre-test          Post-test           Change
Content Category                                  SMC    Sample     SMC*   Sample**    SMC    Sample
                                                  (percentage correct)                (percentage points)
Basic economic problem                            27.2    29.0      32.3    40.5        5.1    11.5
Markets and price determination                   29.6    33.8      51.0    42.7       21.4     8.9
Theories of the firm                              28.7    29.4      45.2    45.2       13.1    15.8
Factor markets                                    36.0    35.0      49.1    42.0       13.1     7.0
(Microeconomic) role of government
  in a market economy                             26.3    30.6      46.8    41.1       20.5    10.5
International economics                           32.7    32.0      46.5    40.3       13.8     8.3

* For each of the content categories, scores of SMC students on the post-test were statistically higher than scores on the pre-test at the 0.005 level of significance.
** No conclusion can be drawn as to whether post-test scores of the national sample of students were statistically higher than pre-test scores because the distribution of correct responses is not reported.

Table 2: Percentage of Correct Responses by Saint Mary's College of California Students and a National Sample of Students to Questions on the Microeconomics TUCE-4, Grouped by Cognitive Category

                                                    Pre-test          Post-test           Change
Cognitive Category                                SMC    Sample     SMC*   Sample**    SMC    Sample
                                                  (percentage correct)                (percentage points)
Recognition and understanding                     24.2    28.0      45.1    44.0       20.9    16.0
Explicit application                              33.0    34.8      50.1    45.0       17.1    10.2
Implicit application                              26.0    27.0      40.0    36.9       14.0     9.9

* For each of the cognitive categories, scores of SMC students on the post-test were statistically higher than scores on the pre-test at the 0.005 level of significance.
** No conclusion can be drawn as to whether post-test scores of the national sample of students were statistically higher than pre-test scores because the distribution of correct responses is not reported.

Table 3: Percentage of Correct Responses by Saint Mary's College of California Students and a National Sample of Students to Questions on the Macroeconomics TUCE-4, Grouped by Content Category

                                                    Pre-test          Post-test           Change
Content Category                                  SMC    Sample     SMC*   Sample**    SMC    Sample
                                                  (percentage correct)                (percentage points)
Measuring aggregate economic performance          34.7    34.8      62.5    53.2       27.8    18.4
Aggregate supply and demand                       38.3    37.9      61.1    51.3       22.8    13.4
Money and financial markets                       24.1    24.2      50.5    46.2       26.4    22.0
Monetary and fiscal policy                        26.2    32.7      57.1    46.9       30.9    14.2
Policy debates                                    30.2    26.3      51.2    35.0       21.0     8.7
International economics                           25.9    31.3      64.2    43.0       38.3    11.7

* For each of the content categories, scores of SMC students on the post-test were statistically higher than scores on the pre-test at the 0.005 level of significance.
** No conclusion can be drawn as to whether post-test scores of the national sample of students were statistically higher than pre-test scores because the distribution of correct responses is not reported.
Table 4: Percentage of Correct Responses by Saint Mary's College of California Students and a National Sample of Students to Questions on the Macroeconomics TUCE-4, Grouped by Cognitive Category

                                                    Pre-test          Post-test           Change
Cognitive Category                                SMC    Sample     SMC*   Sample**    SMC    Sample
                                                  (percentage correct)                (percentage points)
Recognition and understanding                     25.0    27.3      58.3    46.3       33.3    19.0
Explicit application                              33.0    34.1      56.4    48.8       23.4    14.7
Implicit application                              31.7    33.8      62.0    45.4       30.3    11.6

* For each of the cognitive categories, scores of SMC students on the post-test were statistically higher than scores on the pre-test at the 0.005 level of significance.
** No conclusion can be drawn as to whether post-test scores of the national sample of students were statistically higher than pre-test scores because the distribution of correct responses is not reported.