The Influence of University Entry Scores on Student Performance in Engineering Mechanics
Thomas, G., Henderson, A., Goldfinch, T., Australasian Journal of Engineering Education
"Students just don't have the maths anymore; that's why we have such high failure rates." "We are accepting too many students with low university entrance scores." "Not enough students have done physics in high school."
These assertions will be instantly recognisable to anyone who has attended an undergraduate engineering curriculum meeting, but how accurate are they? This is what the team of an Australian Learning and Teaching Council funded project to investigate and address high failure rates in engineering mechanics courses set out to discover, along with other commonly cited causes of poor performance. Academics at the University of Wollongong, the University of Tasmania, the University of Technology Sydney and the Australian Maritime College investigated students' high school academic history and subsequent performance in first-year mechanics courses to determine the extent to which achievement at the end of year 12 is an indicator of performance in first-year engineering mechanics, and whether a lack of high-level maths or physics at year 12 is a contributing factor to failing mechanics courses.
Previous work by Dwight & Carew (2006) and Tumen et al (2008) has shown limited correlations between academic history and university performance. A federal government study by Urban et al (1999) also showed that while tertiary entry scores are good predictors of performance (in terms of degree completion) between high and low scores, no significant differences exist between the top four entry score deciles, where the majority of engineering students lie. Interestingly, Tumen et al (2008) also singled out engineering students as more likely than others to leave after first year, indicating that engineering could indeed be a special case with its own set of difficulties. The current study examines the correlation between student performance in entrance exams and first-year mechanics courses, as well as the use of the university entrance score (UES) as a predictor of failure.
The engineering mechanics courses at the centre of this study covered the fundamentals of statics and dynamics at three of the participating institutions (A, C and D), and statics only, though to a more advanced level, at Institution B. The content of these courses, their organisation, teaching approaches and assessment are typical of many Australian first-year mechanics courses, with weekly lectures and tutorials, practical laboratory work, in-session quizzes, and a final examination worth approximately half the total assessment for the course. A more complete outline of typical content and assessment methods can be found in Goldfinch et al (2008). The student group represented in the statistics presented throughout the paper comprises high school leavers, who make up the majority of undergraduate engineering enrolments at the four institutions and, importantly, the majority of students who fail these mechanics courses.
2 CORRELATION OF ENTRANCE SCORES AND MECHANICS RESULTS
UESs, which are used as key eligibility criteria for university applicants, were collated from the four participating universities--the University of Wollongong, the University of Tasmania, the University of Technology Sydney and the Australian Maritime College--for students who had completed a first-year mechanics course. (All four universities provide 4-year engineering degrees accredited by Engineers Australia; their degrees are therefore covered by the Washington Accord.) Each of the institutions has a nominal UES cut-off for normal enrolments. In most instances, students with a UES below the high 70s will have entered the engineering degree program through a different pathway, such as an individual interview process or articulation from vocational training in an engineering-related field. Entrance scores were available for nearly all students who had entered following completion of year 12 exams; however, scores were not available for students entering through other pathways, such as mature-aged students and international students.
These entrance scores were then plotted for each student against their result in a first-year mechanics course; examples of such plots are shown in figure 1 for Institutions A and B. There is a very large amount of scatter in both sets of data, and although it could be argued that there is generally a positive association between the UES and the mechanics course score for Institution A, this is predominantly due to those students entering with a UES above 95, who generally performed well in the mechanics course. A linear regression line has been fitted to both sets of data and the R² value, or coefficient of determination, determined as 0.2809 for Institution A and 0.0382 for Institution B; this indicates a very weak relationship between the UES and performance in the mechanics course.
The correlation between UES and mechanics grade for Institutions C and D is shown in figure 2. Again the relationships appear to be very weak; for example, at Institution C students gaining high distinction grades entered with UESs between 56 and 99. Further analysis of the data was conducted to determine statistical measures including the Pearson product-moment correlation coefficient, Student's t-test and Levene's test; these statistics all exhibited similarly weak correlations between the UES and performance in the mechanics course.
[FIGURE 1 OMITTED]
[FIGURE 2 OMITTED]
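As an aside, the coefficient of determination reported above follows from an ordinary least-squares fit. A minimal Python sketch of the calculation is given below; the (UES, score) pairs are invented for illustration and are not data from the study.

```python
# Least-squares fit of mechanics score against UES, and the coefficient
# of determination (R^2). Data points are invented for illustration only.
from statistics import mean

ues   = [55, 62, 70, 75, 81, 88, 93, 97]
score = [40, 62, 38, 55, 70, 48, 72, 85]

mx, my = mean(ues), mean(score)
sxx = sum((x - mx) ** 2 for x in ues)
sxy = sum((x - mx) * (y - my) for x, y in zip(ues, score))
slope = sxy / sxx                 # regression line gradient
intercept = my - slope * mx       # regression line intercept

# R^2 = 1 - (residual sum of squares) / (total sum of squares)
ss_res = sum((y - (slope * x + intercept)) ** 2 for x, y in zip(ues, score))
ss_tot = sum((y - my) ** 2 for y in score)
r_squared = 1 - ss_res / ss_tot
print(round(r_squared, 4))
```

An R² near zero, as found for Institution B, means the fitted line explains almost none of the variation in mechanics scores.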
Therefore, based on an analysis of mechanics course scores and grade distributions, it appears that a student's performance in mechanics cannot be reliably predicted from their overall performance in high school.
It is evident then that academic history can only tell us part of the story of why students fail mechanics, but which part? This prompted the authors to look at the statistics in a different way. Borrowing from the world of medical statistics, it was decided to investigate the variation in "risk of failure" of different student groups.
3 RISK FACTOR
The concept of risk factor is used extensively in the medical world when presenting aspects that increase a person's chances of developing a disease. The technique was first proposed by Kannel et al (1961) in a study that isolated the major risk factors associated with heart disease: high blood pressure, high cholesterol levels and certain irregularities in the electrical patterns in the heart.
[FIGURE 3 OMITTED]
A risk factor is calculated as the proportion of those exposed to the potential risk factor who experience the event:

Risk Factor = number experiencing event / number exposed to risk factor

This may then be compared with the corresponding proportion for those not exposed.
It may be adapted to the educational field by assessing the risk of failing a course: the number of students entering with a UES in a specific range who failed the course is related to the total number of students entering with a UES in that range:

Risk Factor = number of students with UES in specific range who failed course / number of students with UES in specific range
This allows a comparison of the risk of those who entered the course with a high UES to the risk of those who entered with a low one. This approach may not only shed light on whether students entering mechanics courses with low UESs are more prone to failing, but also add some statistics to the debate over which students should be accepted into engineering degree programs.
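As a concrete sketch, the band-wise risk factor defined above can be computed as follows. The cohort records here are invented for illustration and are not data from the study; the pass mark of 50 follows the study's definition of failure.

```python
# Band-wise risk factor of failing a course, as defined in the text.
# The cohort below is invented for demonstration purposes only.

def risk_factor(students, band_low, band_high):
    """Risk of failing for students whose UES lies in [band_low, band_high)."""
    in_band = [s for s in students if band_low <= s["ues"] < band_high]
    if not in_band:
        return None  # no students in this UES band
    failed = [s for s in in_band if s["mechanics_score"] < 50]  # < 50 = fail
    return len(failed) / len(in_band)

# Hypothetical cohort of (UES, mechanics score) records:
cohort = [
    {"ues": 48, "mechanics_score": 35},
    {"ues": 49, "mechanics_score": 55},
    {"ues": 72, "mechanics_score": 61},
    {"ues": 75, "mechanics_score": 44},
    {"ues": 78, "mechanics_score": 70},
    {"ues": 96, "mechanics_score": 88},
]

print(risk_factor(cohort, 0, 50))   # 1 failure among 2 students -> 0.5
print(risk_factor(cohort, 70, 80))  # 1 failure among 3 students -> ~0.33
```

Computed over each UES band in turn, these values are exactly what the figure 3 plots present.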
The risk factor for the student cohorts from each participating university was determined for a range of UESs; the results are presented in figure 3. These plots show the percentage of students who failed the mechanics course for various ranges of UES, that is, the risk factor of failing for a given UES (note that a mechanics score of less than 50 represented failure). For Institution C, students entering with a UES of less than 50 can be seen to have a risk factor of failing the mechanics course of 0.5; this risk factor reduces as the UES increases, falling to only 0.04 for those students with a UES between 90 and 100. Similar trends can be clearly seen for Institutions A and D, with an increase in risk factor for a reduction in UES. In contrast, the data for Institution B exhibits a different tendency, with a drop in risk factor for a UES of 50 to 59 when compared with a UES of 60 to 69.
The difference in risk factor trend for Institution B may be partially explained by the relative numbers of students in each UES category. Figure 4 shows the number of mechanics students at each institution in each UES band as a percentage of the total number of students. When compared with Institution C (the figures for the other institutions are broadly similar), there are relatively few Institution B students in the 50-59 and 60-69 UES bands, which may be skewing the data to some extent.
The data in figure 3 also allows a comparison of student failure rates between the different institutions. There is significant variation, with students at Institution D having the highest risk factor of failing and students at Institution B the lowest. Whether this difference is due to differences in teaching methods, student capability or course difficulty is not evident.
The use of the risk factor may be extended to determine how much more likely an occurrence is than average. In figure 5, the number of times more likely a student is to fail mechanics is plotted for each UES band; a student undertaking mechanics with a UES of less than 50 can be seen to be 2.5 times more likely to fail than a student with the average risk factor for this institution (0.19).
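This "times more likely" figure is simply each band's risk factor divided by the cohort-average risk factor. The sketch below uses the paper's quoted values of 0.19 for the average risk and 2.5 for the below-50 band (implying a band risk of about 0.475); the remaining band values are invented for illustration.

```python
# Relative risk: how many times more likely a student in a given UES band
# is to fail than a student facing the cohort-average risk.
# Only the "<50" band risk (0.475) and the average (0.19) come from the
# paper's worked example; the other band values are invented placeholders.

band_risk = {"<50": 0.475, "50-59": 0.30, "60-69": 0.25,
             "70-79": 0.20, "80-89": 0.10, "90-100": 0.04}
average_risk = 0.19  # cohort-average risk factor for the institution

relative_risk = {band: r / average_risk for band, r in band_risk.items()}
print(relative_risk["<50"])  # 0.475 / 0.19 -> 2.5 times more likely
```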
The influence of taking other high school courses on student performance in mechanics can also be investigated using a risk factor approach. This is illustrated in figure 6, where the risk factors for participation and non-participation in an extension maths program at Institution A are presented. There is a stark difference between the two groups: students who participated in the extension program had a risk factor of 0.24, compared with 0.45 for those who did not. While the difference between the two risk factors may meet initial expectations, the risk factor for students taking higher level mathematics in school is surprisingly high. In agreement with the UES data, this suggests that many more factors are involved in students' success in mechanics than discrete measures of prior academic performance.
[FIGURE 4 OMITTED]
[FIGURE 5 OMITTED]
[FIGURE 6 OMITTED]
The poor correlation between UES and engineering mechanics performance is surprising, particularly since UES, combined with an appropriate choice of completed subjects, is the key criterion for admission into university degree programs. Indeed, McKenzie & Schweitzer (2001) concluded that previous academic performance gives the best indication of performance in first-year university, more so than other factors such as integration into university, self-efficacy and employment responsibilities, which are also important, but to a lesser extent.
One reason why UES is not a particularly accurate predictor of performance in first-year mechanics is that it aggregates a considerable diversity of academic backgrounds. The score is based on a number of best-scoring tertiary entry level subjects, and it does not necessarily indicate performance in the subjects that prepare students for engineering mechanics, such as mathematics and physics. In addition to meeting a minimum entrance score requirement, students seeking admission into an engineering degree program must often have completed base-level mathematics and physics units. Completing these units is sufficient to "open the door" to studying engineering; however, some students also complete higher level units in mathematics, physics and chemistry. The results presented in figure 6 suggest that students who have studied higher level maths form a sub-group with a significantly reduced risk of failing. This suggests that school subject choice may be as important a performance predictor as UES. However, this result must be interpreted with care, as students in this sample group may also have achieved higher UESs.
Despite the lack of correlation between UES and performance in engineering mechanics, this study has shown that UES can be used to identify groups at higher risk of failure. The trend of elevated risk for students with lower UESs is consistent across all four institutions studied. This information may be used in several ways. The risk factor approach may be used to emphasise to prospective students the importance of subject choice in addition to maximising their UES. The risk of failure may be expressed positively as a chance of success, and used to inform school students that they have a better chance of doing well in engineering mechanics if they complete higher level mathematics and physics. This type of positive reinforcement may have beneficial self-efficacy effects, which are known to play an important role in academic performance (McKenzie & Schweitzer, 2001). If students believe they will succeed, they are more likely to succeed.
Students deemed to be at higher risk of failing may benefit from direct intervention in the form of additional support programs. This is common practice in many university courses where sub-groups with different academic backgrounds are identified: language courses are offered to international students; additional tutorials are provided for students who entered through a bridging course instead of completing the usual prerequisite units; and basic computer skills courses are run. Could an additional tutorial program be introduced with the aim of improving the performance of those at high risk of failing engineering mechanics? The risk factor approach may also provide a framework for assessing the effect of changes made to a unit, such as changes to delivery or curriculum; for example, assessing whether an intervention measure is actually helping those at higher risk of failure.
The results of this study are based entirely on students with available UES data, which in this case consists mainly of recent school leavers. Students who enter university through other pathways, such as international students, mature-aged students or students switching from other programs, may not be well represented in this data set.
In addition, other factors that may influence performance in mechanics, such as age, gender, ethnicity and previous schooling, have not been investigated. A similar risk factor approach to that taken here could be used to investigate such possible determinants. It is acknowledged that there is no single indicator of performance in a subject such as mechanics; there are probably a large number of interrelated factors of varying importance for each individual student. One of the main aims of this paper has been to present the use of the risk factor in an educational context and to demonstrate its use for a particular indicator, i.e. UES. Particularly where standard statistical approaches, as presented here, do not indicate that a link is present, the risk factor method could be used to determine whether other possible determinants are in fact risk factors.
This study has shown that university entry scores from students at four tertiary institutions fail to provide a good predictor of performance in first-year engineering mechanics. Presenting the results in terms of a risk factor of failing engineering mechanics shows a clear trend that students with low UES have an elevated risk of failure. The risk factor approach is also identified as a useful tool in developing potential intervention programs and for evaluating the effects of curriculum change.
The authors wish to acknowledge the input of their project colleagues Professor Tim McCarthy, Dr Anna Carew and Dr Anne Gardner. Support for this work has been provided by the Australian Learning and Teaching Council Ltd, an initiative of the Australian Government Department of Education, Employment and Workplace Relations. The views expressed in this publication do not necessarily reflect the views of the Australian Learning and Teaching Council.
Dwight, R. & Carew, A. 2006, "Investigating the causes of poor student performance in basic mechanics", Proceedings of the 17th Annual Australasian Association of Engineering Education Conference, Auckland, New Zealand.
Goldfinch, T., Carew, A., Gardner, A., Henderson, A., McCarthy, T. & Thomas, G. 2008, "Cross-institutional Comparison of Mechanics Examinations: A Guide for the Curious", Proceedings of the 19th Annual Australasian Association for Engineering Education Conference, Yeppoon, Australia.
Kannel, W. B., Dawber, T. R., Kagan, A., Revotskie, N. & Stokes, J. 1961, "Factors of risk in the development of coronary heart disease - six-year follow-up experience: the Framingham study", The Annals of Internal Medicine, Vol. 55, pp. 33-50.
McKenzie, K. & Schweitzer, R. 2001, "Who succeeds at university? Factors predicting academic performance in first year Australian university students", Higher Education Research & Development, Vol. 20, No. 1, pp. 21-33.
Tumen, S., Shulruf, B. & Hattie, J. 2008, "Student Pathways at the University: Patterns and Predictors of Completion", Studies in Higher Education, Vol. 33, No. 3, pp. 233-252.
Urban, M., Jones, E., Smith, G., Evans, C., Maclachlan, M. & Karmel, T. 1999, Completions, undergraduate academic outcomes for 1992 commencing students, DETYA Higher Education Division, Occasional Paper Series, August.
G. Thomas†
Australian Maritime College, Launceston, Tasmania
University of Tasmania, Hobart, Tasmania
University of Wollongong, Wollongong, New South Wales
* Reviewed and revised version of paper originally presented at the 20th Annual Conference of the Australasian Association for Engineering Education (AaeE 2009), University of Adelaide, South Australia, 6-9 December 2009.
† Corresponding author A/Prof Giles Thomas can be contacted at firstname.lastname@example.org.
Giles Thomas is an Associate Professor of Naval Architecture at the Australian Maritime College. He has taught a range of units on the BEng degree program concerning the static and dynamic behaviour of ships and boats. In 2006 he received the AMC Council Award for High Achievement in Teaching for his work on ship design units. In 2007 he was awarded citations for Outstanding Contribution to Student Learning from the Carrick Institute and the Australasian Association of Engineering Education. He worked on the 2008-2010 Australian Learning and Teaching Council (ALTC) project "A Pro-Active Approach to Addressing Student Learning Diversity in Engineering Mechanics" and is currently working on the 2010-2012 ALTC project "Exploring Intercultural Competency in Engineering".
Dr Alan Henderson is a lecturer in mechanical engineering at the University of Tasmania, with strong interests in university learning and teaching. Alan has previously worked as a team member of the 2008-2010 Australian Learning and Teaching Council (ALTC) project "A Pro-Active Approach to Addressing Student Learning Diversity in Engineering Mechanics" and is currently working as a team member of the 2010-2012 ALTC project "Exploring Intercultural Competency in Engineering". He is also an active member of research groups examining issues such as assessment of group work, peer assessment, and assessment of trans-disciplinary research projects.
Tom Goldfinch is a lecturer in engineering education at the University of Wollongong. He has been working on engineering education research and development projects since 2006, and is currently project leader of the Australian Learning and Teaching Council (ALTC) project "Exploring Intercultural Competency in Engineering". His key interests in the field are engineering mechanics, graduate attributes, curriculum planning/mapping, and the social and cultural aspects of engineering practice. Past projects in these areas include: the 2008-2010 ALTC project "A Pro-Active Approach to Addressing Student Learning Diversity in Engineering Mechanics"; comprehensive mapping of graduate attribute-related learning activities in the Faculty of Engineering; a 2007-2008 graduate attribute focused curriculum review for the Universidad Technologica Metropolitana in Santiago, Chile; and several internally funded grants focusing on graduate attributes and engineering mechanics. In 2009 he was awarded a Vice-Chancellor's Award for outstanding contribution to teaching and learning in the area of graduate attributes.…
Publication information: Australasian Journal of Engineering Education, Vol. 17, No. 1, June 2011, pp. 19+. © 2010 The Institution of Engineers, Australia.