Universities and governments are increasingly interested in using quality measures that provide evidence that can be used to improve the quality of student learning, as well as for benchmarking and funding decisions. Standard scales for assessing the students' experiences of their learning and of the teaching they receive during their studies are growing in popularity and use. At the level of the students' whole program, instruments such as the National Survey of Student Engagement (NSSE) in North America (Kuh, 2001), the National Student Survey (NSS) in England (Surridge, 2008) and the Course Experience Questionnaire (CEQ) in Australia (Ramsden, 1991) are being used.
Questionnaires of this sort tap into a range of aspects of the student learning experience, and usually include an item on students' overall satisfaction with their course. In the case of the Australian questionnaire, the areas selected for monitoring, such as the effectiveness of teaching, have been found in research studies to correlate with the quality of student learning (Lizzio et al., 2002; Ramsden, 1991). The questionnaire results are therefore more about students' experience of their learning than about their satisfaction.
While these questionnaires provide information about specific areas of student experience that can be the focus of learning improvement interventions at the whole degree level, such changes are often hard to achieve. For this reason, questionnaires such as the Unit of Study Evaluation (USE) (ITL, 2008) have been developed as a means of collecting data from students on their experience of learning at the individual subject or unit of study level (these terms are used synonymously in this study). Some of the USE survey items correspond to the factor scales of the CEQ (Ramsden, 1991). Because of this correspondence, the USE can provide staff of the department, school or faculty with an indication of the relative contributions of different subjects to faculty performance on the CEQ factor scales. The results of such studies are also used in the design of new courses and degree programs, as evidence for promotions and awards, and for various policy decisions.
This type of data from students is quite different from data collected about an individual teacher's teaching performance, though most of the research on student feedback has been done on these Student Evaluations of Teaching (SET) instruments focused on personnel evaluation (Marsh, 1987; Richardson, 2005). The factors associated with SET results have been explored extensively. Some of the earlier studies (Marsh, 1984) concluded that SET results depended more on the instructor than on the course itself. Other studies have shown relationships between faculty members' research activities and their undergraduate teaching (Prince et al., 2007).
In engineering, Centra (1993) and Dee (2007), among others, have compared the SET results of engineering students with those in other disciplines. The views of students regarding what constitutes a "quality engineering education" have also been studied qualitatively (Pomales-Garcia, 2007). What has not previously been explored is the relation between students' learning experience and some of the factors that may be used to effect change to the quality of that experience.
The research described in this paper addresses five questions:
1. How do USE scores change over time?
2. Are students' learning experiences different between the years of study?
3. Are students' experiences different between engineering sub-disciplines?
4. Does class size correlate with USE scores?
5. What aspects of coordinators' attributes are related to USE scores?
Section 2 reviews the literature on factors associated with variation in the quality of the student learning experience at the degree level. Section 3 describes the study method. Section 4 reports the results, and section 5 discusses the key findings and their implications. …