Trigger Points: Enhancing Generic Skills in Accounting Education through Changes to Teaching Practices
Watts, T., McNair, C. J., Australasian Accounting Business & Finance Journal
In 2001 a small Australian university implemented particular intervention strategies designed to improve specific educational outcomes in its accounting degree program. These outcomes mirrored the three core areas of the Graduate Careers Council of Australia's Course Experience Questionnaire: (1) good teaching, (2) overall satisfaction, and (3) generic skills. Five areas were identified for intervention: (1) the effective allocation of full-time staff, (2) the effective use of sessional staff, (3) greater commitment by sessional staff, (4) the introduction of common subject outlines, and (5) a proactive response to student evaluations. The results indicate a statistically significant improvement in 2003 in the three core areas, supporting the argument that improving students' satisfaction with their educational experience will improve student outcomes. A similar, though less significant, improvement in grades in the three final-year accounting subjects was identified. Possible explanations for the decline from 2004 are also explored.
Key words: Accounting education; student performance; student satisfaction; intervention
This paper describes the events leading to, and the results of, an intervention strategy implemented specifically to improve student outcomes, as measured by the three core outcome areas of the Graduate Careers Council of Australia (GCCA). These outcome areas are: good teaching, generic skills and overall satisfaction. The course chosen for the study was a small accounting program. Five mechanisms were identified which, it was believed, would influence the attitudes of students towards their teaching and learning experience, and through these measures improve satisfaction. A sixth mechanism was identified that would provide a monitoring and accountability function. Considerable effort was spent on identifying and responding to statements relating to teaching and learning in the student evaluation process.
In 2001 the School of Business at the subject university established a working party in the accounting faculty to implement specific intervention strategies that were designed to improve specific student outcomes as measured by the Graduate Careers Council of Australia's Course Experience Questionnaire. While the university's performance measures for each of the three core outcome areas of the questionnaire (good teaching, generic skills and overall satisfaction) were comparable to, and in some areas exceeded, the national performance measures, it was felt that improvements could be made by improving students' satisfaction with their educational experience. The impetus for improved student satisfaction was a recognition that (1) student satisfaction is, in itself, an important educational outcome, (2) increased student satisfaction may increase graduate support for the university once students become practising professionals, and (3) improved satisfaction, reflected in various publicly available publications (such as the Australian Good Universities Guide), may entice students to the subject university.
The subject university is a small government-funded public university operating in New South Wales, Australia. The accounting program, Bachelor of Business (Accounting), is accredited for professional membership by the professional accounting bodies in Australia. The student body consists mainly of school leavers with some international and mature-age students. The accounting program is designed as a three-year, 'full-time' course, with little accommodation for part-time students or evening offerings. As with many accounting programs in Australia, the first year is a common year for all Bachelor of Business students irrespective of their major.
(3) THE PROJECT'S FRAMEWORK
To provide a rigorous base for the project the working party adopted the three primary components identified by Argyris (1970) as critical to an intervention process: valid and useful information, free choice and internal commitment.
For the first component (valid and useful information), the project relied on the fact that the information and data gathered could be verified, openly gathered, tested in other disciplines and used to effect change. The second component (free choice), centres on the options identified to effect change, the assurance that they were voluntary and not based on institutional coercion, and were proactive, not reactive. With respect to the third component (internal commitment), the involvement of all accounting discipline staff and the school's Accounting Advisory Committee provided a high level of ownership and a feeling of collective commitment.
The structure also drew from the Total Quality Control literature in that it focused the accounting faculty on issues of pride and concern for the program's reputation to provide the necessary incentives to ensure quality improvements. In particular, three basic principles of quality improvement were adopted: creating a simple process, making the problems visible and creating a climate for improvement (Stasey and McNair 1990).
(4) PURPOSE AND CONTRIBUTION
The project had one aim: to improve student outcomes, as measured by the three core values of the Graduate Careers Council of Australia's Course Experience Questionnaire. This was to be achieved by improving the student's satisfaction with their educational experience.
One of the major goals of the many educational reforms in Australia over the past two decades was excellent, quality education. Investigating the various aspects of student satisfaction can assist higher education in meeting that goal. The contribution of the project is the achievement of improved outcomes through the identification and implementation of techniques to improve student satisfaction.
(5) ASPECTS OF IMPROVING STUDENT SATISFACTION
Improved student satisfaction through better teaching and learning has been well documented (Anderson, Banks and Leary, 2002; Yazici 2004; Helms, Alvis and Willis, 2005; Shaftel and Shaftel, 2005). Yazici's (2004) study concludes that collaboration between teaching staff improves student satisfaction through understanding and enhanced critical thinking and communication skills. In a similar study Helms et al. (2005) suggest that satisfaction can be improved through combining business subjects to stimulate student learning through a greater understanding of the interrelationship between business subjects. Shaftel and Shaftel (2005) demonstrated that students' study skills and attitudes to learning improved significantly following an instructional intervention programme that redesigned an introductory accounting course.
5.1 Effective allocation of full-time staff
Student satisfaction can be enhanced by matching teaching staff with academic offerings, and ensuring that students are engaged with the content of learning tasks in a way that would enable them to reach understanding (Ramsden 1992). Putting this into context, McInnis, James and McNaught (1995) reported that of the first-year students surveyed in 1994, barely half found their subjects interesting, slightly less than half said that staff were good at explaining things, only 53 per cent believed that the academic who taught them was enthusiastic about the subject, and only 43 per cent agreed they got satisfaction from studying the subject.
5.2 Effective use of sessional staff
Improving satisfaction by continually having the material presented by enthusiastic and well-resourced teaching staff is difficult enough; however, current trends towards the increased use of sessional academics (part-time/casual) make the task more onerous. Shah's (2003) study of workforce restructuring in the vocational education and training sector in Victoria (Australia) between 1993 and 1998 points to a significant and rapid increase in sessional positions. This trend has continued in Australia, with a 48.3 per cent increase in sessional staff between 1995 and 2004 (DEST 2004). The body of literature from the United Kingdom and the United States suggests that this is an international trend, and many studies reflect the dual concerns of the qualifications and experience of sessional staff (Charfauros and Tierney 1999; Kift 2002; Rothwell 2002; Ramsden 2003). In Australia the concern about qualifications resulted in the commissioning of a report into professional development for university teaching, which recommended that there should be an expectation that sessional staff undertake a minimal level of teaching preparation before being offered a contract for teaching (Dearn, Fraser and Ryan, 2002). With respect to experience, Dixon and Scott (2004) argued that sessional staff members' lack of teaching experience in student-centred practices, coupled with the tenuous nature of their employment, may affect their willingness to experiment with innovative teaching strategies.
Notwithstanding the issues above, sessional staff have been recognised for making a significant contribution to university teaching because of their diverse backgrounds, their career paths and their skills (Harvey, Fraser and Bewes, 2005). In order to reconcile the need for sessional staff with the issues of qualifications and experience, many universities are targeting sessional staff as a strategic focus to increase the quality of teaching and learning practices (Dixon and Scott 2004; Harvey et al. 2005).
5.3 Proactive responses to student evaluations
One method of assessing student satisfaction is through student evaluations (Boud 1988; Entwistle and Tait 1990; Burns 1991; Chen and Hoshower 1998; Green, Calderon and Reider, 1998). Irrespective of the drivers, internal or external, universities have felt it necessary, as part of their goal to improve teaching and learning, to increase the emphasis on student evaluations. This has produced two outcomes: increased teaching effort by staff dedicated to improving the educational experience, as well as higher levels of student satisfaction resulting from improved teaching (Kanagaretnam, Mathieu and Thevaranjan, 2003). However, in order to improve student satisfaction through this mechanism, two practical issues for teaching staff are critical: the need for early and clear communication of expected learning outcomes, and the provision of timely and diagnostic feedback (McInnis et al. 1995; Thornton and Hornyak 2003).
An empirical relationship between improved student satisfaction and student evaluation has been demonstrated in a variety of studies. Pearson and Beasley (1999) reported students feeling that they had gained a greater understanding throughout a course through progressive feedback and actions taken in response to students' recommendations for positive change. Lindahl and Fanelli (2002) examined how student problems, reported in the student evaluations, were resolved through applying the principles of continuous improvement in the following course. This included directly confronting the students to clarify the problem, enlisting their aid in improving the course, and eliciting specific feedback, all of which substantially improved the level of student satisfaction. It is also possible that improved satisfaction is a reflection of students feeling empowered and involved in the course (Lancaster and Strand, 2001). However, Green et al. (1998) found in their survey that many student evaluations included items that students were incapable of responding to, and 20 per cent captured no data on the teaching and learning dimension. This lack of clear communication of expected learning objectives frustrated the students and greatly reduced their level of satisfaction.
5.4 Influence on student grades
Any review of the literature on improved student satisfaction and student evaluation would be incomplete without reference to student grades. According to Wallace and Wallace (1998) student evaluations measure the level of students' 'happiness' with the course, which includes workloads and grades. Sabot and Wakeman-Linn (1991) have argued that student satisfaction is an increasing function of their grades, and grades have a direct influence on students' utility function. It follows therefore that students' utility is a decreasing function of their learning effort (Allgood 2001). This view supports the findings of Cole (1993), Dreyfuss (1994) and Beaver (1997), who argue that to obtain a higher student rating academic staff have succumbed to an expectation of a reduction in student knowledge and a manipulation of grades, evidenced by an increased number of students receiving high distinctions and distinctions over the past 20 years.
Conversely, research by Howard and Maxwell (1982) demonstrated that the relationship between grades and satisfaction may be caused by other variables, including student motivation and progress in the course, rather than contamination due to grading leniency. Their results indicate that there 'is no evidence that a grade-influencing-satisfaction interpretation is more likely than its opposite, namely, a satisfaction-causing-grades one' (175). These findings were reinforced by Pike (1991), who, having examined the relationship between grades and satisfaction, found that satisfaction exerted a stronger influence on grades than grades on satisfaction.
Contemporary research provides a more reasoned insight into the relationship between student grades and student satisfaction. Umbach and Porter (2002), using survey data from more than 1,300 students, concluded that the characteristics of academic departments had a significant impact on student satisfaction. These characteristics included: student contact with faculty staff, research emphasis and proportion of female undergraduates. In a similar study, Wiers-Jenssen, Stensaker and Grogaard, (2002) deconstructed the determinants of student satisfaction, and suggested that factors to improve satisfaction included: academic and pedagogic quality of teaching, social climate, aesthetic aspects of the physical infrastructure and the quality of services from the administrative staff.
(6) IDENTIFYING THE IMPROVEMENT MECHANISMS
Given the significant investment needed, in terms of both dedicated resources and commitment to continuous quality improvement, to achieve the objective of increased student satisfaction, the accounting discipline focused on areas that would improve relationships between students and academics, and through this, enhance the learning experience (Hodgson 1984). The first step was to take an inventory of the mechanisms available within the accounting discipline that could be used to improve student satisfaction without imposing additional cost on the school, faculty, or student. The areas identified by the inventory were: (1) the effective allocation of full-time staff to primary accounting subjects, (2) the effective use of sessional staff, (3) greater commitment by sessional staff through improved communication and involvement, (4) the introduction of common subject outlines, and (5) a proactive response to student evaluation feedback. In addition, (6) the school's Accounting Advisory Committee was used to make the measures visible and provide a mentoring and accountability measure. These mechanisms and their expected outcomes are shown in Table 1.
(7) IMPLEMENTING THE MECHANISMS
7.1 Effective allocation of full-time staff
Prior to 2002, accounting academic staff were allowed, to some degree, to select the subjects and teaching times that suited their interests and personal preferences. However, this self-selection had occasionally resulted in a misalignment of abilities and teaching styles. The primary task was to identify the academic staff member best suited to teach the first-year fundamental accounting subject. This acknowledged students' need to be engaged with the content of learning tasks in a way that would enable them to reach understanding (see McInnis et al. 1995; Ramsden 1992). Also, because the first year of the degree is a common year for all students prior to selecting their major, the strategy was designed to encourage students to undertake the accounting major.
Previously, full-time staff taught up to three subjects from different accounting sub-disciplines (financial accounting, management accounting, company accounting and auditing). This meant that some staff taught in areas outside their discipline specialisation. While this mismatch did not result in the poor level of teaching suggested by some academic researchers (see Feldman 1976; Eble 1988; Entwistle and Tait 1990), it was reflected in both staff and student dissatisfaction. Staff found that the time needed to prepare for subjects outside their specialisation reduced their time available for research, and resulted in a less than adequate presentation to the students. Students reported that some teaching staff appeared somewhat uninterested and lacked the depth of knowledge to engage in a meaningful discussion. This reduced student satisfaction was reflected in the student evaluation reports.
As a result the school decided that, from 2002, full-time accounting academics would teach in no more than two subjects each semester, one consistent with the staff member's specialisation and the other chosen by the staff member. The result was higher levels of staff satisfaction and a belief they were improving the quality of student learning. It also affected the responsibility of sessional staff, because some were now required to take on the task of lecturer-in-charge of a subject in their discipline area. From the students' perspective, the evaluation reports indicated that several key principles of effective teaching had been achieved by this decision. These included improved interest and explanation; intellectual challenge and independence; active engagement; and understanding (for more discussion of these and other key principles of effective teaching, see Whitehead 1967; Brown 1978; Johnson, Maruyama, Johnson, Nelson and Skon, 1981; Tang 1990; Ramsden 1992).
7.2 The effective use of sessional staff
Sessional or part-time academic staff have been used to teach accounting and other related discipline areas in order to manage high demand and specialist subjects for decades. The advantages of relevant industry and professional experience, together with the acknowledged disadvantages of lack of student contact and supervision problems, have been well documented (Churchman 2002). Within the Bachelor of Business (Accounting) program in 2001 there were ten dedicated accounting subjects (including two specialist electives) and three full-time accounting academics.
A substantial number of sessional accounting staff were needed to teach accounting and related subjects. Prior to 2001 the appointment of sessional staff was essentially based on grace and favour, with limited attention to academic and professional qualifications or industry/commercial and teaching experience. Primarily this was due to the competition for sessional staff between the three major metropolitan universities in the Sydney area.
During 2001 the course coordinator for the Bachelor of Business (Accounting), together with the assistant head of school responsible for the employment of sessional staff, began rebuilding the academic profile of the sessional accounting academics. The immediate priority was identified as obtaining staff with relevant industry and professional experience together with demonstrated teaching experience in higher education. To achieve this, current full-time accounting staff were asked to provide a short list of three or four academic colleagues they felt they could work with and who would add value to the course. Essentially the sessional accounting academics for 2002 were 'head hunted'.
In 2003 the priority focused on improvements to academic and professional qualifications. While the 2002 sessional staff all had an undergraduate qualification in accounting, the working party felt that a postgraduate degree, together with membership of one of the two Australian professional accounting bodies, would add a new dimension to the quality aspect of the task. To assist with this the school placed advertisements calling for expressions of interest, and the short listed applicants were interviewed informally by at least one of the three full-time accounting staff members. Thus the starting point to improve the satisfaction of students (and through this, the accounting program) was the improved academic and professional qualifications of sessional accounting staff, together with a balance of teaching and industry experience relevant to accounting students.
7.3 Greater commitment by sessional staff
Following the review of the effective use of sessional staff, which included an evaluation of both academic and professional qualifications and teaching experience, some sessional staff were appointed as lecturer-in-charge of mainstream specialist accounting subjects. In order to avoid problems encountered in the past (and at other universities) relating to an ongoing commitment to students, the working party decided to encourage sessional staff to be more proactive by involving them in school activities where they could help identify and resolve specific issues. This was achieved by modifying the function of the school's Accounting and Finance Research Group.
The Accounting and Finance Research Group was introduced in 2002 as an informal vehicle to encourage the research output of the accounting academics, and where appropriate, cross-discipline research. Because of its informal nature, matters other than research were often discussed, including teaching methods and strategies, lectures, tutorials and various aspects of academic administration.
It was decided that sessional staff, particularly those appointed as lecturer-in-charge, be invited to attend. This resulted in a positive reaction from sessional staff, and regular attendance at meetings. It also provided a non-threatening environment where controversial issues could be discussed, such as the course coordinators' expectations relating to student consultation times, involvement in student evaluation, student discipline, examination preparation, marking, the input of student results and other administrative tasks. In addition, it provided sessional staff who aspired to full-time academic positions with opportunities to involve themselves in various research projects. The outcome of these informal meetings was greater sessional staff involvement in student evaluation exercises, where previously this had been voluntary and few had participated. Also, there was agreement that the feedback would be discussed and the aggregate made available within the accounting discipline.
7.4 The introduction of common subject outlines
For some time prior to 2002, the design and content of subject outlines was a matter of choice by the lecturer-in-charge. However, from 2002 it was agreed that a common format be adopted, which would provide students with clear goals, details of appropriate assessment and timely and constructive feedback. Previously, academic staff had expressed disquiet about inconsistencies with respect to assessment tasks, including the excessive use of multiple choice and assessment based on attendance. It was agreed that a common format would provide consistency across a number of properties that have been identified with good teaching. These included the use and type of assessment methods, a requirement for giving timely and quality feedback on student work, and a commitment to making it absolutely clear what has to be understood and why (Ramsden 1992). In addition, subject outlines for subjects where the lecturer-in-charge was sessional needed to be reviewed by the course coordinator. This approach proved quite successful, and from 2005 has been adopted as Faculty policy for subject outlines across all disciplines.
7.5 Proactive response to student evaluations
The subject university, like almost every university in Australia, uses student evaluation surveys as part of its strategy to improve the quality of teaching and learning through a reflective approach to quality enhancement (see Biggs 2003). In the late 1990s, the school adopted a cluster of twenty compulsory statements that would be included in each evaluation to gauge specific attributes considered appropriate to the mission of the university and school. All teaching staff, full-time and sessional, were requested to subject themselves to evaluation, although this was not mandatory within the school.
In the case of the accounting discipline, all staff (full-time and sessional) agreed that they would participate and that the evaluations would be analysed and openly discussed. Adopting this view shifted the evaluation process from a focus on evaluating staff performance to a focus on identifying and resolving problems of concern to students. The accounting staff agreed on four major approaches relating to student feedback. First, if any of the evaluation statements scored greater than 10 per cent in the categories disagree/strongly disagree, the specific category would be investigated. Second, all written comments would be given the highest priority for investigation and correction or emulation. Third, additional feedback would be provided to the students through a report presented and discussed in the first tutorial of the particular subject in the incoming semester. Fourth, a yearly comparison, by subject, would be provided to the school's Accounting Advisory Committee.
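The first of these screening rules can be sketched as a simple filter. The statement labels, percentages and threshold constant below are hypothetical illustrations, not the school's actual evaluation data:

```python
# Flag any evaluation statement whose combined disagree/strongly-disagree
# share exceeds the 10 per cent investigation threshold described above.
THRESHOLD = 10.0  # per cent; the trigger level adopted by the accounting staff

def flag_statements(scores: dict) -> list:
    """scores maps statement label -> combined disagree/strongly-disagree %."""
    return [label for label, pct in scores.items() if pct > THRESHOLD]

# Hypothetical subject-evaluation results
scores = {"clear goals": 6.5, "timely feedback": 14.2, "workload": 9.8}
print(flag_statements(scores))  # ['timely feedback']
```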
While the faculty had agreed on a common set of twenty statements, the accounting discipline agreed to focus its efforts on statements that related specifically to teaching and learning. They focused on the key aspects of organisation, presentation, content, assessment, lecturers' characteristics and ethical behaviour. The nine statements singled out are shown in Table 2.
The student body indicated, through formal and informal feedback, that they appreciated the openness of the staff, together with the information provided during their first tutorial on actions that had been taken to address their concerns. It also allowed students to evaluate the importance that teaching staff placed on student issues or dissatisfaction, and improved their level of satisfaction in knowing that their concerns were taken seriously (see McInnis et al. 1995; Thornton and Hornyak 2003; Pearson and Beasley 1999; Lindahl and Fanelli 2002).
7.6 Involvement of the school's accounting advisory committee
The decision to involve the school's Accounting Advisory Committee was seen by the working party as both a proactive and a defensive strategy. The Accounting Advisory Committee's role is to monitor the progress of the accounting program to ensure that it is meeting the needs of the key stakeholders, including the accounting profession. It is composed of accounting practitioners, representatives from commerce and industry, a representative from the professional accounting bodies, a senior accounting academic from another university, a student representative and academic staff from the accounting discipline.
By involving the Accounting Advisory Committee, the accounting discipline publicly set progressive goals and deadlines to achieve the improvements considered necessary to raise the level of student satisfaction. It also provided a degree of accountability and introduced a control mechanism, should the Accounting Advisory Committee consider that the parameters of the improvement program were exceeded. The Committee also acted as an independent body to advise and monitor the changes. In addition, it provided a vehicle that could pursue politically sensitive issues through the school or faculty, should the need arise.
(8) HYPOTHESES
H1 (null): There would be no change in the core outcome of good teaching following the introduction of the intervention program.
H1 (alt): There would be a change in the core outcome of good teaching following the introduction of the intervention program.
H2 (null): There would be no change in the core outcome of generic skills following the introduction of the intervention program.
H2 (alt): There would be a change in the core outcome of generic skills following the introduction of the intervention program.
H3 (null): There would be no change in the core outcome of overall satisfaction following the introduction of the intervention program.
H3 (alt): There would be a change in the core outcome of overall satisfaction following the introduction of the intervention program.
H4 (null): There would be no change in student grades following the introduction of the intervention program.
H4 (alt): There would be a change in student grades following the introduction of the intervention program.
(9) MEASURING IMPROVED STUDENT SATISFACTION
In order to assess any improvement in student satisfaction, three measures were used: (1) changes in the responses by students to the evaluation of specific accounting subjects, (2) final-year students' satisfaction ratings from the graduate course experience questionnaires, and (3) a comparison of the grades obtained in the three final-year subjects of the accounting major. The changes in the responses to student evaluations relate to the 2003 academic year. For the purpose of the project the subjects are identified as subjects A, B and C.
Table 3 shows the proportion of students in the three final-year accounting subjects who agreed/strongly agreed with the nine statements specifically related to teaching and learning (see Table 2 for questions). These statements were extracted from the twenty statements used by the faculty.
The results in Table 3 suggest that by concentrating on the student concerns about perceived deficiencies in teaching and learning, satisfaction is improved in these areas. The responses reflect the changes from 2001 to 2003, and it is argued that this improvement is reflected in the improved outcome in the course experience questionnaire data.
The course experience questionnaire is a composite indicator, collected by the Commonwealth Government and based on student perceptions of teaching quality generalised across a particular academic discipline or field of study. It is represented by an average rating on various aspects of teaching performance and includes three distinct but related core dimensions of teaching performance, specifically: a good teaching scale, a generic skills scale, and an overall satisfaction item (DEET, 1991).
Table 4 shows the changes in each of the three core areas from 2000 to 2004 at the 'agree and strongly agree' level, including the dramatic improvement in 2003. The subject university mean for good teaching increased from 31.6 in 2000 to 60.2 in 2003, dropping to 32.7 in 2004. Similar improvements can be seen in generic skills: 68.1 in 2000 to 75.4 in 2003, and down to 41.7 in 2004. Likewise, overall satisfaction rose from 70.3 in 2000 to 82.6 in 2003, and down to 59.2 in 2004.
The results using the same data set but restricting it to responses at the 'strongly agree' level are shown in Table 5. At this level the mean for good teaching increased from 5.9 in 2000 to 16.7 in 2003, dropping to 6.8 in 2004. Similar improvements were observed in generic skills: 13.1 in 2000 to 23.2 in 2003, and down to 17.2 in 2004. Likewise, overall satisfaction rose from 16.2 in 2000 to 34.8 in 2003, and down to 15.9 in 2004.
To test for any improvement in grades, the standard normal distribution (Z score) was used to test for differences between the means. The grades obtained in the three final-year subjects of the accounting major for 2002, 2003 and 2004 were used to test the differences between the means of 2002 and 2003, and of 2003 and 2004, the periods where changes would be expected. In addition to the comparison between years of the aggregate scores, a comparison was carried out between years for specific grade levels: distinction, credit and pass.
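The between-year comparison of means can be sketched as a standard two-sample Z statistic. The grade lists below are invented for illustration and are not the program's actual data:

```python
import math

def z_score(x, y):
    """Z statistic for the difference between the means of two samples,
    using sample variances (n - 1 denominator) and separate variance terms."""
    mx, my = sum(x) / len(x), sum(y) / len(y)
    vx = sum((v - mx) ** 2 for v in x) / (len(x) - 1)
    vy = sum((v - my) ** 2 for v in y) / (len(y) - 1)
    return (mx - my) / math.sqrt(vx / len(x) + vy / len(y))

# Hypothetical final-year grades for two consecutive years
grades_2003 = [65, 70, 72, 68, 75, 71]
grades_2002 = [60, 64, 66, 59, 70, 63]
z = z_score(grades_2003, grades_2002)  # positive if 2003 mean exceeds 2002 mean
```

In practice the same comparison would be repeated at each grade level (distinction, credit, pass) as well as on the aggregate scores.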
ANALYSIS OF THE DATA
The statistics reported in this paper are those publicly available from the Graduate Careers Council of Australia and presented on the Australian Vice-Chancellors' Committee (2005) website. Unfortunately, due to confidentiality issues the raw data could not be made available. Therefore, it is the final published statistics, not the collected raw data, that are analysed. The mean displayed at both the university level and the national level is a linear transformation of the Likert scale percentages where 'strongly disagree' (SD) = -100, 'disagree' (D) = -50, 'undecided' (U) = 0, 'agree' (A) = +50, and 'strongly agree' (SA) = +100.
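The linear transformation above can be sketched as follows. This assumes the standard CEQ weighting (SD = -100 through SA = +100) applied to the published percentage breakdowns; the response profile used is invented for illustration, not the study's data.

```python
# Reconstructing a CEQ-style scale mean from Likert response percentages.
WEIGHTS = {"SD": -100, "D": -50, "U": 0, "A": 50, "SA": 100}

def ceq_mean(percentages):
    """percentages: dict mapping response category to % of respondents."""
    total = sum(percentages.values())
    if total == 0:
        raise ValueError("no responses")
    return sum(WEIGHTS[k] * p for k, p in percentages.items()) / total

# Hypothetical response profile (percentages summing to 100):
example = {"SD": 5, "D": 10, "U": 15, "A": 45, "SA": 25}
print(ceq_mean(example))  # 37.5
```

The resulting mean ranges from -100 (all 'strongly disagree') to +100 (all 'strongly agree'), which is the scale on which the published university and national means are expressed.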
To test for homogeneity of variance, Hartley's F_MAX procedure was used. The resulting F_MAX statistic is displayed in Tables 6a, 6b and 6c. Using a level of significance of .05, the hypothesis of equality of group variances is rejected if the computed F_MAX exceeds the upper-tail critical value of Hartley's F_MAX distribution based on c and (n - 1) degrees of freedom. In this case, c = 2, and (n - 1) = 1925 for GTS, 1925 for GSS, and 1953 for OSI. The critical value of F_MAX at the .05 level of significance is 1.00. As the F statistic is greater than 1.00 for each component, the null hypothesis of equal variances is rejected.
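Hartley's statistic itself is simply the ratio of the largest to the smallest sample variance. A minimal sketch, using invented group variances rather than the study's figures:

```python
def hartley_fmax(variances):
    """Hartley's F_MAX: largest sample variance / smallest sample variance.

    Equal variances are rejected if the result exceeds the upper-tail
    critical value of the F_MAX distribution with c and (n - 1) degrees
    of freedom, where c is the number of groups.
    """
    return max(variances) / min(variances)

# Two hypothetical group variances (c = 2 groups, as in the paper):
fmax = hartley_fmax([4.2, 3.1])
print(round(fmax, 3))
```

Because the ratio is always at least 1, any computed F_MAX materially above the critical value signals heterogeneous variances and motivates the separate-variance test used next.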
The calculation of Hartley's F_MAX suggests that the variances are not equal. To test for the difference between the means of two independent populations with unequal variances, Cochran's test was adopted. In this test separate variance estimates are included in the test statistic, while the critical value of t is obtained by weighting the critical value for each sample by its variance of the mean (s²/n). The hypothesis of no difference is rejected where the test statistic is greater than the critical value of t. The t statistic was calculated as the university mean minus the national mean, divided by the square root of the sum of the squared university standard deviation divided by the university population and the squared national standard deviation divided by the national population. For each year, the population of the components (GTS, GSS and OSI) at the university level was consistent; however, there were some variations between the populations of each component at the national level.
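The separate-variance statistic described above can be sketched as follows. The means, standard deviations and populations used are invented for illustration; they are not the study's figures.

```python
import math

def separate_variance_t(mean_u, s_u, n_u, mean_n, s_n, n_n):
    """Separate-variance (Cochran/Welch-type) t statistic:

        t = (mean_u - mean_n) / sqrt(s_u**2 / n_u + s_n**2 / n_n)

    mean_u, s_u, n_u: university mean, std deviation, population
    mean_n, s_n, n_n: national mean, std deviation, population
    """
    se = math.sqrt(s_u**2 / n_u + s_n**2 / n_n)
    return (mean_u - mean_n) / se

# Hypothetical inputs on the CEQ's -100..+100 scale:
t = separate_variance_t(mean_u=60.2, s_u=45.0, n_u=1926,
                        mean_n=48.0, s_n=47.0, n_n=50000)
print(round(t, 4))
```

The hypothesis of no difference is rejected when this t exceeds the variance-weighted critical value, mirroring the decision rule applied in Tables 6a, 6b and 6c.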
The t statistics suggest that the change in 2003 for the GTS is significant at the one per cent level (t statistic of 4.1948 against a critical value of t of 2.422), for the GSS at the five per cent level (t statistic of 2.5144 against a critical value of t of 2.448), and for the OSI at the 10 per cent level (t statistic of 1.8382 against a critical value of t of 2.447). These data, together with the critical value of t at a significance level of .05, are displayed in Tables 6a, 6b and 6c.
Given the above, the null hypotheses of no change in the good teaching (hypothesis 1), the generic skills score (hypothesis 2) and the overall satisfaction index (hypothesis 3) are rejected, and the alternative hypotheses accepted.
The results of the test for improvement in grades were mixed (see Table 7). At the aggregate level, only subject A and subject C exhibited a significant change at the one per cent level: subject A during 2002-2003 and subject C during 2003-2004, the period of expected improved student performance indicated in Tables 6a, 6b and 6c. Within subjects, at the specific grades of distinction, credit and pass, subjects B and C both exhibited a significant change at the distinction grade during 2003-2004. No significant change was observed at the credit grade in any of the three subjects. At the pass grade a significant change was observed in subject B during 2002-2003.
While the results were mixed, the significant changes in the grades reported for subject C in 2003-2004 do provide some support for the argument that improved student satisfaction can translate into improved student grades. The fact that this is not evidenced in subjects A and B during the same period could reflect many variables, including the perceived difficulty of the subject, the subject's popularity, and the impact of teaching staff during this period. That said, there was a significant difference at the specific level of 'distinction' during 2003-2004 for subject B.
The purpose of this paper was to investigate the results of an intervention strategy by members of the accounting discipline within a small Australian university designed to improve the student outcomes in the Bachelor of Business (Accounting) program by improving student satisfaction as measured by the three core outcome areas of the GCCA. Five areas were targeted, with the expectation of improving student satisfaction, and through this the quality of the teaching and learning experiences of students within the program. It is argued that improvements in the effective use of sessional staff, the effective allocation of full-time staff, the proactive response to student evaluations, greater commitment by sessional staff and the introduction and use of common subject outlines, resulted in improvements in the three key performance indicators of student satisfaction: good teaching, generic skills and overall satisfaction.
The paper argues that the improved levels of satisfaction reflected in the subject evaluation program (Table 3) were driven by improvements in the five areas identified as necessary to improve student satisfaction (Table 1). Further, it is concluded that the significant changes in the three core components of the course experience questionnaire in 2003 (good teaching, generic skills and overall satisfaction; Table 4) resulted from improvements identified through analysis of the nine statements of teaching and learning (Table 2). This improvement in 2003 is statistically significant at each component level (Tables 6a, 6b and 6c). These results support the findings of Lindahl and Fanelli (2002), Thornton and Hornyak (2003), Dixon and Scott (2004), Yazici (2004), Harvey et al. (2005) and Shaftel and Shaftel (2005) with respect to: improving satisfaction through greater collaboration between teaching staff; matching staff to areas of interest/expertise; improved teaching preparation by sessional staff; and clear communication of expected learning outcomes and timely, diagnostic feedback through student evaluations.
With respect to improved grades, the overall results were mixed, although subject C exhibited a significant change at the one per cent level during 2003-2004, both at the aggregate level and at the specific level of 'distinction' (Table 7). This corresponded with the significant change in the GCCA core outcome measures, in particular the improved generic skills. A similar change was observed for subject B, but only at the specific level of 'distinction'. Initially, this may provide some support for the findings of Pike (1991), who found that satisfaction exerted a strong influence on grades. However, there is nothing to indicate a causal relationship between the specific implementation mechanisms used and students' performance. At best, it could support the findings of Howard and Maxwell (1982), Umbach and Porter (2002) and Wiers-Jenssen et al. (2002), namely that improved grades probably result from other variables, including student motivation, progress in the course, and characteristics of the academic department, embracing the academic and pedagogic quality of teaching.
While the gains made in 2003 appear to have been lost in 2004, with each component dropping below the national mean as measured by the three core outcome areas of the GCCA (Table 4), a different result is observed at the disaggregated level of 'strongly agree'. Again, at this level of aggregation there was a drop from the 2003 results; however, in this case the results for each category remained above the national mean and also above the university mean of 2002 (Table 5). The drop from the 2003 level may have two possible explanations. First, the dramatic improvement obtained in 2003 was too great to maintain in the long run. Second, such improvements in student outcomes need to be continuously and consistently reinforced in order to institutionalise the process of ongoing change and learning.
The findings of the study may have been affected by several factors that could limit their validity. First, the inability to obtain the primary data from either the GCCA or the AVCC meant that the analysis was an exercise in reverse statistical engineering. While this did not present insurmountable problems, it is possible that some relevant data was missing or interpreted incorrectly. Second, the student cohort reflected in the GCCA data represents only the students responding to the course experience questionnaire. While this number is consistent across the period 2002 to 2004, it represents only about 60 per cent of the graduating students. Third, the analysis of grades was limited to 2002-2003 and 2003-2004, due to the unavailability of data together with a 2001 change in Government policy on how the data would be recorded.
Overall, the results support the findings of Umbach and Porter (2002) and Wiers-Jenssen et al. (2002), which argue that improved student satisfaction has no single cause. Improved satisfaction can be obtained through a variety of influences, including student contact with faculty staff, the perceived quality of teaching, social climate, and aesthetic aspects of the physical infrastructure. The intervention project was successful in improving student satisfaction, as measured by the core outcome measures of the GCCA, because it incorporated many of these factors. Therefore, the intervention mechanisms, or satisfaction motivators, chosen and implemented had a positive impact.
Statements Relating Specifically to Teaching and Learning
* My experience in this subject has contributed to my development as an independent learner.
* My experience in this subject has enhanced my ability to solve problems.
* The tutorials, workshops, seminars contributed constructively to my learning in this subject.
* The material presented in each class was conveyed clearly and logically.
* Completing subject activities was a useful learning strategy for me.
* I believe that the content presented in this subject reflected the declared outcomes/objectives.
* Completing assessment tasks contributed to my learning in this subject.
* The knowledge and teaching style of the lecturer promoted interest and learning in this subject.
* This subject has contributed to my understanding of ethical issues relevant to the subject area.
Allgood, S., (2001). Grade targets and teaching innovations. Economics of Education Review, 20, 485-493.
Anderson, L. R., Banks, S. R., and Leary, P. A. (2002). The effects of interactive television courses on student satisfaction. Journal of Education for Business, 77, 164-168.
Argyris, C. (1970). Intervention Theory and Method. Addison-Wesley, Reading, MA.
Australian Vice-Chancellors' Committee. (2005). Course Experience Questionnaire Data 2004. AVCC, Canberra.
Beaver, W. (1997). Declining college standards: It's not the course, it's the grades. College Board Review, 181, 2-7.
Biggs, J. (2003). Teaching for Quality Learning at University. (2nd ed), Open University Press, Maidenhead, UK.
Boud, D. (ed). (1988). Developing Student Autonomy in Learning. Kogan Page Ltd, London.
Brown, G. (1978). Lecturing and Explaining. Methuen, London.
Burns, R. D. (1991). Study and stress among first year overseas students at an Australian university. Higher Education Research and Development. 10, 61-76.
Charfauros, K. H. and Tierney, W. G. (1999). Part-time faculty in colleges and universities: trends and challenges in a turbulent environment. Journal of Personnel Evaluation in Education. 13, 141-151.
Chen, Y. and Hoshower, L. (1998). Assessing student motivation to participate in teaching evaluations: an application of expectancy theory. Issues in Accounting Education 13, 531-549.
Churchman, D. (2002). Voices of the academy: Academics' responses to a corporatized university. Critical Perspectives on Accounting, 13, 643-656.
Cole, W. (1993). By rewarding mediocrity we discourage excellence. The Chronicle of Higher Education, January 6.
Dearn, J., Fraser, K., and Ryan, Y. (2002). Investigation into the provision of professional development for university teaching in Australia: A discussion paper. A DEST Commissioned Project. Australian Government Publishing Service, Canberra.
Department of Employment, Education and Training. (1991). Performance Indicators in Higher Education. Australian Government Publishing Service, Canberra.
Dixon, H., and Scott, S. (2004). Professional development programs for international lecturers: Perspectives and experiences related to teaching and learning. Paper presented at the 18th IDP Australian International Education Conference, Sydney Convention Centre, New South Wales, 5-8 October.
Drefuss, S. (1994). My fight against grade inflation. College Teaching, 41, 149-152.
Eble, K. E. (1988). The Craft of Teaching. (2nd ed), Jossey-Bass, San Francisco.
Entwistle, N. J., and Tait, H. (1990). Approaches to learning, evaluations of teaching, and preferences for contrasting academic environments. Higher Education, 19, 169-194.
Feldman, K. A. (1976). The superior college teacher from the student's view. Research in Higher Education, 5, 243-288.
Green, B. P., Calderon, T. G., and Reider, B. P. (1998). A content analysis of teaching evaluation instruments used in accounting departments. Issues in Accounting Education, 13, 15-30.
Harvey, M., Fraser, S., and Bowes, J. (2005). Quality teaching and sessional staff. Paper presented at the Higher Education & Development Society of Australia Conference, University of Sydney, Sydney, 3-6 July.
Helms, M. M., Alvis, J. M., and Willis, M. (2005). Planning and implementing shared teaching: An MBA teamteaching case study. Journal of Education for Business, 81, 29-34.
Hodgson, V. (1984). Learning from lectures. In F. Marton, et al. (eds). The Experience of Learning, Scottish Academic Press, Edinburgh.
Howard, G. S., and Maxwell, S. E. (1982). Do grades contaminate student evaluations of instructors. Research in Higher Education, 16, 175-188.
Johnson, D., Maruyama, D., Johnson, R., Nelson, D., and Skon, L. (1981). The effects of cooperative, competitive and individualistic goal structures on achievement: A meta-analysis. Psychological Bulletin, 89, 47-62.
Kanagaretnam, K., Mathieu, R., and Thevaranjan, A. (2003). An economic analysis of the use of student evaluations: Implications for universities. Management and Decision Economics, 24, 1-13.
Kift, S. (2002). Assuring quality in the casualisation of teaching, learning and assessment: Towards best practice for first year experience. Paper presented at the 6th Pacific Rim, First Year in Higher Education Conference, University of Canterbury, Christchurch, New Zealand, July.
Lancaster, K. A. S., and Strand, C. A. (2001). Using the team-learning model in a managerial accounting class: An experiment in cooperative learning. Issues in Accounting Education, 16, 549-567.
Lindahl, F. W., and Fanelli, R. (2002). Applying continuous improvement to teaching in another culture. Journal of Accounting Education, 20, 285.
McInnis, C., James, R., and McNaught, C. (1995). First year on campus: Diversity in the initial experiences of Australian undergraduates. A Commissioned Project of the Committee for the Advancement of University Teaching, University of Melbourne, Centre for the Study of Higher Education, Melbourne.
Pearson, C. A. L., and Beasley, C. J. (1999). Matching the values of university staff and students: a case study. Paper presented at the Higher Education & Development Society of Australia Conference, Melbourne, 12-15 July.
Pike, G. R. (1991). The effects of background, coursework, and involvement on students' grades and satisfaction, Research in Higher Education, 32, 15-30.
Ramsden, P. (1992). Learning to Teach in Higher Education. Routledge, London.
Ramsden, P. (2003). Learning to Teach in Higher Education. (2nd ed), Routledge, London.
Rothwell, F. (2002). Your flexible friends: Sessional teachers in the UK further education sector, commitment, quality and service delivery. Journal of Further and Higher Education, 26, 363-375.
Sabot, R., and Wakeman-Linn, J. (1991). Grade inflation and course choice. Journal of Economic Perspectives, 5, 159-170.
Shaftel, J., and Shaftel, T. (2005). The influence of effective teaching in accounting on student attitudes, behaviour, and performance. Issues in Accounting Education, 20, 231-246.
Shah, C. (2003). Employment shifts in the technical and further education workforce in Victoria. Education Economics, 11, 193-208.
Stasey, R., and McNair, C. J. (1990). Crossroads: A JIT Success Story. Dow Jones-Irwin, Homewood.
Tang, K. C. C. (1990). Cooperative learning and study approaches. Paper presented at the 7th Annual Conference of the Hong Kong Educational Research Association, University of Hong Kong, Hong Kong, November.
Thornton, J. M., and Hornyak, M. J. (2003). Make student feedback meaningful: 'Customizing' course critiques. In B. N. Schwartz and J. E. Ketz (eds), Advances in Accounting Education: Teaching and Curriculum Innovations, 5, Elsevier Science, Amsterdam.
Umbach, P. D., and Porter, S. R. (2002). How do academic departments impact student satisfaction? Understanding the contextual effects of departments. Research in Higher Education, 43, 209-234.
Wallace, J. J., and Wallace, W. A. (1998). Why the costs of student evaluations have long since exceeded their value. Issues in Accounting Education, 13, 443-447.
Whitehead, A. N. (1967). The Aims of Education and Other Essays. Free Press, New York.
Wiers-Jenssen, J., Stensaker, B., and Grogaard, J. B. (2002). Student satisfaction: towards an empirical deconstruction of the concept. Quality in Higher Education, 8, 183-195.
Yazici, H. J. (2004). Student perceptions of collaborative learning in operations management classes. Journal of Education for Business, 80, 110-118.
T. Watts* C. J. McNair[dagger]
* University of Wollongong, email@example.com
[dagger] United States Coast Guard Academy, USA,
Copyright ©2008 Australasian Accounting Business and Finance Journal and Authors.
Dr. Ted Watts* FCPA, CMA
School of Accounting and Finance
University of Wollongong
Dr. C. J. McNair CMA
Professor of Financial Management
United States Coast Guard Academy
The authors would like to thank the participants of the 2007 Accounting and Finance Association of Australia and New Zealand Conference and research seminars at the University of Adelaide and the University of Wollongong for their helpful comments in the development of earlier drafts of this paper. In addition, the support and advice of Professor Andrew Worthington and the two anonymous reviewers of the paper are recognized. The authors retain all responsibility for the arguments presented and their potential errors.
* Corresponding author
Telephone 61 2 4221-4005
Fax 61 2 4221-4297
Publication information: Article title: Trigger Points: Enhancing Generic Skills in Accounting Education through Changes to Teaching Practices. Contributors: Watts, T. - Author, McNair, C. J. - Author. Journal title: Australasian Accounting Business & Finance Journal. Volume: 2. Issue: 2. Publication date: June 2008. Page number: 5+. © University of Wollongong Dec 2008.