Academic journal article Journal of Business and Educational Leadership

Quantitative Association Rules for Accounting Educational Assessment

INTRODUCTION

The Association to Advance Collegiate Schools of Business (AACSB) requires its member institutions to assess the impact of their curricula on learning (Standard 8, 2013). Data for this assessment should include both direct and indirect measures of learning outcomes. In general, direct measures show what students have actually learned, while indirect measures reflect perceptions of learning from the viewpoint of students, employers, and academic advisors (Palomba and Banta, 1999). Direct measures typically use examinations that cover specific learning goals, or the application of structured rubrics to student presentations, writing, or other assignments. Indirect measures include employer or student surveys, placement or graduation rates, etc. (AACSB White Paper, 2007).

A cornucopia of analytical procedures can be used to evaluate both measures; however, extracting meaningful educational implications from that data can be problematic. Basic statistical analysis of frequently missed questions can provide useful information, but often fails to capture interrelationships between various learning deficiencies. Moreover, subtle patterns between different learning deficiencies cannot be observed or evaluated in a systematic way. Knowing these patterns may provide insight into evaluating the effectiveness of current pedagogy or the learning environment (Merceron and Yacef, 2004).

Data mining techniques enable educators to view assessment data from a richer perspective than what is provided by traditional statistical methods (Romero and Ventura, 2007). The branch of data mining known as Association Rules (AR) is particularly applicable to evaluating educational assessment data because it provides easily understandable antecedent-consequent statements (i.e., if A, then B) in probabilistic terms that could have meaningful implications. This article examines AR as an educational assessment mechanism for evaluating basic financial accounting skills at an AACSB accredited college of business in the Midwest. Recommendations are provided for developing specific accounting educational assessment models with AR and Bloom's Taxonomy. The contribution of this study rests in the presentation of a methodology that can view both categorical and numerical assessment data through a type of "kaleidoscope," and thereby display patterns and relationships that are otherwise invisible.
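The antecedent-consequent rules described above are typically quantified by two probabilistic measures: support (the proportion of records containing both A and B) and confidence (the conditional probability of B given A). The following sketch illustrates these measures on hypothetical assessment data; the question labels, data, and thresholds are illustrative and do not come from the study itself.

```python
# Minimal sketch of association-rule measures on assessment data.
# Data and question labels are hypothetical, for illustration only.

# Each row is the set of exam questions one student missed.
transactions = [
    {"Q1", "Q3"},
    {"Q1", "Q2", "Q3"},
    {"Q2"},
    {"Q1", "Q3"},
    {"Q3"},
]

def support(itemset, transactions):
    """Fraction of students whose missed-question set contains the itemset."""
    return sum(itemset <= t for t in transactions) / len(transactions)

def confidence(antecedent, consequent, transactions):
    """Estimated P(consequent missed | antecedent missed)."""
    return support(antecedent | consequent, transactions) / support(antecedent, transactions)

# Candidate rule: "if a student missed Q1, then the student also missed Q3"
rule_support = support({"Q1", "Q3"}, transactions)    # 3 of 5 students -> 0.6
rule_conf = confidence({"Q1"}, {"Q3"}, transactions)  # 3 of 3 Q1-missers -> 1.0
print(f"support={rule_support:.2f}, confidence={rule_conf:.2f}")
```

A rule such as "if Q1 is missed, then Q3 is missed" with high support and confidence would flag a pattern of linked learning deficiencies that simple missed-question frequencies cannot reveal.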

ASSESSMENT AND TRADITIONAL STATISTICAL METHODS

Bayesian and classical statistics are ideal evaluation tools for testing specific educational hypotheses when assessment exams or projects have been constructed as true (or quasi-) experimental designs, since traditional statistical methods yield precise p-values for rejecting or failing to reject the null hypothesis. Bayesian methods have the advantage of taking into account subjective priors, but can lead to significantly different conclusions even when researchers are analyzing the same data. Classical statistics will always yield the same conclusions (p-values) from different researchers provided that they are analyzing the same data. If the analyst has high-quality subjective priors, then Bayesian methods are superior; otherwise, classical methods are better. Most scientists use classical statistics so that their results will be directly comparable across individual researchers and experiments.

Aside from the need for specific educational hypotheses to test, Bayesian and classical statistics suffer from the need to satisfy the assumptions (normality, factor independence, continuous data, constant variance, and linearity) of the parametric statistical model. While some of the more advanced methods, such as ANOVA, are fairly robust with respect to minor departures from these assumptions, major violations can result in clearly incorrect conclusions (see Berk, 2004 for an extensive critique). Moreover, most educational assessment data is categorical or discrete in nature, and has the potential to contain significant nonlinear relationships. …
