Regularized Gaussian Discriminant Analysis through Eigenvalue Decomposition


1. INTRODUCTION

The basic problem in discriminant analysis is to assign an observation with unknown group membership to one of $K$ groups $G_1, \ldots, G_K$ based on the vector $x = (x_1, \ldots, x_d)'$, with $d$ denoting the number of variables. The assignment function is generally designed to minimize the expected overall error rate and involves assigning a measurement vector $x$ to the group $G_k$ such that

$$\pi_k f_k(x) = \max_{1 \le j \le K} \pi_j f_j(x), \qquad (1)$$

where $\pi_k$ denotes the a priori probability of group $G_k$ and $f_k(x)$ denotes the group conditional density of $x$ $(1 \le k \le K)$. Discriminant analysis models differ essentially in their assumptions about the group conditional densities $f_k(x)$ $(k = 1, \ldots, K)$. The most commonly applied method, linear discriminant analysis (LDA), assumes that the group conditional distributions are $d$-variate normal with mean vectors $\mu_k$ and a common covariance matrix $\Sigma$. When the covariance matrices $\Sigma_k$ are not assumed equal, the method is called quadratic discriminant analysis (QDA). The parameters $\mu_k$ and $\Sigma_k$ are unknown and must be estimated from a training set consisting of $(x_i, z_i)$, $i = 1, \ldots, n$, where $x_i$ is the vector-valued measurement and $z_i$ is the group indicator for subject $i$. The parameters are generally chosen to maximize the likelihood of the training sample. This leads to the plug-in estimates

$$\hat{\mu}_k = \bar{x}_k = \frac{1}{n_k} \sum_{i=1}^{n} I\{z_i = k\}\, x_i, \qquad (2)$$

where $n_k = \sum_{i=1}^{n} I\{z_i = k\}$. For LDA,

$$\hat{\Sigma}_k = S = \frac{1}{n} \sum_{j=1}^{K} \sum_{i=1}^{n} I\{z_i = j\}\,(x_i - \hat{\mu}_j)(x_i - \hat{\mu}_j)'; \qquad (3)$$

for QDA,

$$\hat{\Sigma}_k = S_k = \frac{1}{n_k} \sum_{i=1}^{n} I\{z_i = k\}\,(x_i - \hat{\mu}_k)(x_i - \hat{\mu}_k)'. \qquad (4)$$
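To make this concrete, here is a minimal numpy sketch (ours, not the article's) of the plug-in estimates (2)-(4) together with the assignment rule (1). The function names are hypothetical, and estimating the priors $\pi_k$ by the group proportions $n_k/n$ is an assumption we add for completeness.

```python
import numpy as np

def fit_gaussian_da(X, z, K, pooled=False):
    """Plug-in estimates (2)-(4): priors, group means, covariance matrices.

    X: (n, d) data matrix; z: length-n integer array of labels in {0, ..., K-1}.
    pooled=True gives the common LDA covariance S; pooled=False the QDA S_k.
    """
    n, d = X.shape
    pis = np.array([np.mean(z == k) for k in range(K)])         # priors pi_k (assumed n_k / n)
    mus = np.array([X[z == k].mean(axis=0) for k in range(K)])  # means mu_k, eq. (2)
    if pooled:                                                  # LDA: common S, eq. (3)
        R = X - mus[z]
        Sigmas = np.repeat((R.T @ R / n)[None], K, axis=0)
    else:                                                       # QDA: per-group S_k, eq. (4)
        Sigmas = np.array([np.cov(X[z == k].T, bias=True) for k in range(K)])
    return pis, mus, Sigmas

def classify(x, pis, mus, Sigmas):
    """Assign x to the group maximizing pi_k f_k(x), rule (1), via log densities."""
    scores = [np.log(p) - 0.5 * np.linalg.slogdet(S)[1]
              - 0.5 * (x - m) @ np.linalg.solve(S, x - m)
              for p, m, S in zip(pis, mus, Sigmas)]
    return int(np.argmax(scores))
```

With `pooled=True` this reproduces the linear rule; with `pooled=False`, the quadratic one.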

Regularization has become an important subject of investigation in discriminant analysis because in many cases the size $n$ of the training dataset is small compared to the number $d$ of variables (see McLachlan 1992, chap. 5), and standard methods such as QDA and LDA may behave poorly in such cases. Generally, regularization techniques for discriminant analysis make use of real-valued regularization parameters. The regularized discriminant analysis (RDA) of Friedman (1989) specifies the values of a complexity parameter and of a shrinkage parameter to design an intermediate classifier between the linear, quadratic, and nearest-means classifiers. The complexity parameter $\alpha$ $(0 \le \alpha \le 1)$ controls the amount by which the $S_k$ are shrunk toward $S$. The other parameter, $\gamma$ $(0 \le \gamma \le 1)$, controls shrinkage of the group conditional covariance matrix estimates toward a specified multiple of the identity matrix. RDA performs well but does not provide easily interpretable classification rules.
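As an illustration of the two-parameter scheme just described, the following hypothetical sketch blends the group scatter with the pooled scatter via $\alpha$ and then shrinks toward a multiple of the identity via $\gamma$. The weighted form follows our reading of Friedman (1989), with the excerpt's $\alpha$ in place of Friedman's notation; the function name and details are assumptions, not code from the article.

```python
import numpy as np

def rda_covariance(X, z, k, alpha, gamma):
    """Sketch of a Friedman-style two-parameter RDA covariance for group k.

    alpha in [0, 1] shrinks the group estimate toward the pooled one;
    gamma in [0, 1] then shrinks toward a multiple of the identity.
    """
    n, d = X.shape

    def scatter(mask):  # cross-product matrix about the group mean
        R = X[mask] - X[mask].mean(axis=0)
        return R.T @ R

    S_k, n_k = scatter(z == k), int(np.sum(z == k))
    S = sum(scatter(z == j) for j in np.unique(z))  # pooled within-group scatter
    # Complexity parameter alpha: blend group and pooled scatter matrices.
    Sigma_k = ((1 - alpha) * S_k + alpha * S) / ((1 - alpha) * n_k + alpha * n)
    # Shrinkage parameter gamma: pull toward (tr(Sigma_k)/d) * I.
    return (1 - gamma) * Sigma_k + (gamma / d) * np.trace(Sigma_k) * np.eye(d)
```

Setting $\alpha = 0, \gamma = 0$ recovers the QDA estimate $S_k$; $\alpha = 1$ the pooled LDA estimate; and $\gamma = 1$ a spherical, nearest-means-style estimate.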

In this article we propose an alternative approach to designing regularized classification rules in the Gaussian framework. Following Banfield and Raftery (1993) and Flury, Schmid, and Narayanan (1993), our approach is based on the reparameterization of the covariance matrix $\Sigma_k$ of a group $G_k$ in terms of its eigenvalue decomposition,

$$\Sigma_k = \lambda_k D_k A_k D_k', \qquad (5)$$

where $\lambda_k = |\Sigma_k|^{1/d}$, $D_k$ is the matrix of eigenvectors of $\Sigma_k$, and $A_k$ is a diagonal matrix such that $|A_k| = 1$, with the normalized eigenvalues of $\Sigma_k$ on the diagonal in decreasing order. The parameter $\lambda_k$ determines the volume of the density contours of group $G_k$; $D_k$ determines its orientation, and $A_k$ determines its shape. By allowing some but not all of these quantities to vary between groups, we obtain easily interpreted Gaussian discriminant methods. Variations on assumptions on the parameters $\lambda_k$, …
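The decomposition (5) is straightforward to compute from an eigendecomposition. The sketch below (our own illustration, assuming numpy) recovers the volume $\lambda$, orientation $D$, and shape $A$ of a given covariance matrix and checks the round trip.

```python
import numpy as np

def volume_shape_orientation(Sigma):
    """Decompose Sigma as lambda * D @ A @ D.T, per (5).

    lambda = |Sigma|^(1/d) is the volume; D holds the eigenvectors (orientation);
    A holds the normalized eigenvalues in decreasing order, with |A| = 1 (shape).
    """
    d = Sigma.shape[0]
    eigvals, D = np.linalg.eigh(Sigma)    # eigenvalues in ascending order
    order = np.argsort(eigvals)[::-1]     # reorder to decreasing
    eigvals, D = eigvals[order], D[:, order]
    lam = np.prod(eigvals) ** (1.0 / d)   # lambda = |Sigma|^(1/d)
    A = np.diag(eigvals / lam)            # normalized so that |A| = 1
    return lam, D, A

# Round-trip check on a toy covariance matrix:
Sigma = np.array([[4.0, 1.0], [1.0, 2.0]])
lam, D, A = volume_shape_orientation(Sigma)
assert np.allclose(lam * D @ A @ D.T, Sigma)
assert np.isclose(np.linalg.det(A), 1.0)
```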