# Tests of Hypotheses in Overdispersed Poisson Regression and Other Quasi-Likelihood Models

## Article excerpt

1. INTRODUCTION

Regression analysis of count data using maximum likelihood techniques based on the Poisson distribution (Frome, Kutner, and Beauchamp 1973) is increasingly used in toxicology (Stead, Hasselblad, Creason, and Claxton 1981), epidemiology (Breslow and Day 1987; Frome and Checkoway 1985; Holford 1983), and other biomedical research areas. A common complication is that the observed variation among replicate or near-replicate counts exceeds that predicted from Poisson sampling theory. Although such excess variation has little effect on estimation of the regression coefficients of primary interest (Cox 1983), standard errors, tests, and confidence intervals may be seriously in error unless it is appropriately taken into account.
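The excess variation described above is easy to reproduce numerically. The following sketch (with assumed parameter values, not taken from the article) draws Poisson counts whose means are themselves random, and shows the sample variance exceeding the sample mean, in contrast to the equality implied by pure Poisson sampling:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical illustration: latent Poisson means drawn from a gamma
# distribution induce extra-Poisson variation in the observed counts.
mu, shape = 10.0, 4.0                # overall mean and gamma shape (assumed)
lam = rng.gamma(shape, mu / shape, size=100_000)  # latent means, E[lam] = mu
y = rng.poisson(lam)                 # observed overdispersed counts

print(y.mean())                      # close to mu = 10
print(y.var())                       # close to mu + mu**2/shape = 35, not 10
```

For a pure Poisson sample the two printed values would agree; here the mixing inflates the variance to roughly `mu + mu**2/shape`.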

A classical approach to the problem is to treat the Poisson means associated with each observed count as latent variables that are sampled from a specified parametric distribution. Most authors have considered a gamma mixing distribution, which leads to a negative binomial distribution for the observed data (Manton, Woodbury, and Stallard 1981; Margolin, Kaplan, and Zeiger 1981). A log-normal mixing distribution has also been advocated (Hinde 1982). If the shape parameter of the gamma mixing distribution is held constant while the mean varies, the likelihood equations based on the negative binomial model are unbiased and the maximum likelihood estimates of the mean parameters are consistent, regardless of the true variance (Lawless 1987). On the other hand, holding the gamma scale parameter constant while the mean varies leads to negative binomial likelihood equations that are unbiased only under the assumed parametric model (Hausman, Hall, and Griliches 1984).
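The two gamma parameterizations mentioned above imply different mean/variance relationships, which a small simulation (with assumed values, using standard moment identities for the gamma-mixed Poisson rather than anything specific to the cited papers) makes concrete. A gamma with shape `k` and scale `s` has mean `k*s` and variance `k*s**2`, so the mixed Poisson has variance `mu + mu**2/k` when the shape is fixed (quadratic in the mean) and `mu*(1 + s)` when the scale is fixed (proportional to the mean):

```python
import numpy as np

rng = np.random.default_rng(1)
n = 200_000

def mixed_poisson_var(shape, scale):
    """Empirical variance of a gamma-mixed Poisson sample."""
    lam = rng.gamma(shape, scale, size=n)
    return rng.poisson(lam).var()

mu = 8.0
# Fixed gamma shape k: Var(y) = mu + mu**2 / k  (here 8 + 64/2 = 40).
k = 2.0
print(mixed_poisson_var(k, mu / k))
# Fixed gamma scale s: Var(y) = mu * (1 + s)   (here 8 * 2.5 = 20).
s = 1.5
print(mixed_poisson_var(mu / s, s))
```

The first parameterization is the negative binomial model with fixed shape discussed in the text; the second yields the linear variance function often used in quasi-likelihood work.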

An alternative approach, which is abbreviated QL/M in the sequel, is to use only the mean and variance structure implied by the mixed Poisson model, estimating the regression coefficients by quasi-likelihood and the variance parameter by the method of moments (Breslow 1984; Williams 1982). Although quasi-likelihood yields asymptotically efficient estimates of the regression coefficients for the negative binomial model with fixed shape parameter, with the method of moments there is some efficiency loss for variance estimation (Lawless 1987). Quasi-likelihood, however, leads quite generally to consistent estimates of the regression coefficients even if the variance function is misspecified (Moore and Tsiatis 1989; Williams 1988).
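As a rough sketch of the QL/M idea, the fragment below fits a log-linear model by iteratively reweighted least squares and then estimates the dispersion by the Pearson moment method. It assumes the simple variance function Var(y) = phi * mu, under which the quasi-likelihood point estimates coincide with Poisson maximum likelihood; it is an illustration of the general scheme, not the exact algorithm of Breslow (1984) or Williams (1982):

```python
import numpy as np

def ql_poisson(X, y, n_iter=25):
    """Sketch of QL/M under Var(y) = phi * mu with a log link:
    quasi-likelihood (IRLS) for the regression coefficients,
    Pearson moment estimator for the dispersion phi."""
    beta = np.zeros(X.shape[1])
    for _ in range(n_iter):
        mu = np.exp(X @ beta)
        W = mu                         # IRLS working weights, log link
        z = X @ beta + (y - mu) / mu   # working response
        beta = np.linalg.solve(X.T @ (W[:, None] * X), X.T @ (W * z))
    mu = np.exp(X @ beta)
    # Moment estimator: Pearson chi-squared over residual degrees of freedom.
    phi = np.sum((y - mu) ** 2 / mu) / (len(y) - X.shape[1])
    return beta, phi
```

On overdispersed data the routine returns `phi` noticeably above 1, and the coefficient estimates remain consistent even though the variance function is only a working assumption, in line with the robustness results cited above.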

An important problem in applications is to test the statistical significance of added variables in a regression model. Three tests are available under likelihood and quasi-likelihood theory: the Wald test, based on comparison of estimated coefficients with their standard errors; the (quasi) likelihood ratio test, based on comparison of deviances under full and reduced models (McCullagh and Nelder 1983, sec. 8.4); and the (quasi) likelihood score test (McCullagh and Nelder 1983, sec. 11.4; Pregibon 1982), which uses the estimating equations themselves for inference. Potential drawbacks to the Wald and deviance tests include the fact that both require fitting the full model, that the deviance is not always explicitly calculable, and that the Wald test is not invariant under model reparameterization (Vaeth 1985). Thus the score test is often the method of choice.
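The appeal of the score test is that it needs only the reduced-model fit. The sketch below implements the standard generalized-linear-model score-test algebra for adding one covariate to a log-linear model with working variance phi * mu; it illustrates the construction discussed above, not the specific robust statistics developed later in the article:

```python
import numpy as np

def score_test(X0, x_new, y, beta0, phi):
    """Quasi-score test (1 df) for adding the column x_new to a
    log-linear model fitted under the reduced design X0.
    beta0: reduced-model coefficient estimates; phi: dispersion estimate.
    Standard GLM score-test algebra under Var(y) = phi * mu."""
    mu = np.exp(X0 @ beta0)
    u = x_new @ (y - mu)             # quasi-score for the added term
    W = mu                           # Fisher weights, log link
    # Variance of u, with a projection term because beta0 is estimated.
    XtWX_inv = np.linalg.inv(X0.T @ (W[:, None] * X0))
    h = X0.T @ (W * x_new)
    v = phi * (np.sum(W * x_new**2) - h @ XtWX_inv @ h)
    return u**2 / v                  # refer to chi-squared on 1 df
```

Note that only `mu` from the reduced model enters: neither the full model nor its deviance needs to be computed, which is exactly the practical advantage cited in the text.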

The primary goal of this article is to develop versions of these tests, especially the score test, that are applicable to overdispersed quasi-likelihood models and that are robust against possible misspecification of the mean/variance relationship. A secondary goal is to evaluate the performance of QL/M estimates and test statistics in moderately sized samples, using computer simulation. Although the simulations and much of the discussion are limited to mixed Poisson regression problems, with little extra work we develop estimating equations and test statistics that apply more generally to quasi-likelihood models with structural parameters in the variance function. …