Psychonomic Bulletin & Review

Using Priors to Formalize Theory: Optimal Attention and the Generalized Context Model

Published online: 7 August 2012

© Psychonomic Society, Inc. 2012

Abstract Formal models in psychology are used to make theoretical ideas precise and allow them to be evaluated quantitatively against data. We focus on one important, but underused and incorrectly maligned, method for building theoretical assumptions into formal models, offered by the Bayesian statistical approach. This method involves capturing theoretical assumptions about the psychological variables in models by placing informative prior distributions on the parameters representing those variables. We demonstrate this approach of casting basic theoretical assumptions in an informative prior by considering a case study that involves the generalized context model (GCM) of category learning. We capture existing theorizing about the optimal allocation of attention in an informative prior distribution to yield a model that is higher in psychological content and lower in complexity than the standard implementation. We also highlight that formalizing psychological theory within an informative prior distribution allows standard Bayesian model selection methods to be applied without concerns about the sensitivity of results to the prior. We then use Bayesian model selection to test the theoretical assumptions about optimal allocation formalized in the prior. We argue that the general approach of using psychological theory to guide the specification of informative prior distributions is widely applicable and should be routinely used in psychological modeling.

Keywords Model selection · Bayesian statistics · Bayesian inference · Informative priors


Introduction

The Bayesian statistical framework is becoming increasingly important and popular for implementing and evaluating psychological models, including models of psychophysical functions (Kuss, Jäkel & Wichmann, 2005), stimulus representations (Lee, 2008), category learning (Lee & Vanpaemel, 2008; Vanpaemel & Storms, 2010), signal detection (Rouder & Lu, 2005), response times (Rouder, Lu, Speckman, Sun & Jiang, 2005), and decision making (Wetzels, Grasman & Wagenmakers, 2010). It is widely recognized in statistics (Gelman, Carlin, Stern & Rubin, 2004; Jaynes, 2003) and, increasingly, in psychology (Dienes, 2011; Kruschke, 2010, 2011; Lee & Wagenmakers, 2005) that the Bayesian approach offers a complete and coherent framework for making inferences using models and data. Bayesian parameter estimation allows model parameters to be estimated in a way that naturally represents uncertainty and is applicable even when there are few data. Bayesian model selection allows models to be compared in a way that automatically implements an Ockham's razor, balancing goodness of fit with complexity, including when the models are non-nested (Lee, 2008; Myung & Pitt, 1997; Shiffrin, Lee, Kim & Wagenmakers, 2008; Vanpaemel & Lee, 2012).
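To make these two modes of inference concrete, the sketch below works through a toy beta-binomial example that is not part of the original article; the data counts (58 successes in 100 trials) and the Beta(1, 1) prior are our own illustrative assumptions. The posterior mean illustrates parameter estimation, and the ratio of marginal likelihoods against a fixed point-null model illustrates model selection, including the automatic complexity penalty.

```python
import math

def log_beta(a, b):
    # Log of the Beta function B(a, b).
    return math.lgamma(a) + math.lgamma(b) - math.lgamma(a + b)

def log_choose(n, k):
    # Log of the binomial coefficient C(n, k).
    return math.lgamma(n + 1) - math.lgamma(k + 1) - math.lgamma(n - k + 1)

def log_marginal_binomial(k, n, a, b):
    # Marginal likelihood of k successes in n trials when the rate theta
    # has a Beta(a, b) prior: the likelihood integrated over theta.
    return log_choose(n, k) + log_beta(k + a, n - k + b) - log_beta(a, b)

k, n = 58, 100     # hypothetical data: 58 successes in 100 trials

# Parameter estimation: the posterior is Beta(k + a, n - k + b); with this
# much data its mean barely depends on a and b.
a, b = 1.0, 1.0    # uniform prior on theta
posterior_mean = (k + a) / (n + a + b)

# Model selection: compare a point-null model (theta = 0.5 exactly) against
# the flexible model in which theta is free under its Beta prior.
log_m_null = log_choose(n, k) + n * math.log(0.5)
log_m_alt = log_marginal_binomial(k, n, a, b)
bf_alt_null = math.exp(log_m_alt - log_m_null)

print(f"posterior mean of theta: {posterior_mean:.3f}")
print(f"Bayes factor (alt vs null): {bf_alt_null:.2f}")
```

With these counts the Bayes factor comes out at roughly 0.45, slightly favoring the simpler point-null model even though the flexible model fits better at its best-fitting theta: the marginal likelihood averages the fit over the whole prior, which is the Ockham's razor at work.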

Priors as problems, priors as opportunities

Even advocates of the Bayesian approach, however, often view these benefits as coming at a cost, in the form of having to specify prior distributions. In Bayesian parameter estimation, placing priors on parameters is relatively uncontroversial because, as long as the data are sufficiently informative, the choice of prior has little impact on inference. In contrast, Bayesian model selection is generally sensitive to the exact choice of prior. Because priors are often deemed to bring an unwanted level of arbitrariness to the conclusions, placing prior distributions on parameters is much more controversial in this context. For example, in their seminal paper on Bayesian model selection in psychology, Myung and Pitt (1997, p. 91) note that "the Bayesian method [for model selection], however, has its drawbacks. One is that parameter priors are required to compute the marginal likelihoods [and hence, the Bayes factor]." In practice, these objections have often led researchers to adopt uniform, flat, or otherwise weakly informative priors, in an attempt to limit the information injected into the model selection (e.g., …
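Although the excerpt breaks off here, the asymmetry just described is easy to demonstrate numerically. The following sketch is again a toy example with made-up numbers, not from the article: a normal model with known unit variance, where widening the prior on the mean barely moves the posterior mean but changes the Bayes factor against a point null without bound.

```python
import math

def normal_pdf(x, mean, var):
    # Density of a Normal(mean, var) distribution at x.
    return math.exp(-(x - mean) ** 2 / (2 * var)) / math.sqrt(2 * math.pi * var)

# Hypothetical data: n observations with sample mean xbar and known variance 1.
n, xbar, sigma2 = 50, 0.3, 1.0
se2 = sigma2 / n  # sampling variance of the sample mean

# H0: mu = 0 exactly.  H1: mu ~ Normal(0, tau^2) for some prior sd tau.
for tau in (0.1, 1.0, 10.0, 100.0):
    tau2 = tau ** 2
    # Posterior mean of mu under H1 (conjugate shrinkage estimator).
    post_mean = (tau2 / (tau2 + se2)) * xbar
    # Marginal density of xbar: Normal(0, se2) under H0, and
    # Normal(0, se2 + tau2) under H1 (the prior integrates out analytically).
    bf_null_alt = normal_pdf(xbar, 0.0, se2) / normal_pdf(xbar, 0.0, se2 + tau2)
    print(f"prior sd {tau:6.1f}: posterior mean {post_mean:.3f}, "
          f"BF (null vs alt) {bf_null_alt:8.2f}")
```

For any prior standard deviation of about 1 or more, the posterior mean settles near the sample mean of 0.3, yet the Bayes factor in favor of the null grows roughly in proportion to the prior width (from roughly 0.8 at tau = 1 to roughly 75 at tau = 100), so the model-selection conclusion flips purely as a function of the prior. This is exactly the sensitivity that motivates the objections quoted above, and it is why the specification of the prior deserves theoretical, rather than arbitrary, justification.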
