Decision Theory, Reinforcement Learning, and the Brain

Decision making is a core competence for animals and humans acting and surviving in environments they only partially comprehend, gaining rewards and punishments for their troubles. Decision-theoretic concepts permeate experiments and computational models in ethology, psychology, and neuroscience. Here, we review a well-known, coherent Bayesian approach to decision making, showing how it unifies issues in Markovian decision problems, signal detection psychophysics, sequential sampling, and optimal exploration, and we discuss paradigmatic psychological and neural examples of each problem. We discuss computational issues concerning what subjects know about their task and how ambitious they are in seeking optimal solutions; we address algorithmic topics concerning model-based and model-free methods for making choices; and we highlight key aspects of the neural implementation of decision making.

The abilities of animals to make predictions about the affective nature of their environments and to exert control in order to maximize rewards and minimize threats to homeostasis are critical to their longevity. Decision theory is a formal framework that allows us to describe and pose quantitative questions about optimal and approximately optimal behavior in such environments (e.g., Bellman, 1957; Berger, 1985; Berry & Fristedt, 1985; Bertsekas, 2007; Bertsekas & Tsitsiklis, 1996; Gittins, 1989; Glimcher, 2004; Gold & Shadlen, 2002, 2007; Green & Swets, 1966; Körding, 2007; Mangel & Clark, 1989; McNamara & Houston, 1980; Montague, 2006; Puterman, 2005; Sutton & Barto, 1998; Wald, 1947; Yuille & Bülthoff, 1996) and is, therefore, a critical tool for modeling, understanding, and predicting psychological data and their neural underpinnings.

Figure 1 illustrates three paradigmatic tasks that have been used to probe this competence. Figure 1A shows a case of prediction learning (Seymour et al., 2004). Here, human volunteers are wired up to a device that delivers variable-strength electric shocks. The delivery of the shocks is preceded by visual cues (Cue A through Cue D) presented in sequence. Cue A occurs on 50% of the trials; it is followed by Cue B and then a larger shock 80% of the time, or by Cue D and then a smaller shock 20% of the time. The converse is true for Cue C. Subjects can therefore generally expect a large shock when they get Cue A, but this expectation is occasionally reversed. How can they learn to predict their future shocks? An answer to this question is provided in the Markov Decision Problem section; as described there, these functions are thought to involve the striatum and various neuromodulators. Such predictions can be useful for guiding decisions that can have deferred consequences; formally, this situation can be characterized as a Markov decision problem (MDP), as studied in the fields of dynamic programming (Bellman, 1957) and reinforcement learning (Sutton & Barto, 1998).
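To make the prediction-learning machinery concrete, the sketch below simulates temporal-difference (TD) learning of shock predictions in a task with exactly this cue structure. The cue contingencies are taken from the description above; the shock magnitudes, the learning rate, and the use of a plain TD(0) rule are illustrative assumptions, not the fitted model of Seymour et al. (2004).

```python
import random

# Illustrative shock magnitudes and learning rate (assumptions; actual
# intensities in the experiment were calibrated per subject).
LARGE, SMALL = 1.0, 0.2
ALPHA = 0.1

values = {cue: 0.0 for cue in "ABCD"}  # V(s): predicted upcoming shock

def run_trial():
    """One trial: first cue -> second cue -> shock, per the contingencies above."""
    if random.random() < 0.5:          # Cue A on 50% of trials
        first = "A"
        second, shock = ("B", LARGE) if random.random() < 0.8 else ("D", SMALL)
    else:                              # otherwise Cue C, with the converse contingencies
        first = "C"
        second, shock = ("D", SMALL) if random.random() < 0.8 else ("B", LARGE)
    # TD(0) update: delta = r + V(next) - V(current); no shock arrives between cues
    values[first] += ALPHA * (values[second] - values[first])
    # The shock ends the trial, so the terminal value is 0
    values[second] += ALPHA * (shock - values[second])

for _ in range(5000):
    run_trial()

print({cue: round(v, 2) for cue, v in values.items()})
# V(A) approaches 0.8*LARGE + 0.2*SMALL, V(C) the converse;
# V(B) and V(D) approach the magnitudes of the shocks that follow them.
```

The key quantity here is the prediction error delta inside the updates; signals of this general form are what the striatal and neuromodulatory responses mentioned above are thought to reflect.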

Figure 1B depicts a decision task that is closely related to signal detection theory (Green & Swets, 1966) and has been particularly illuminating about the link between neural activity and perception (Britten, Newsome, Shadlen, Celebrini, & Movshon, 1996; Britten, Shadlen, Newsome, & Movshon, 1992; Gold & Shadlen, 2001, 2002, 2007; Shadlen, Britten, Newsome, & Movshon, 1996; Shadlen & Newsome, 1996). In the classical version of this task, monkeys watch a screen of moving dots: a proportion of the dots moves coherently in a single direction, while the rest move in random directions. The monkeys have to report the coherent direction by making a suitable eye movement. By varying the fraction of dots that moves coherently (called the coherence), the task can be made easier or harder. The visual system of the monkey reports evidence about the direction of motion; how should the subject use this information to make a decision? …
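One influential answer, developed in the signal detection and sequential sampling treatments the abstract alludes to, is to accumulate the log likelihood ratio of the two candidate directions over time and commit when it reaches a bound (the sequential probability ratio test; Wald, 1947). The sketch below assumes the momentary motion evidence is Gaussian with a mean proportional to coherence; that noise model, the bound, and all parameter values are illustrative assumptions rather than the task's actual statistics.

```python
import random

def sprt_trial(coherence, true_dir=+1, theta=3.0, sigma=1.0):
    """Decide motion direction with a sequential probability ratio test.

    Assumed evidence model: each sample x ~ Normal(true_dir * coherence, sigma^2).
    The log likelihood ratio of rightward (+1) vs. leftward (-1) motion is
    accumulated until it crosses +theta (choose right) or -theta (choose left).
    """
    llr, t = 0.0, 0
    while abs(llr) < theta:
        x = random.gauss(true_dir * coherence, sigma)
        llr += 2.0 * coherence * x / sigma**2  # LLR increment for equal-variance Gaussians
        t += 1
    return (+1 if llr > 0 else -1), t

# Wald's classic result: the error rate is fixed mainly by the bound
# (roughly 1 / (1 + e**theta)), while coherence mainly controls speed.
for c in (0.05, 0.2, 0.5):
    trials = [sprt_trial(c) for _ in range(2000)]
    accuracy = sum(choice == +1 for choice, _ in trials) / len(trials)
    mean_t = sum(t for _, t in trials) / len(trials)
    print(f"coherence {c:.2f}: accuracy {accuracy:.2f}, mean samples {mean_t:.1f}")
```

Running this shows the qualitative signature seen in the monkey data: lowering the coherence lengthens decisions far more than it degrades accuracy, because the bound, not the stimulus, sets the error rate.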