Eye Movements and the Integration of Visual Memory and Visual Perception

Article excerpt

Because visual perception has temporal extent, temporally discontinuous input must be linked in memory. Recent research has suggested that this may be accomplished by integrating the active contents of visual short-term memory (VSTM) with subsequently perceived information. In the present experiments, we explored the relationship between VSTM consolidation and maintenance and eye movements, in order to discover how attention selects the information that is to be integrated. Specifically, we addressed whether stimuli needed to be overtly attended in order to be included in the memory representation or whether covert attention was sufficient. Results demonstrated that in static displays in which the to-be-integrated information was presented in the same spatial location, VSTM consolidation proceeded independently of the eyes, since subjects made few eye movements. In dynamic displays, however, in which the to-be-integrated information was presented in different spatial locations, eye movements were directly related to task performance. We conclude that these differences are related to different encoding strategies. In the static display case, VSTM was maintained in the same spatial location as that in which it was generated. This could apparently be accomplished with covert deployments of attention. In the dynamic case, however, VSTM was generated in a location that did not overlap with one of the to-be-integrated percepts. In order to "move" the memory trace, overt shifts of attention were required.

The world contains more information than an observer can sample and process at any one moment. Physical properties of the eye limit the scope of perception by constricting the proportion of the environment from which information is received and the quality of the information extracted over different spatial regions. Foveal vision, corresponding to the center of gaze, resolves high spatial frequency and color components of a scene but covers only 2° of the visual world. In contrast, peripheral vision is tuned to lower spatial frequencies and derives degraded color information. In addition to these physical limitations that restrict sampling space, cognitive limitations on memory and attention set bounds on information-processing capabilities. Visual short-term memory (VSTM) is a limited-capacity store estimated to hold three to five items (Irwin, 1992; Irwin & Andrews, 1996; Luck & Vogel, 1997). Although the scope of attention may vary (e.g., C. W. Eriksen & St. James, 1986), its minimum span has been estimated at approximately 1° of visual angle (B. A. Eriksen & C. W. Eriksen, 1974), with more diffuse states of attention leading to less refined processing (Shulman & Wilson, 1987). To view a scene in its entirety, then, observers shift their attention and their gaze from place to place.

Given the temporal extent of visual processing, information extracted from the world at one moment must be analyzed in conjunction with that which was obtained previously. What mechanism enables this analysis? A recent proposal is that visual percepts can be integrated with the contents of VSTM (Brockmole, Irwin, & Wang, 2003; Brockmole & Wang, 2003; Brockmole, Wang, & Irwin, 2002). That is, when an observer generates a representation of a visual stimulus in memory, a subsequently perceived stimulus can be directly incorporated into the existing representation. Thus, discontinuous, but nevertheless related, visual information can be represented as a single unit in memory, thereby cognitively linking once independent pieces of information.

Support for the memory-percept integration hypothesis has been obtained using a temporal integration paradigm (e.g., Di Lollo, 1980; C. W. Eriksen & Collins, 1967). Two arrays of dots were serially presented within a square grid. Together, the two arrays filled all but one space in the grid; subjects had to report the unfilled grid location. As one would expect, the time that separated the arrays was critical. …
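The trial logic of this paradigm can be sketched in a few lines of code. The sketch below is illustrative only: it assumes a 5 × 5 grid (as in Di Lollo, 1980), and the function names (`make_trial`, `report_missing`) are hypothetical, not taken from any published materials. The "ideal observer" simply models a subject who integrates both arrays perfectly in memory.

```python
import random

def make_trial(grid_size=5, seed=None):
    """Split all but one grid cell between two serially presented dot arrays.

    Assumes a 5 x 5 grid, as in Di Lollo (1980); grid size is a free parameter.
    """
    rng = random.Random(seed)
    cells = [(r, c) for r in range(grid_size) for c in range(grid_size)]
    missing = rng.choice(cells)               # the location the subject must report
    filled = [cell for cell in cells if cell != missing]
    rng.shuffle(filled)
    half = len(filled) // 2                   # the two arrays together fill all but one cell
    return filled[:half], filled[half:], missing

def report_missing(array1, array2, grid_size=5):
    """Ideal observer: integrate both arrays in memory, report the empty cell."""
    all_cells = {(r, c) for r in range(grid_size) for c in range(grid_size)}
    remaining = all_cells - set(array1) - set(array2)
    (missing,) = remaining                    # exactly one cell is left unfilled
    return missing

a1, a2, missing = make_trial(seed=1)
assert report_missing(a1, a2) == missing
```

An observer who integrates the two arrays into a single memory representation can solve the task this way; the empirical question is under what timing and attentional conditions human performance approaches this ideal.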