Academic journal article Perception and Psychophysics

Spatial Context Learning in Visual Search and Change Detection

Article excerpt

Humans conduct visual search more efficiently when the same display is presented for a second time, showing learning of repeated spatial contexts. In this study, we investigate spatial context learning in two tasks: visual search and change detection. In both tasks, we ask whether subjects learn to associate the target with the entire spatial layout of a repeated display (configural learning) or with individual distractor locations (nonconfigural learning). We show that nonconfigural learning results from visual search tasks, but not from change detection tasks. Furthermore, a spatial layout acquired in visual search tasks does not enhance change detection on the same display, whereas a spatial layout acquired in change detection tasks moderately enhances visual search. We suggest that although spatial context learning occurs in multiple tasks, the content of learning is, in part, task specific.

Recent studies suggest that normal adults can rapidly acquire spatial contextual knowledge. For example, when subjects search for a T target among L distractors, search speed is faster for displays that have previously been seen than for novel displays (Chun & Jiang, 1998; for a review, see Chun, 2000). The distractor locations on repeated displays form a consistent visual context, which guides spatial attention to the associated target's location. Such learning, known as contextual cuing, is surprisingly powerful. It occurs after just five or six repetitions and lasts for at least a week (Chun & Jiang, 2003; Jiang, Song, & Rigas, 2005). It is also implicit, because subjects rarely notice the repetition, nor can they recognize repeated displays after learning.
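
To make the paradigm concrete, here is a minimal sketch of a contextual cuing design. This is our illustration, not the authors' code: the grid size, the number of displays, and all function names are hypothetical. A fixed set of "repeated" displays recurs verbatim across blocks, "novel" displays are regenerated each block, and the cuing effect is the reaction time advantage for repeated over novel displays.

```python
import random

GRID = [(x, y) for x in range(8) for y in range(6)]  # hypothetical 8 x 6 grid

def make_display(n_items=12):
    """Sample one target location plus n_items - 1 distractor locations."""
    cells = random.sample(GRID, n_items)
    return {"target": cells[0], "distractors": cells[1:]}

# Repeated displays are generated once and recur in every block,
# so their distractor layout consistently predicts the target location.
repeated_set = [make_display() for _ in range(12)]

def one_block():
    """One block of trials: repeated displays plus fresh novel displays."""
    novel_set = [make_display() for _ in range(12)]
    trials = ([(d, "repeated") for d in repeated_set]
              + [(d, "novel") for d in novel_set])
    random.shuffle(trials)
    return trials

def cuing_effect(results):
    """results: list of (condition, rt_ms) pairs.
    Contextual cuing = mean novel RT minus mean repeated RT."""
    mean = lambda cond: (sum(rt for c, rt in results if c == cond)
                         / sum(1 for c, _ in results if c == cond))
    return mean("novel") - mean("repeated")
```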

What have people learned from a repeated visual search display? Subjects may learn the global spatial layout (the imaginary outline pattern formed by all the items) and know that whenever a given configuration is presented, the target will be in the upper left corner. Alternatively, subjects may learn part of the display or even individual locations. Chun and Jiang (1998) hypothesized that people benefit from repeated displays in visual search because they have learned the global layout, or configuration, of repeated displays. Such a layout includes interitem spatial relationships, in which individual items are represented with reference to one another (Wolfe, 1998a). Configural processing is seen in object tracking (Yantis, 1992) and visual working memory (Jiang, Olson, & Chun, 2000; Phillips, 1974). It allows multiple individual items to be organized into a larger chunk, simplifying the encoding of individual locations.

To find out whether subjects can learn the global layout, Jiang and Wagner (2004) first trained subjects to conduct visual search on displays centered at fixation. After the subjects had seen a given display 20 times, they were tested in a transfer session in which the previously learned displays were rescaled (expanded or contracted) and shifted (left, right, up, or down). Even though individual item locations no longer matched those seen during training, the subjects still searched rescaled and shifted displays more quickly than novel displays, suggesting that preserving the learned configuration permits transfer.
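
The transfer manipulations amount to simple coordinate transformations. The sketch below (continuing the illustrative display format above; the fixation coordinates and scaling factors are assumptions, not values from the study) makes the key point explicit: both rescaling about fixation and shifting move every absolute item location while leaving the interitem spatial relations, and hence the configuration, intact.

```python
def rescale(display, factor, fixation=(0.0, 0.0)):
    """Expand (factor > 1) or contract (factor < 1) all items about fixation.
    Relative positions among items are preserved up to the common scale."""
    fx, fy = fixation
    move = lambda p: (fx + factor * (p[0] - fx), fy + factor * (p[1] - fy))
    return {"target": move(display["target"]),
            "distractors": [move(p) for p in display["distractors"]]}

def shift(display, dx, dy):
    """Translate every item by (dx, dy); interitem relations are unchanged."""
    move = lambda p: (p[0] + dx, p[1] + dy)
    return {"target": move(display["target"]),
            "distractors": [move(p) for p in display["distractors"]]}
```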

Contextual learning in visual search also has a nonconfigural component, however. When subjects were trained on two sets of distractor layouts, both associated with the same target location, their learning transferred completely to a recombined layout, created by swapping the locations of a random subset of distractors from one trained layout with those of the other (Jiang & Wagner, 2004). The recombined display produced an entirely different global layout, since the interitem spatial relationships had changed, but each individual distractor had previously been associated with the target's location. The complete transfer of learning to the recombined condition suggests that visual search supports nonconfigural association as well as configural learning (see also Chun & Jiang, 1998; Jiang & Chun, 2001; Olson & Chun, 2002). …
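
A hedged sketch of the recombination logic follows; the subset size is a free parameter here, and we assume the two trained layouts occupy disjoint locations, as the excerpt does not specify these details. The point it illustrates: every distractor in the recombined display is individually familiar and individually predicts the target, yet the global configuration is new.

```python
import random

def recombine(layout_a, layout_b, k):
    """Replace k randomly chosen distractors of layout_a with k randomly
    chosen distractors of layout_b (k <= the number of distractors).
    Both trained layouts are assumed to share the same target location."""
    kept = random.sample(layout_a["distractors"],
                         len(layout_a["distractors"]) - k)
    borrowed = random.sample(layout_b["distractors"], k)
    return {"target": layout_a["target"],  # the shared, trained target location
            "distractors": kept + borrowed}
```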
