In inefficient search, target detection is faster for repeated than for regenerated layouts. This effect, called contextual cuing, has been assumed to arise from implicit learning of local spatial relationships between targets and distractors. However, a more global influence from distractors far from the target has not been tested. In this study, the search field was divided into upper and lower halves containing a repeated and a regenerated configuration set, respectively. In Experiment 1, the positions of the two sets were either exchanged or not, so that both their relative and their absolute positions were the same or different. In Experiment 2, the repeated set appeared alone in either the same or the other half of the screen (same or different absolute position). The contextual cuing effect remained when only absolute position was changed, but not when both absolute and relative positions were changed. These results suggest that contextual cuing depends on relative positional information.
Visual context, such as the spatial configuration of objects, is a critical factor in the guidance of attention to a target location (Biederman, 1972; Biederman, Mezzanotte, & Rabinowitz, 1982). For example, Biederman et al. (1982) showed that a target placed in an irregular location was harder to detect than a target placed in its typical location. Recently, Chun and his colleagues have developed a paradigm using a visual search task to examine how the visual context of the search display affects search performance. In a series of studies, they demonstrated that the association between a target location and the distractor configuration is encoded when the same display is presented repeatedly, and that the configuration can be used as a cue for the target location in later blocks, even if observers are unaware that the configurations are repeated (Chun, 2000; Chun & Jiang, 1998, 2003; Chun & Nakayama, 2000; Olson & Chun, 2002). This facilitatory effect is termed contextual cuing.
In the paradigm used by Chun and Jiang (1998), participants were asked to search for a target (e.g., a T) among distractors (e.g., Ls). There were two conditions: old and new. In the old condition, the search arrays presented in the first block were repeated throughout the later blocks (i.e., the locations of the target and distractors were fixed). In the new condition, only the target location in a given search array was fixed across blocks, and the distractor locations were randomly changed in every block. The results showed that reaction times (RTs) gradually became faster in the old condition than in the new condition. Participants did not notice the repetition of the search arrays in the old condition, indicating that the visual context (i.e., the association between the target location and the distractor configuration) was implicitly learned and was used to search for the target.
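As an illustration of this trial structure, the old and new conditions might be sketched as follows. This is a minimal sketch, not the original stimulus code: the grid size, item count, and function names are assumptions, and the original study used letter stimuli (T among rotated Ls) on a CRT.

```python
import random

GRID = 8           # hypothetical 8 x 8 placement grid
N_DISTRACTORS = 11 # hypothetical set size

def random_cells(n, exclude=(), rng=random):
    """Draw n distinct grid cells, optionally avoiding some cells."""
    cells = [(x, y) for x in range(GRID) for y in range(GRID)
             if (x, y) not in exclude]
    return rng.sample(cells, n)

def make_old_display(rng=random):
    # Old condition: target AND distractor locations are fixed,
    # so this whole display is simply reused in every block.
    target, *distractors = random_cells(1 + N_DISTRACTORS, rng=rng)
    return {"target": target, "distractors": distractors}

def regenerate_new_display(target, rng=random):
    # New condition: only the target location repeats across blocks;
    # the distractor configuration is redrawn each time.
    return {"target": target,
            "distractors": random_cells(N_DISTRACTORS, exclude={target}, rng=rng)}
```

Contextual cuing is the RT advantage that emerges for displays built with `make_old_display` (reused verbatim) over those rebuilt each block with `regenerate_new_display`.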
How is visual context represented? Recently, some studies have investigated the nature of the representation of spatial configuration. Jiang and Wagner (2004, Experiment 2) demonstrated that contextual cuing could be observed even when the search display was rescaled from 18° × 12° of visual angle in the training phase to 22.5° × 15° (or to 14.4° × 9.6°) of visual angle in the test phase, suggesting that the visual context is encoded on the basis of the relative positions of the search items rather than of the absolute position of each item in the CRT display. This account is also supported by the results of a study using 3-D search arrays. Chua and Chun (2003) investigated whether contextual cuing was affected by increasing the angle of rotation in depth (0°, 15°, 30°, and 45°) away from the training views. They reported that contextual cuing for the 3-D configuration was observed in the training phase, but that the learning effect decreased and disappeared with increases in the rotation angle. This finding indicates that the representation of spatial configuration is viewpoint dependent, so that contextual cuing is weakened by warping the relative positions of the search items in the 2-D image. …
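The rescaling result can be made concrete with a small numeric sketch: if a configuration is encoded relative to its own centroid and size, then uniformly rescaling the display (as from 18° × 12° to 22.5° × 15°, a factor of 1.25) leaves that relative code unchanged, whereas the absolute screen position of every item changes. The normalization scheme below is an illustrative assumption, not the encoding proposed by Jiang and Wagner.

```python
def normalize(points):
    """Express a configuration in relative terms: translate to the
    centroid and divide by the configuration's extent, so uniformly
    rescaled or shifted displays map to the same representation."""
    n = len(points)
    cx = sum(x for x, _ in points) / n
    cy = sum(y for _, y in points) / n
    centered = [(x - cx, y - cy) for x, y in points]
    extent = max(max(abs(x), abs(y)) for x, y in centered) or 1.0
    return [(round(x / extent, 6), round(y / extent, 6)) for x, y in centered]
```

Under this scheme, `normalize(config)` equals `normalize([(1.25 * x, 1.25 * y) for x, y in config])`: absolute positions differ, relative positions do not, which is the invariance that rescaled displays preserve and rotated-in-depth displays (Chua & Chun, 2003) progressively destroy.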