Canadian Journal of Experimental Psychology

Heterogeneity Effects in Visual Search Predicted from the Group Scanning Model

Abstract

The group scanning model of feature integration theory (Treisman & Gormican, 1988) suggests that subjects search visual displays serially by groups, but process items within each group in parallel. The size of these groups is determined by the discriminability of the targets in the background of distractors. When the target is poorly discriminable, the size of the scanned group will be small, and search will be slow. The model predicts that group size will be smallest when targets of an intermediate value on a perceptual dimension are presented in a heterogeneous background of distractors that have higher and lower values on the same dimension. Experiment 1 demonstrates this effect. Experiment 2 controls for a possible confound of decision complexity in Experiment 1. For simple feature targets, the group scanning model provides a good account of the visual search process.

Visual perception is often conceptualized as a two-stage process, with the first stage being the registration of visual information, and the second stage the analysis of this information to recognize or categorize scene elements. The distinction between these stages has been discussed widely (e.g., Fodor, 1983; Fodor & Pylyshyn, 1981; Hoffman, 1978, 1979; Marr, 1982; Neisser, 1967; Treisman, 1985, 1986; Pylyshyn, 1984), and has become central to the study of visual cognition (e.g., Pinker, 1984; Pylyshyn, 1988, 1989; Ullman, 1984).

Feature integration theory is one such two-stage model (Treisman, 1982, 1985, 1986, 1988, 1991; Treisman & Gelade, 1980; Treisman & Gormican, 1988; Treisman, Sykes, & Gelade, 1977). In Stage 1, visual features are encoded in parallel as local activity on independent, retinotopic feature maps (Treisman, 1982, 1988; Treisman & Gelade, 1980). For example, a "red horizontal line" produces activation on the feature map for the color "red" and on the map for the orientation "horizontal," at the locations corresponding to the line's position in the field. Only the presence of the features is signalled at this stage, not their locations. Locations become available at Stage 2, through processing by attention. Attention operates over a master map of locations that serves as a functional link between the individual feature maps. Attention is a serial process, being shifted from map location to map location. Attending to a master map location brings the corresponding locations on the feature maps into register, so that the independently coded features present at the same scene location can be integrated into object representations (Treisman, 1985, 1986, 1988; Treisman & Gelade, 1980).
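
The following sketch (in Python) is only a toy rendering of this two-stage architecture, with hypothetical names throughout; it is not an implementation proposed by Treisman. It encodes features in parallel on independent maps at Stage 1 and integrates them into an object representation only when attention selects a master-map location at Stage 2.

from dataclasses import dataclass

@dataclass
class Display:
    # Each item is a (location, {dimension: value}) pair, e.g.
    # ((2, 3), {"color": "red", "orientation": "horizontal"}).
    items: list

def build_feature_maps(display):
    """Stage 1: features are encoded in parallel on independent retinotopic
    maps; each map records the locations at which its feature occurs."""
    maps = {}
    for loc, features in display.items:
        for dimension, value in features.items():
            maps.setdefault((dimension, value), set()).add(loc)
    return maps

def bind_at(location, maps):
    """Stage 2: attending to a master-map location brings the feature maps
    into register, so the features coded at that location are integrated
    into a single object representation."""
    return {dim: val for (dim, val), locs in maps.items() if location in locs}

display = Display(items=[((0, 0), {"color": "red", "orientation": "horizontal"}),
                         ((1, 0), {"color": "blue", "orientation": "horizontal"})])
maps = build_feature_maps(display)
print(bind_at((0, 0), maps))  # {'color': 'red', 'orientation': 'horizontal'}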

Feature integration theory predicts that when a target in a visual search task has a unique feature that is not shared with the distractors (e.g., a red 'X' among blue 'X's), the time to detect it will be unaffected by the number of distractors present because a unique feature map is activated (Stage 1 processing). Empirically, this independence is shown when a search latency function is flat (e.g., Treisman & Gelade, 1980), and is referred to as "pop out," because subjects perceive the target effortlessly and automatically. However, pop out will not occur when the target is defined only by being a unique conjunction of features present elsewhere in the display (e.g., a green 'T' in a field of green 'X's and brown 'T's), because no feature map is uniquely activated. In this case, response times will increase with increases in the number of distractors because each display item must be examined in turn by serial attention to combine the features of the conjunction target (Treisman & Gelade, 1980).
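
These predictions can be summarized numerically. The sketch below assumes a serial self-terminating scan in which, on average, about half the items are examined before a present target is found; the intercept and per-item times are arbitrary placeholder values chosen only to show the shape of the two latency functions (flat for pop out, linearly increasing for conjunction search).

def predicted_rt(display_size, per_item_ms, base_ms=400):
    """Mean target-present RT under a serial self-terminating scan:
    on average about half the display is examined before detection."""
    return base_ms + per_item_ms * (display_size + 1) / 2

for n in (4, 8, 16, 32):
    feature_rt = predicted_rt(n, per_item_ms=0)       # pop out: flat function
    conjunction_rt = predicted_rt(n, per_item_ms=50)  # serial scan: linear increase
    print(f"N={n:2d}  feature={feature_rt:.0f} ms  conjunction={conjunction_rt:.0f} ms")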

DISCRIMINABILITY AND VISUAL SEARCH

The original feature integration model has been modified by Treisman and Gormican (1988) to account for findings such as search asymmetries (Treisman & Gormican, 1988; Treisman & Souther, 1985). In this group scanning model, attention processes groups of visual elements. …
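
The idea described in the abstract and above can be illustrated with a small sketch: groups are scanned serially, items within a group are processed in parallel, and group size shrinks as target-distractor discriminability falls. The mapping from discriminability to group size and all timing constants below are arbitrary placeholders, not the quantitative rule proposed by Treisman and Gormican (1988).

import math

def group_size(discriminability, max_group=8):
    """Higher target-distractor discriminability permits larger scanned
    groups (capped); poor discriminability forces near item-by-item search."""
    return max(1, min(max_group, round(discriminability * max_group)))

def scan_time_ms(display_size, discriminability, per_group_ms=60, base_ms=400):
    """Groups are scanned serially and items within each group in parallel,
    so latency grows with the number of groups rather than the number of items."""
    groups = math.ceil(display_size / group_size(discriminability))
    return base_ms + per_group_ms * groups

# High discriminability (e.g., homogeneous distractors) vs. low discriminability
# (e.g., heterogeneous distractors flanking the target value on the dimension).
print(scan_time_ms(24, discriminability=0.9))  # few large groups: fast search
print(scan_time_ms(24, discriminability=0.2))  # many small groups: slow search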
