In visual perception, part segmentation of an object is considered to be guided by image-based properties, such as occurrences of deep concavities in the outer contour. However, object-based properties can also provide information regarding segmentation. In this study, outer contours and interpretations of object configurations were manipulated to examine differences between image-based and object-based segmentation in a visual search task. We found that locating a two-dimensional object configuration with deep concavities in the outer contour depends on the type of outer contour of the surrounding distractors. In addition, locating a three-dimensional object configuration was harder when it was surrounded by object-based-disconnected distractors, as compared with object-based-connected distractors, regardless of image-based connections in these distractors. We conclude that segmentation based on the outer contours of a target facilitates its localization. However, when three-dimensional information is available, segmentation strongly depends on object-based properties, rather than on image-based properties.
In visual perception, segmentation is the process of dividing proximal stimulations into separate objects. Several approaches to segmentation that the visual system may take have been proposed. Among these are segmentation based on concavities in the outer contour (Hoffman & Richards, 1984; Hoffman & Singh, 1997), segmentation based on necks versus limbs (Siddiqi, Tresness, & Kimia, 1996), and segmentation based on the so-called shortcut rule (Singh, Seyranian, & Hoffman, 1999). However, the relative ease with which the visual system seems to generate interpretations of objects leads to the question of whether not only image-based (IB) properties, such as those employed in the approaches above, but also object-based (OB) properties play a role in segmentation. In this study, we investigate the differential role of IB versus OB properties in the segmentation of objects.
Consider the objects presented in Figure 1. In Figure 1A, the locations of segmentation based on, for example, concavities in the outer contour are indicated by the arrows. In this case, the segmentation process results in the perception of one object with two protrusions. Note that any IB approach to segmentation would result in the same segmentation of the object as that presented in Figure 1A. In Figure 1B, the same outer contour is presented, but now inner junctions (i.e., the intersections of line segments that are not part of the outer contour) are also drawn. Again, the locations of segmentation based on the outer contours are indicated, as well as the locations of segmentation based on the inner junctions. Figure 1B is readily perceived as showing an object with one protrusion and a second object in front of it. Note that segmentation based on outer contours does not have to be incorrect when inner junctions are made visible. After all, on the basis of the outer contour in Figure 1A, the likelihood of perceiving a single object with two protrusions is higher than the likelihood of perceiving two objects. That is, for two objects (as represented in Figure 1B) to result in the outer contour presented in Figure 1A requires a highly accidental positioning of the objects. The example is given only to illustrate that the interpretation of stimuli (on the basis of OB properties) can result in one's perceiving separate objects even though the outer contour remains the same.
To investigate differential effects in the processing of visual features such as IB and OB properties, a visual search task can be used. In a visual search task, a target has to be detected among an increasing number of distractors (Treisman & Gelade, 1980). The target can, for example, differ from the distractors on the basis of a single stimulus property. When this target is detected equally quickly among an increasing number of distractors, the slope of search time as a function of display size will be low or, perhaps, close to zero. …
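The slope logic described above can be sketched in a few lines of code. The sketch below uses entirely hypothetical reaction times (the values are illustrative, not data from this study) and computes the least-squares slope of reaction time over display size, the standard measure of search efficiency: a slope near zero suggests parallel ("pop-out") search, whereas a steep slope suggests an inefficient, item-by-item search.

```python
def search_slope(display_sizes, rts):
    """Least-squares slope of reaction time (ms) as a function of display size (items)."""
    n = len(display_sizes)
    mx = sum(display_sizes) / n
    my = sum(rts) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(display_sizes, rts))
    var = sum((x - mx) ** 2 for x in display_sizes)
    return cov / var

# Hypothetical reaction times for two kinds of search (illustrative values only).
sizes = [4, 8, 12, 16]
rt_flat = [452, 455, 458, 460]    # target found equally quickly at every display size
rt_steep = [480, 640, 805, 960]   # search time grows with the number of distractors

print(search_slope(sizes, rt_flat))   # slope close to zero (ms/item)
print(search_slope(sizes, rt_steep))  # steep slope: roughly 40 ms/item
```

In this framing, comparing the slopes obtained for different target–distractor combinations is what allows differential effects of IB and OB properties to be inferred.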