In a mental rotation task, participants must determine whether two stimuli match when one undergoes a rotation in 3-D space relative to the other. The key evidence for mental rotation is the finding of a linear increase in response times as objects are rotated farther apart. This signature increase in response times is also found in recognition of rotated objects, which has led many theorists to postulate mental rotation as a key transformational procedure in object recognition. We compared mental rotation and object recognition in tasks that used the same stimuli and presentation conditions and found that, whereas mental rotation costs increased relatively linearly with rotation, object recognition costs increased only over small rotations. Taken in conjunction with a recent brain imaging study, this dissociation in behavioral performance suggests that object recognition is based on matching of image features rather than on 3-D mental transformations.
Starting from the early 1970s, Shepard and colleagues (Shepard & Cooper, 1982; Shepard & Judd, 1976; Shepard & Metzler, 1971) conducted a series of experiments showing that mental transformations of 2-D shapes and 3-D objects appeared to follow the same laws as physical transformations of real objects. This finding of mental rotation was a cornerstone in the foundations of the emerging field of cognitive science, since it showed that the physical world constrains internal mental representations. The typical mental rotation task is one in which participants are asked to determine whether two stimuli are identical or mirror reflections (i.e., differing in handedness); the difficulty lies in the fact that the stimuli are rotated relative to each other. The hallmark of mental rotation is that response times form a linear function of the orientation difference between the two stimuli.
Whereas mental rotation tasks require a discrimination between mirror images of the same object across viewpoints, studies of object recognition often involve discriminations among different objects across viewpoints. Many object recognition studies have found results similar to those of mental rotation studies, with recognition judgments becoming slower (thus, more difficult) as an object is rotated away from a studied viewpoint (e.g., Hayward & Williams, 2000; Jolicoeur, 1985; Tarr & Pinker, 1989; Tarr, Williams, Hayward, & Gauthier, 1998). Mental rotation has often been invoked to account for these effects (see, e.g., Jolicoeur, 1990; Tarr & Pinker, 1989). According to these theories, objects are represented at specific viewpoints; when encountered from a novel view, the perceived stimulus is mentally rotated until it matches a stored view, at which point the recognition decision can be made.
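The view-matching account just described can be made concrete with a minimal simulation. The sketch below is illustrative only, not the authors' model: it assumes an object is stored at specific viewpoints and that a novel view is mentally rotated toward the nearest stored view in fixed angular increments, with each increment adding a constant time cost. All function names and parameter values (step size, cost per step, base response time) are hypothetical assumptions chosen to show why such a mechanism predicts the linear response-time function.

```python
# Illustrative sketch of a view-matching account of recognition:
# a novel view is "rotated" in fixed angular steps toward the nearest
# stored view, and each step adds a constant time cost. All names and
# parameter values here are assumptions for illustration.

def angular_distance(a: float, b: float) -> float:
    """Smallest absolute difference between two angles, in degrees."""
    d = abs(a - b) % 360.0
    return min(d, 360.0 - d)

def simulated_rt(view_deg: float,
                 stored_views=(0.0,),      # studied viewpoint(s)
                 step_deg: float = 5.0,    # size of one mental-rotation step
                 ms_per_step: float = 20.0,  # time cost per step
                 base_ms: float = 500.0) -> float:
    """Predicted response time: a base time plus a fixed cost for each
    rotation step needed to reach the nearest stored view."""
    nearest = min(angular_distance(view_deg, v) for v in stored_views)
    steps = nearest / step_deg
    return base_ms + steps * ms_per_step

# Predicted RT grows linearly with rotation away from the studied (0 deg) view:
for angle in (0, 30, 60, 90, 120):
    print(angle, simulated_rt(angle))
```

Under these assumptions the cost of recognizing a view rotated 90° from study is exactly three times the cost at 30°, reproducing the linear rotation function that the studies above take as the signature of mental rotation; the behavioral dissociations reviewed next are evidence against this mechanism for recognition.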
In recent years, however, a number of studies have contested the view that rotated object recognition relies on mental rotation, both on theoretical grounds (e.g., Corballis, 1988) and on empirical grounds. For example, Lawson and Jolicoeur (2003) showed that viewpoint costs for objects rotated in the picture plane contained nonlinearities that cannot be accommodated by mental rotation. Willems and Wagemans (2001) tested recognition of misoriented objects rotated around a number of different axes; they found no effect of axis on performance, even though an explicit mental rotation task showed large differences. Jolicoeur, Corballis, and Lawson (1998) rotated objects in the picture plane and had participants either identify the object (object recognition) or say whether it would face left or right in an upright orientation (mental rotation). They found that perceived or actual rotation of the object affected mental rotation but not object recognition, again casting doubt on a common mechanism for both tasks. De Caro and Reeves (2000) examined naming and orientation judgments for misoriented objects that were backward masked; orientation judgments showed roughly linear viewpoint costs, but naming judgments showed costs that asymptoted past 60° rotations. …