Structural Properties of Spatial Representations in Blind People: Scanning Images Constructed from Haptic Exploration or from Locomotion in a 3-D Audio Virtual Environment

When people scan mental images, their response times increase linearly with increases in the distance to be scanned, which is generally taken as reflecting the fact that their internal representations incorporate the metric properties of the corresponding objects. In view of this finding, we investigated the structural properties of spatial mental images created from nonvisual sources in three groups (blindfolded sighted, late blind, and congenitally blind). In Experiment 1, blindfolded sighted and late blind participants created metrically accurate spatial representations of a small-scale spatial configuration under both verbal and haptic learning conditions. In Experiment 2, late and congenitally blind participants generated accurate spatial mental images after both verbal and locomotor learning of a full-scale navigable space (created by an immersive audio virtual reality system), whereas blindfolded sighted participants were selectively impaired in their ability to generate precise spatial representations from locomotor experience. These results attest that in the context of a permanent lack of sight, encoding spatial information on the basis of the most reliable currently functional system (the sensorimotor system) is crucial for building a metrically accurate representation of a spatial environment. The results also highlight the potential of spatialized audio-rendering technology for exploring the spatial representations of visually impaired participants.

The research reported here was intended to account for the properties of mental representations of spatial information when these representations are constructed from nonvisual sensory modalities. We therefore compared representations constructed by sighted and blind people, thereby connecting two fields of research: mental imagery and blindness. The connection between these two fields has been amply documented (e.g., Cornoldi & Vecchi, 2000; De Beni & Cornoldi, 1988; Ernest, 1987; Kaski, 2002; Marmor & Zaback, 1976; Zimler & Keenan, 1983). Here, we extended this effort by using a method that seemed likely to provide a useful new perspective on this domain: the image-scanning paradigm.

Image scanning is conceived of as the systematic shifting of attention across visualized patterns, for instance, in the context of a task in which a person tries to check for the presence of an object in a scene or of a specific detail within an object (see Denis & Kosslyn, 1999; Kosslyn, Thompson, & Ganis, 2006). The main finding from image-scanning studies has been that when people mentally scan the image of an object or a scene, their scanning time increases linearly as the scanned distance increases (e.g., Beech, 1979; Borst & Kosslyn, 2008; Borst, Kosslyn, & Denis, 2006; Dror, Kosslyn, & Waag, 1993; Kosslyn, Ball, & Reiser, 1978; Pinker, Choate, & Finke, 1984). This correlational pattern is generally taken to reflect a structural isomorphism between a visuospatial representation and the spatial layout from which it was constructed. Thus, spatial mental images preserve the relative metric properties of the layout.1
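The linear relationship described above is typically quantified by regressing response times on the distances to be scanned and reporting the slope (the scanning rate) and the correlation coefficient. The sketch below illustrates this analysis with hypothetical data; the distances and response times are invented for illustration and are not taken from the article.

```python
import numpy as np

# Hypothetical scanning data (not from the article): distances between
# landmark pairs on an imagined layout, and mean response times in ms.
distances = np.array([2.0, 4.0, 6.0, 8.0, 10.0, 12.0])
rts = np.array([1100.0, 1180.0, 1270.0, 1340.0, 1430.0, 1500.0])

# Fit RT = slope * distance + intercept; the slope (ms per unit distance)
# estimates the mental scanning rate, while the intercept absorbs
# distance-independent components such as encoding and response execution.
slope, intercept = np.polyfit(distances, rts, 1)

# The Pearson correlation indexes how well a straight line describes the
# data; image-scanning studies commonly report very high r values.
r = np.corrcoef(distances, rts)[0, 1]

print(f"slope = {slope:.1f} ms per unit distance")
print(f"intercept = {intercept:.0f} ms")
print(f"r = {r:.3f}")
```

A steeper slope would indicate slower mental scanning; comparing slopes across learning conditions or participant groups is the standard way such studies test whether different representations preserve metric structure equally well.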

A critical issue here is to determine the extent to which the properties of images, in particular their capacity to preserve the metric properties of the objects that they evoke, depend on their visuospatial substrate. Images constructed from visual experience have been shown to possess an internal structure that displays the metric relationships between the parts of the corresponding objects or scenes. However, does the analogical nature of such representations essentially depend on their visuospatial nature? This is a pertinent question because most of the experiments conducted on this topic so far have used visual information as the input for the creation of visual images. We therefore wanted to find out whether the acquisition of spatial information mediated by nonvisual modalities, in particular by visually impaired or blind people, results in internal representations that have the same properties. …