Attention, Perception, & Psychophysics

Cross-Modal Associations between Vision, Touch, and Audition Influence Visual Search through Top-Down Attention, Not Bottom-Up Capture

Article excerpt

Abstract

Recently, Guzman-Martinez, Ortega, Grabowecky, Mossbridge, and Suzuki (Current Biology, 22(5), 383-388, 2012) reported that observers could systematically match auditory amplitude modulations and tactile amplitude modulations to visual spatial frequencies, proposing that these cross-modal matches produced automatic attentional effects. Using a series of visual search tasks, we investigated whether informative auditory, tactile, or bimodal cues can guide attention toward a visual Gabor of matched spatial frequency (among others with different spatial frequencies). These cues improved visual search for some but not all frequencies. Auditory cues improved search only for the lowest and highest spatial frequencies, whereas tactile cues were more effective and frequency-specific, although less effective than visual cues. Importantly, although tactile cues could produce efficient search when informative, they had no effect when uninformative. This suggests that cross-modal frequency matching occurs at a cognitive rather than sensory level and, therefore, influences visual search through voluntary, goal-directed behavior rather than automatic attentional capture.

Keywords: Visual search · Attention · Cross-modal · Visual selection

Introduction

Visual search underpins many everyday tasks, such as looking for a book on a shelf or a friend in a crowd. Search sensitivity and specificity can also be trained and optimized for tasks such as baggage screening and radiographic diagnosis (McCarley, Kramer, Wickens, Vidoni, & Boot, 2004; Nodine & Kundel, 1987; Wang, Lin, & Drury, 1997). As such, visual search is an ecologically relevant paradigm that gives insight into visual processing under the influence of strategic goals. In the experimental visual search paradigm, manipulating the number of elements in the display provides a measure of search efficiency; search times generally increase by 150-300 ms per item when eye movements are required (Wolfe & Horowitz, 2004). However, if a feature captures attention, such increases may be nullified, since the target is rapidly separated from the surrounding elements without serial, self-terminating scanning.
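The set-size logic described above amounts to a simple linear model, RT = intercept + slope × set size, in which the slope (ms/item) indexes search efficiency. The following minimal Python sketch (not from the article; the set sizes and response times are hypothetical values chosen only for illustration) shows how such a slope is estimated:

    import numpy as np

    # Hypothetical data for illustration only: mean response times (ms)
    # at four display set sizes in an inefficient search task.
    set_sizes = np.array([4, 8, 12, 16])
    mean_rts = np.array([820, 1460, 2080, 2750])

    # Fit RT = slope * set_size + intercept. The slope (ms/item) indexes
    # search efficiency: near-zero slopes indicate pop-out (efficient)
    # search, whereas large slopes indicate serial, inefficient scanning.
    slope, intercept = np.polyfit(set_sizes, mean_rts, deg=1)
    print(f"search slope: {slope:.1f} ms/item, intercept: {intercept:.1f} ms")

With these made-up values the fitted slope is about 160 ms/item, within the range cited above for search requiring eye movements; a pop-out target would instead yield a slope near 0 ms/item.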

There are a number of attributes shown to allow efficient visual search, including color, motion, orientation, and size (for a discussion of these and other attributes, see the review by Wolfe & Horowitz, 2004). Targets that are unique along one of these dimensions are termed singletons and create a pop-out effect. The clearest example is a red target displayed among green nontarget items. Analogous to this effect in vision, search within the auditory domain is also influenced by features able to capture attention. One such example is the "cocktail-party" effect (Cherry, 1953; see Bronkhorst, 2000, for a review): words of importance, such as one's name, have been shown to pop out of an unattended conversation. Similarly, in touch, a movable item has been shown to create a pop-out effect in a display of static items (van Polanen, Bergmann Tiest, & Kappers, 2012), as has a rough surface in a display of fine-textured items (Plaisier, Bergmann Tiest, & Kappers, 2008). Auditory and tactile cues have also been shown to influence visual search. This has been shown when the auditory or tactile cue was spatially informative (Bolia, D'Angelo, & McKinley, 1999; Jones, Gray, Spence, & Tan, 2008; Rudmann & Strybel, 1999), when the auditory or tactile cue was temporally synchronous with a change in the color of the target (Ngo & Spence, 2010; Van der Burg, Cass, Olivers, Theeuwes, & Alais, 2010; Van der Burg, Olivers, Bronkhorst, & Theeuwes, 2008b; Van der Burg, Olivers, Bronkhorst, & Theeuwes, 2009; Zannoli, Cass, Mamassian, & Alais, 2012), and when the auditory cue was semantically congruent with the target object (Iordanescu, Grabowecky, Franconeri, Theeuwes, & Suzuki, 2010; Iordanescu, Grabowecky, & Suzuki, 2011; Iordanescu, Guzman-Martinez, Grabowecky, & Suzuki, 2008). …
