Academic journal article Canadian Journal of Experimental Psychology

Examining the Interactivity of Lexical Orthographic and Phonological Processing


Abstract

The number and type of connections involving different levels of orthographic and phonological representations differentiate between several models of spoken and visual word recognition. At the sublexical level of processing, Borowsky, Owen, and Fonos (1999) demonstrated evidence for direct processing connections from grapheme representations to phoneme representations (i.e., a sensitivity effect) over and above any bias effects, but not in the reverse direction. Neural network models of visual word recognition implement an orthography-to-phonology processing route that involves the same connections for processing sublexical and lexical information, and thus a similar pattern of cross-modal effects for lexical stimuli is expected by models that implement this single type of connection (i.e., orthographic lexical processing should directly affect phonological lexical processing, but not in the reverse direction). Furthermore, several models of spoken word perception predict that there should be no direct connections between orthographic representations and phonological representations, regardless of whether the connections are sublexical or lexical. The present experiments examined these predictions by measuring the influence of a cross-modal word context on word target discrimination. The results provide constraints on the types of connections that can exist between orthographic lexical representations and phonological lexical representations.

The identification of spoken and written words involves integration of the target stimulus and relevant contextual sources of information from the environment. It has been demonstrated that listeners integrate both auditory and visual sources of information during auditory perception. The classic "McGurk effect" (e.g., MacDonald & McGurk, 1978; McGurk & MacDonald, 1976) illustrates that when listeners are presented with an auditory stimulus (e.g., /ba-ba/) that does not match visually presented vocal gestures (e.g., mouth movements for /ga-ga/), the auditory and visual information are integrated during auditory perception (e.g., the listener hears "da-da"). People are often presented with concurrent spoken and printed stimuli that are not necessarily congruent. For example, students are often required to attend to text printed on overheads while also attending to a lecturer's spoken words, and parents often read stories to their children while their children follow the printed words. Thus, how concurrent visual and auditory stimuli are integrated has been an important issue for models of language processing (e.g., Borowsky, Owen, & Fonos, 1999; Fowler & Deckle, 1991; Frost & Katz, 1989; MacDonald & McGurk, 1978; Massaro, Cohen, & Thompson, 1988; McGurk & MacDonald, 1976).

Models of visual word recognition differ in the number and type of nonsemantic connections between orthographic and phonological representations. These connections serve as a processing route by which the orthographic and phonological subsystems communicate with one another. The communication from one subsystem to another may be unidirectional or bidirectional. For example, the dual-route cascaded model has unidirectional connections that map graphemes onto phonemes, and bidirectional connections that map orthographic lexical representations onto phonological lexical representations (Coltheart, Rastle, Perry, Langdon, & Ziegler, 2001). In contrast, the neural network models of Seidenberg and colleagues (Harm & Seidenberg, 1999; Plaut, McClelland, Seidenberg, & Patterson, 1996; Seidenberg & McClelland, 1989) implement a single nonsemantic route with one set of fully recurrent connections between orthographic and phonological units to handle both sublexical and lexical levels of translation from print to sound, and thus they are often referred to as "single-route" models. Neural network models typically group together the orthographic levels of representation (e. …
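The architectural contrast described above, unidirectional grapheme-to-phoneme connections versus fully recurrent orthography-phonology connections, can be sketched as a toy simulation. This is not the actual DRC or Seidenberg-McClelland implementation; the unit counts, weights, and settling procedure are arbitrary assumptions chosen only to illustrate how direction of connectivity determines whether phonological activity can feed back onto orthographic units:

```python
import math

# Toy sketch (hypothetical weights and unit counts, not any published model):
# contrasts a unidirectional grapheme -> phoneme route with fully recurrent
# orthography <-> phonology connections.

def squash(x):
    # Logistic activation, bounding unit activity in (0, 1)
    return 1.0 / (1.0 + math.exp(-x))

def matvec(W, v):
    # Weighted input to each receiving unit
    return [sum(w * x for w, x in zip(row, v)) for row in W]

# Three orthographic and three phonological units; hand-picked weights.
W_o2p = [[0.8, -0.2, 0.1],
         [0.0,  0.9, -0.3],
         [0.2,  0.1,  0.7]]   # orthography -> phonology
W_p2o = [[0.6,  0.0, 0.1],
         [-0.1, 0.5, 0.2],
         [0.3, -0.2, 0.8]]    # phonology -> orthography (recurrent case only)

orth_input = [1.0, 0.0, 1.0]  # clamped orthographic pattern (the printed word)

# Unidirectional route: activation flows only from orthography to phonology,
# so phonological activity can never influence the orthographic units.
phon_uni = [squash(x) for x in matvec(W_o2p, orth_input)]

# Fully recurrent route: orthographic and phonological units update each
# other over repeated settling iterations, so influence runs both ways.
o = list(orth_input)
p = [0.0, 0.0, 0.0]
for _ in range(10):
    p = [squash(x) for x in matvec(W_o2p, o)]
    o = [squash(x + ext) for x, ext in zip(matvec(W_p2o, p), orth_input)]

print(phon_uni)  # phoneme activations, one feedforward pass
print(p)         # phoneme activations after bidirectional settling
```

In the unidirectional case the orthographic pattern is fixed, while in the recurrent case the settled orthographic activities reflect phonological feedback; this is the structural difference that leads the two classes of models to predict different patterns of cross-modal effects.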
