Memory & Cognition

Dynamics of Activation of Semantically Similar Concepts during Spoken Word Recognition

Semantic similarity effects provide critical insight into the organization of semantic knowledge and the nature of semantic processing. In the present study, we examined the dynamics of semantic similarity effects by using the visual world eyetracking paradigm. Four objects were shown on a computer monitor, and participants were instructed to click on a named object, during which time their gaze position was recorded. The likelihood of fixating competitor objects was predicted by the degree of semantic similarity to the target concept. We found reliable, graded competition that depended on degree of target-competitor similarity, even for distantly related items for which priming has not been found in previous priming studies. Time course measures revealed a consistently earlier fixation peak for near semantic neighbors relative to targets. Computational investigations with an attractor dynamical model, a spreading activation model, and a decision model revealed that a combination of excitatory and inhibitory mechanisms is required to obtain such peak timing, providing new constraints on models of semantic processing.

In typical speech contexts, listeners must correctly identify and interpret from 100 to 150 words per minute.1 This is done seemingly without effort, despite the noise and ambiguity inherent in the speech signal and the complexity of the semantic knowledge that must be accessed. Because the process is so fast and the semantic structure is so complex, the dynamics of spoken word recognition are challenging to study. The many different theories regarding the structure of semantic knowledge can be grouped according to a few critical distinguishing properties. One distinction is the granularity of representations, with approaches varying from those in which the concept is the lowest level of analysis or representation to those in which subconceptual elements or features are the lowest level. In network models of knowledge, this is the distinction between localist and distributed representations. Under the localist view, each concept is a unique node in a network (e.g., Collins & Loftus, 1975; Steyvers & Tenenbaum, 2005), and the connections among the nodes in the network explicitly determine their effects on one another. Under the distributed view, concepts are represented by patterns of activation over the same set of units (e.g., Landauer & Dumais, 1997; Lund & Burgess, 1996; McRae, Cree, Seidenberg, & McNorgan, 2005; Vigliocco, Vinson, Lewis, & Garrett, 2004), and effects of concepts on one another are an emergent property of processing dynamics and the patterns of overlap.
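To make the localist/distributed contrast concrete, the following minimal Python sketch (not from the original article; the concepts, links, and feature vectors are hypothetical) shows how relatedness is stored as an explicit connection weight under the localist scheme but emerges from pattern overlap under the distributed scheme:

import numpy as np

# Localist scheme: each concept is a node; relatedness is an explicit link weight.
# Hypothetical toy network (weights invented for illustration).
links = {
    ("dog", "cat"): 0.8,
    ("dog", "bone"): 0.6,
    ("cat", "whale"): 0.1,
}

def localist_relatedness(a, b):
    """Relatedness is read directly off the stored connection, if any."""
    return links.get((a, b), links.get((b, a), 0.0))

# Distributed scheme: each concept is a pattern over the same feature units.
# Hypothetical binary features: has_fur, barks, meows, lives_in_water, is_animal.
patterns = {
    "dog":   np.array([1, 1, 0, 0, 1], dtype=float),
    "cat":   np.array([1, 0, 1, 0, 1], dtype=float),
    "whale": np.array([0, 0, 0, 1, 1], dtype=float),
}

def distributed_relatedness(a, b):
    """Relatedness emerges from overlap between activation patterns (cosine similarity)."""
    va, vb = patterns[a], patterns[b]
    return float(va @ vb / (np.linalg.norm(va) * np.linalg.norm(vb)))

if __name__ == "__main__":
    print("localist    dog-cat  :", localist_relatedness("dog", "cat"))
    print("distributed dog-cat  :", round(distributed_relatedness("dog", "cat"), 2))
    print("distributed dog-whale:", round(distributed_relatedness("dog", "whale"), 2))

In the localist case, any effect of one concept on another must be wired in by hand (or learned as a link); in the distributed case, graded similarity falls out of the shared feature units without any explicit pairwise connections.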

A second critical distinguishing property is the proposed structure of semantic relations. Conceptual relatedness has been hypothesized to depend on membership in the same category (e.g., Chiarello, Burgess, Richards, & Pollock, 1990; Hines, Czerwinski, Sawyer, & Dwyer, 1986), association by co-occurrence in text or speech (e.g., Nelson, McEvoy, & Schreiber, 2004), or shared perceptual, action, or other features (e.g., Barsalou, 1999; McRae et al., 2005; Vigliocco et al., 2004). Of particular interest are cases in which these approaches make different behavioral predictions. There is general agreement that, as a word is processed, words with related meanings are partially activated, but the different approaches make different claims about which meanings are related. Specifically, under a strict category hierarchy view, only category coordinates should be activated; under an association-based view, only associates should be activated; and under a feature-based view, co-activation is determined by feature overlap (although activation can also result from semantic association).
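The diverging predictions of the three views can be illustrated with a short Python sketch, assuming a hypothetical target and hand-made category labels, associates, and feature sets (none of these are the article's stimuli):

# Toy illustration of which words each view predicts will be co-activated
# when the target "piano" is processed.
category = {"piano": "instrument", "violin": "instrument", "desk": "furniture"}
associates = {"piano": {"music", "keys"}}
features = {
    "piano":  {"has_keys", "has_strings", "makes_sound", "made_of_wood"},
    "violin": {"has_strings", "makes_sound", "made_of_wood"},
    "desk":   {"made_of_wood", "has_drawers", "is_large"},
}

def predicted_competitors(target):
    same_category = [w for w in category if w != target and category[w] == category[target]]
    associated = sorted(associates.get(target, set()))
    overlap = {w: len(features[target] & f) for w, f in features.items() if w != target}
    return same_category, associated, overlap

coord, assoc, overlap = predicted_competitors("piano")
print("category view predicts   :", coord)    # only category coordinates, e.g. ['violin']
print("association view predicts:", assoc)    # only associates, e.g. ['keys', 'music']
print("feature view predicts    :", overlap)  # graded co-activation by shared features

The feature-based view is the only one of the three that naturally yields graded activation over all items, which is the property the eyetracking competition measures described above are designed to test.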

This issue has been addressed in a number of studies using semantic priming, but with mixed results. Shelton and Martin (1992) found priming for associated word pairs but not for semantically related word pairs that were not associated, suggesting that associations, not feature-based semantic relatedness, form the basis of semantic structure. …
