Linguistic processing, especially syntactic processing, is often considered a hallmark of human cognition; thus, the domain specificity or domain generality of syntactic processing has attracted considerable debate. The present experiments address this issue by simultaneously manipulating syntactic processing demands in language and music. Participants performed self-paced reading of garden path sentences, in which structurally unexpected words cause temporary syntactic processing difficulty. A musical chord accompanied each sentence segment, with the resulting sequence forming a coherent chord progression. When structurally unexpected words were paired with harmonically unexpected chords, participants showed substantially enhanced garden path effects. No such interaction was observed when the critical words violated semantic expectancy or when the critical chords violated timbral expectancy. These results support a prediction of the shared syntactic integration resource hypothesis (Patel, 2003), which suggests that music and language draw on a common pool of limited processing resources for integrating incoming elements into syntactic structures. Notations of the stimuli from this study may be downloaded from pbr.psychonomic-journals.org/content/supplemental.
The extent to which syntactic processing of language relies on special-purpose cognitive modules is a matter of controversy. Some theories claim that syntactic processing relies on domain-specific processes (e.g., Caplan & Waters, 1999), whereas others implicate cognitive mechanisms not unique to language (e.g., Lewis, Vasishth, & Van Dyke, 2006). One interesting way to approach this debate is to compare syntactic processing in language and music. Like language, music has a rich syntactic structure in which discrete elements are hierarchically organized into rule-governed sequences (Patel, 2008). As is the case with language, the extent to which the processing of this musical syntax relies on specialized neural mechanisms is debated. Dissociations between disorders of language processing and of music processing (aphasia and amusia) suggest that syntactic processing in the two domains relies on distinct neural mechanisms (Peretz & Coltheart, 2003). In contrast, neuroimaging studies reveal overlapping neural correlates of musical and linguistic syntactic processing (e.g., Maess, Koelsch, Gunter, & Friederici, 2001; Patel, Gibson, Ratner, Besson, & Holcomb, 1998).
A possible reconciliation of these findings distinguishes between syntactic representations and the processes that act on those representations. Although the representations involved in linguistic and musical syntax are probably quite different, both types of representation must be integrated into hierarchical structures as sequences unfold. The shared syntactic integration resource hypothesis (SSIRH) claims that music and language rely on shared, limited processing resources that activate separable syntactic representations (Patel, 2003). The SSIRH thereby accounts for the discrepant findings from neuropsychology and neuroimaging by assuming that dissociations between aphasia and amusia result from damage to domain-specific representations, whereas the overlapping activations found in neuroimaging studies reflect shared neural resources involved in integration processes.
A key prediction of the SSIRH is that syntactic integration in language should be more difficult when these limited integration resources are taxed by the concurrent processing of musical syntax (and vice versa). In contrast, if separate processes underlie linguistic and musical syntax, syntactic integration in language and music should not interact. Koelsch and colleagues (Koelsch, Gunter, Wittfoth, & Sammler, 2005; Steinbeis & Koelsch, 2008) provided electrophysiological evidence supporting the SSIRH by showing that the left anterior negativity component elicited by syntactic violations in language was reduced when paired with a simultaneous violation of musical syntax. …