Statistical Learning of Adjacent and Nonadjacent Dependencies among Nonlinguistic Sounds

Article excerpt

Previous work has demonstrated that adults are capable of learning patterned relationships among adjacent syllables or tones in continuous sequences but not among nonadjacent syllables. However, adults are capable of learning patterned relationships among nonadjacent elements (segments or tones) if those elements are perceptually similar. The present study significantly broadens the scope of this previous work by demonstrating that adults are capable of encoding the same types of structure among unfamiliar nonlinguistic and nonmusical elements but only after much more extensive exposure. We presented participants with continuous streams of nonlinguistic noises and tested their ability to recognize patterned relationships. Participants learned the patterns among noises within adjacent groups but not within nonadjacent groups unless a perceptual similarity cue was added. This result provides evidence both that statistical learning mechanisms empower adults to extract structure from nonlinguistic and nonmusical elements and that perceptual similarity eases constraints on nonadjacent pattern learning. Supplemental materials for this article can be downloaded from

Statistical learning studies have demonstrated that adults, young children, and infants are capable of rapidly learning consistent relationships among temporally adjacent speech sounds or musical tones and of grouping these elements into larger coherent units, such as words or melodies (Aslin, Saffran, & Newport, 1998; Perruchet & Pacton, 2006; Saffran, Aslin, & Newport, 1996; Saffran, Johnson, Aslin, & Newport, 1999; Saffran, Newport, & Aslin, 1996; Saffran, Newport, Aslin, Tunick, & Barrueco, 1997). Similarly, adults and infants are capable of grouping temporally adjacent patterned visual elements into coherent units (Fiser & Aslin, 2002; Kirkham, Slemmer, & Johnson, 2002).
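The statistic usually taken to underlie this adjacent grouping is the transitional probability between neighboring elements, P(B|A) = freq(AB) / freq(A): transitions inside a coherent unit are highly predictive, whereas transitions across unit boundaries are not. A minimal sketch of that computation (the stream and two-element "words" here are invented for illustration, not the stimuli of the cited studies):

```python
from collections import Counter

def adjacent_tps(stream):
    """Transitional probability P(b|a) = freq(ab) / freq(a) for each adjacent pair."""
    pair_counts = Counter(zip(stream, stream[1:]))
    first_counts = Counter(stream[:-1])
    return {(a, b): n / first_counts[a] for (a, b), n in pair_counts.items()}

# Hypothetical continuous stream concatenated from two "words", AB and CD.
stream = list("ABCDABABCDCDAB")
tps = adjacent_tps(stream)
print(tps[("A", "B")])  # within-word transition: 1.0
print(tps[("B", "C")])  # cross-boundary transition: lower than 1.0
```

A learner tracking these probabilities can posit unit boundaries wherever the transitional probability dips, which is the standard account of how adjacent elements get grouped into words or melodies.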

In contrast, the ability to learn dependencies among nonadjacent elements is more selective. Natural languages exhibit only certain limited nonadjacent dependencies among sounds and word classes (Chomsky, 1957). In artificial language experiments, only certain types of nonadjacent patterns are readily learned (Cleeremans & McClelland, 1991; Gómez, 2002; Newport & Aslin, 2004; Onnis, Monaghan, Richmond, & Chater, 2005), and such patterns are particularly difficult to learn when the materials are complex or are presented in lengthy or continuous streams. Newport and Aslin (2004) showed that statistical learning of patterns between nonadjacent syllables is difficult but that similar relationships between nonadjacent segments (consonants or vowels), which are common in natural languages, can be learned quite easily. They suggested that, although nonadjacent relationships are more difficult to acquire than adjacent ones, this difficulty could be ameliorated when the nonadjacent elements are perceptually similar to one another (e.g., all consonants) and distinct from the intervening elements (e.g., vowels). Creel, Newport, and Aslin (2004) showed that patterns among nonadjacent tones could be learned if the nonadjacent elements are of a similar pitch range or timbre.
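One way to think about the perceptual-similarity account is that similarity lets the learner project the stream onto a single tier (e.g., consonants only), so that formerly nonadjacent dependencies become adjacent within that tier and the same transitional-probability computation applies. A hypothetical sketch of this tier restriction (the tier assignment and CVCV stream are invented for illustration):

```python
from collections import Counter

def tier_tps(stream, tier):
    """Transitional probabilities computed only over elements of one perceptual
    tier, skipping intervening elements: nonadjacent dependencies in the full
    stream become adjacent within the tier."""
    projected = [x for x in stream if x in tier]
    pair_counts = Counter(zip(projected, projected[1:]))
    first_counts = Counter(projected[:-1])
    return {(a, b): n / first_counts[a] for (a, b), n in pair_counts.items()}

# Hypothetical CVCV stream: consonants p, t, k carry the pattern p -> t -> k,
# while vowels a, i, u intervene between them.
stream = list("patikupituka" * 3)
cons_tps = tier_tps(stream, tier={"p", "t", "k"})
print(cons_tps[("p", "t")])  # deterministic on the consonant tier: 1.0
```

On the full, unprojected stream each consonant-to-consonant dependency spans an intervening vowel, which is the configuration Newport and Aslin found hard to learn without such a similarity cue.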

The present experiments significantly broaden these previous results by examining the same questions for patterns composed of nonlinguistic and nonmusical elements. We used nonlinguistic noises that have no names and that do not fall along a single dimension (e.g., pitch for tones). We asked whether such unfamiliar noises show the same signature properties of statistical learning that have been demonstrated for familiar speech materials: in particular, whether adults readily learn adjacent groupings, and whether nonadjacent groupings are learned more selectively, on the basis of whether the related elements are perceptually similar.

The materials and procedures were analogous to those used in previous studies of speech and tonal melodies. …