The hypothesis that melodies are recognized at moments when they exhibit a distinctive musical pattern was tested. In a melody recognition experiment, point-of-recognition (POR) data were gathered from 32 listeners (16 musicians and 16 nonmusicians) judging 120 melodies. A series of models of melody recognition was developed through stepwise multiple regression of two classes of information, relating to melodic familiarity and melodic distinctiveness. Melodic distinctiveness measures were assembled through statistical analyses of over 15,000 Western themes and melodies. A significant model, explaining 85% of the variance, entered measures primarily of timing distinctiveness and pitch distinctiveness, but excluded familiarity, as predictors of POR. Differences between the nonmusician and musician models suggest a processing shift from momentary to accumulated information with increased exposure to music. Supplemental materials for this article may be downloaded from http://mc.psychonomic-journals.org/content/supplemental.
A popular 1950s radio show called Name That Tune allowed participants to wager on how few notes they would need to identify a well-known tune. Experienced listeners can often recognize a melody within just a few notes (Dalla Bella, Peretz, & Aronoff, 2003; Schellenberg, Iversen, & McKinnon, 1999; Schulkind, Posner, & Rubin, 2003). The present study examines how this is accomplished. Specifically, it investigates the factors that contribute to the time course of recognizing or identifying a melody.1
Cohort theory posits that word recognition rests on the distinctiveness of a word's particular phoneme sequence within the context of some lexicon. When applied to music, an initial cohort of melodies would be activated on the basis of the first notes of a melodic sequence. Thereafter, members of the initial cohort that fail to match the increasing information provided by the unfolding melodic sequence are dropped until the correct melody is isolated. A consequence of cohort theory is that a melody's point of recognition (POR) should correlate with an increase in the information, or distinctiveness, of the melodic sequence, because a distinctive melodic event serves to eliminate irrelevant melodies from the cohort, leading to isolation and recognition. Some melodies are more distinctive or unusual than others. For example, many melodies begin with an ascending perfect fourth, whereas few begin with an ascending tritone. Hence, the first two notes of "Maria" from Leonard Bernstein's West Side Story are far more distinctive than the initial notes of "The Farmer in the Dell." Few melodies begin with the same rhythm as "Happy Birthday," whereas many, including "Frère Jacques," begin with a series of isochronous durations.
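The cohort-elimination process described above can be illustrated with a minimal sketch. The melody names and interval sequences below are hypothetical toy examples (not the study's stimuli or its regression model); melodies are reduced to tuples of semitone intervals, and the POR is taken to be the number of intervals needed before the unfolding sequence isolates a single cohort member.

```python
def point_of_recognition(name, lexicon):
    """Return the number of intervals after which `name` is uniquely
    isolated within `lexicon` (cohort size 1), or None if it never is.

    `lexicon` maps melody names to tuples of semitone intervals;
    this is a toy formalization of cohort elimination, not the
    authors' model."""
    target = lexicon[name]
    cohort = set(lexicon)
    for i in range(1, len(target) + 1):
        prefix = target[:i]
        # Drop cohort members whose opening intervals no longer match.
        cohort = {m for m in cohort if lexicon[m][:i] == prefix}
        if cohort == {name}:
            return i
    return None

# Hypothetical mini-lexicon (interval values are illustrative only).
lexicon = {
    "Maria (opening)":    (6, 1),       # ascending tritone: rare opening
    "Farmer in the Dell": (5, 0, 0),    # ascending perfect fourth: common
    "Hypothetical tune A": (5, 2, 2),
    "Hypothetical tune B": (5, 2, -1),
}
```

On this toy lexicon, the tritone opening of "Maria" isolates it after a single interval, whereas "The Farmer in the Dell" shares its opening fourth with two other tunes and needs a second interval, mirroring the claim that distinctive events yield earlier recognition.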
In addition to their role at retrieval, distinctive events may also be important in melody recognition because of their salience at encoding (see Reder, Paynter, Diana, Ngiam, & Dickison, 2008); McAuley, Stevens, and Humphreys (2004) speculated that melodies that are not distinctive (or "catchy") are not well attended to. Studies of melody identification (Hébert & Peretz, 1997; White, 1960), recall, and expectancy (Carlsen, 1981) have posited a contribution of musical distinctiveness without directly testing it. Schulkind et al. (2003) examined the musical features that facilitate melody identification, asking what types of information (e.g., phrase boundaries, melodic intervals, musical ornaments) contribute to melody recognition. In their study, 28 participants, who were not selected for musical training, identified 34 songs presented note by note. The relationship of recognition to serial position (i.e., note number) exhibited an inverted-U shape, leading the authors to conclude that melodies generally were identified after the presentation of a moderate amount of information (namely, 5-7 notes), enough for unique identification. …