Our ability to process words during a lexical decision task depends on a number of lexical features. These features include, but are not limited to, word frequency (i.e., the number of times a word appears in written text), orthographic neighborhood count (i.e., the number of words that can be created by changing a single letter of the target), letter count, and number of syllables (Andrews, 1992; Sears et al., 2008). For example, high-frequency words are responded to more quickly and accurately than low-frequency words (Rubenstein, Garfield, & Millikan, 1970; Scarborough, Cortese, & Scarborough, 1977; Stone & Van Orden, 1993). When designing a lexical decision experiment, researchers often vary specific lexical features while matching others. For example, to investigate the frequency effect, one would create two groups of words that differed on frequency but were matched on other lexical features, such as average orthographic neighborhood count, letter count, and number of syllables.
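The orthographic neighborhood count described above can be made concrete with a short sketch. The function and the toy lexicon below are illustrative assumptions, not from any published stimulus set; the count is simply the number of same-length words in the lexicon that differ from the target in exactly one letter position.

```python
# Illustrative sketch: orthographic neighborhood count, i.e., the number of
# words in a lexicon formed by changing exactly one letter of the target
# while keeping word length and letter positions fixed.
def neighborhood_count(word, lexicon):
    """Count lexicon entries differing from `word` in exactly one letter."""
    count = 0
    for candidate in lexicon:
        if len(candidate) != len(word):
            continue  # neighbors must match in length
        mismatches = sum(a != b for a, b in zip(word, candidate))
        if mismatches == 1:
            count += 1
    return count

# Toy lexicon for demonstration only.
LEXICON = {"cat", "bat", "hat", "cot", "car", "dog", "cart"}
print(neighborhood_count("cat", LEXICON))  # bat, hat, cot, car -> 4
```

A real application would substitute a full word list (e.g., a corpus-derived lexicon) for the toy set, but the counting logic is the same.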
When creating a list of items for a lexical decision task, researchers report the mean values for each of the lexical features they are controlling and manipulating. However, researchers typically do not report the amount of variability in the lexical features being controlled. The amount of variability in a list of items can affect lexical decision performance. For example, in two separate studies, Glanzer and Ehrenreich (1979) and Gordon (1983) manipulated the variability of the frequency ratings. In each study, participants received six blocks of trials containing either a pure low-frequency list of items, a pure medium-frequency list, a pure high-frequency list, or one of three mixed-frequency, high-variability lists. The mixed-frequency lists were created by choosing an equal number of items from each of the three pure frequency lists. Their results indicated that increasing the variability of a list of word items both increased reaction time and decreased the error rate.
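The statistical consequence of the mixing procedure can be sketched in a few lines. The frequency counts below are made up for illustration; the point is that drawing equally from the three pure lists leaves the mixed list's mean between the pure extremes while inflating its variability.

```python
# Illustrative sketch with invented frequency counts (e.g., per million):
# combining equal numbers of items from pure low-, medium-, and
# high-frequency lists yields a mixed list whose mean falls between the
# pure extremes and whose variability exceeds that of any pure list.
import statistics

low = [1, 2, 3, 4, 5]             # pure low-frequency list
medium = [40, 45, 50, 55, 60]     # pure medium-frequency list
high = [300, 350, 400, 450, 500]  # pure high-frequency list

mixed = low + medium + high       # equal items drawn from each pure list

print(statistics.mean(mixed))     # between mean(low) and mean(high)
print(statistics.stdev(mixed))    # larger than any pure list's stdev
```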
According to Balota and Chumbley (1984), a lexical decision may be performed on the basis of an early familiarity measure. When an item is presented during a lexical decision task, a representation begins to form that becomes more and more like something stored in memory. For word items, the similarity between the forming representation and memory increases at a greater rate than it does for nonwords, and words ultimately reach a higher level of familiarity than nonwords. Therefore, a familiarity decision criterion that falls somewhere between the average familiarity of words and nonwords could serve as a basis for categorization. If the criterion is set relative to the average familiarity of words and nonwords, increasing the variability of a word list should affect accuracy but not reaction time. Increasing the variability of a word list leaves its average familiarity unchanged; however, a highly variable word list will contain items that are not very familiar, and these items have a greater chance of incorrectly being called nonwords.
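This prediction can be illustrated with a minimal simulation. The sketch below is not Balota and Chumbley's actual model; it simply assumes normally distributed familiarity values and a fixed criterion midway between the word and nonword means, with all parameter values chosen arbitrarily. Widening the word distribution while holding its mean constant should then raise the rate at which words are incorrectly called nonwords.

```python
# Minimal sketch of a criterion-based familiarity decision (illustrative
# parameters only, not Balota & Chumbley's published model): familiarity
# is drawn from a normal distribution, and an item is called a "word"
# when its familiarity exceeds a fixed midway criterion.
import random

random.seed(1)

MU_WORD, MU_NONWORD = 3.0, 0.0          # mean familiarity (arbitrary units)
CRITERION = (MU_WORD + MU_NONWORD) / 2  # criterion midway between the means
N = 100_000                             # simulated word trials

def word_error_rate(sd):
    """Proportion of words whose familiarity falls below the criterion,
    i.e., words incorrectly classified as nonwords."""
    errors = sum(random.gauss(MU_WORD, sd) < CRITERION for _ in range(N))
    return errors / N

low_var = word_error_rate(sd=1.0)   # low-variability word list
high_var = word_error_rate(sd=2.0)  # high-variability word list

# Mean familiarity is identical in both cases, but the more variable list
# produces more low-familiarity items and hence more "nonword" errors.
print(f"error rate, sd=1.0: {low_var:.3f}")
print(f"error rate, sd=2.0: {high_var:.3f}")
```

Under these assumptions the high-variability list yields roughly three times the error rate of the low-variability list, while the average familiarity, and thus any reaction-time measure tied only to the mean, is unchanged.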
There are a number of lexical decision models that utilize decision criteria based on some measure of familiarity, or of similarity between the forming representation and memory (Grainger & Jacobs, 1996; Joordens, Piercey, & Azarbehi, 2009; Ratcliff, Gomez, & McKoon, 2004). However, the purpose of this study is not to perform a critical evaluation of existing models of lexical decision. Rather, the purpose is to determine how changes in the variability of a word list affect lexical decision performance.
The mixed lists utilized by Glanzer and Ehrenreich (1979) and Gordon (1983) were created by combining high-, medium-, and low-frequency words. Therefore, the resulting mixed lists had an average frequency rating that fell somewhere between those of the low- and high-frequency pure lists, and had greater variability than any of the pure lists. …