Academic journal article Memory & Cognition

Lure Similarity Affects Visual Episodic Recognition: Detailed Tests of a Noisy Exemplar Model

Article excerpt

Summed-similarity models of visual episodic recognition memory successfully predict the variation in false alarm rates across different test items. With data averaged across subjects, Kahana and Sekuler (2002) demonstrated that subjects' performance appears to change along with the mean similarity among study items; with high interstimulus similarity, subjects were less likely to commit false alarms to similar lures. We examined this effect in detail by systematically varying the coordinates of study and test items along a critical stimulus dimension and measuring memory performance at each point. To reduce uncontrolled variance associated with individual differences in vision, the coordinates of study and test items were scaled according to each subject's discrimination threshold. Fitting each of four summed-similarity models to the individual subjects' data demonstrated a clear superiority for models that take account of interitem similarity on a trialwise basis.

Kahana and Sekuler (2002) adapted Sternberg's (1966, 1975) procedure to study episodic recognition memory for series of textures, which were created by linearly summing sinusoidal gratings. This adaptation made it possible to quantify and characterize interference in memory among successively presented stimuli. Unlike semantically rich stimuli, such as words or images of recognizable and nameable objects, multidimensional textures are not burdened by the complexities of extralaboratory associations, and they resist symbolic coding (Della-Maggiore et al., 2000; Hwang et al., 2005). Because of their well-defined, natural metric representations in a low-dimensional space (Kahana & Bennett, 1994), compound grating stimuli facilitate manipulation of interitem similarity relations, which are important determinants of visual episodic recognition (Kahana & Sekuler, 2002; Sekuler, Kahana, McLaughlin, Golomb, & Wingfield, 2005; Zhou, Kahana, & Sekuler, 2004). The availability of a natural stimulus metric for defining similarity relations among items enables detailed mathematical accounts of recognition memory to be applied to results from individual stimulus lists.

Using Nosofsky's (1984, 1986) generalized context model (GCM) as our starting point, we developed NEMO, a noisy exemplar model, which combines core aspects of GCM with significant new assumptions. First, NEMO follows the tradition of multidimensional signal detection theory (e.g., Ashby & Maddox, 1998) in assuming that stimulus representations are coded in a noisy manner, with a different level of noise associated with each dimension. Second, NEMO augments the summed-similarity framework of item recognition (Brockdorff & Lamberts, 2000; Clark & Gronlund, 1996; Humphreys, Pike, Bain, & Tehan, 1989; Lamberts, Brockdorff, & Heit, 2003; Nosofsky, 1991, 1992) with the idea that within-list summed similarity (not just probe-to-list-item similarity) influences recognition decisions. In fitting NEMO to data from two experiments, Kahana and Sekuler (2002) found that subjects were more likely to say yes to lures following study of lists whose items had low interitem similarity than to lures following lists whose items had high interitem similarity. Subjects appear to interpret probe-to-list similarity in light of within-list similarity, with greater list homogeneity leading to a greater tendency to reject lures that are similar to one or more of the studied items. The impact of within-list similarity was confirmed by Nosofsky and Kantner (2006), using color patches as stimuli.
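The decision rule described above can be sketched in a few lines of code. The snippet below is an illustrative simplification, not the authors' fitted model: it assumes an exponential similarity function of Euclidean distance (GCM-style), dimension-specific Gaussian representational noise, and a single free weight (here called `beta`) on within-list summed similarity; all function and parameter names are hypothetical.

```python
import math
import random

def similarity(x, y, tau=1.0):
    """GCM-style similarity: exponential decay with Euclidean distance."""
    d = math.sqrt(sum((a - b) ** 2 for a, b in zip(x, y)))
    return math.exp(-tau * d)

def nemo_decision(probe, study_list, noise_sd=(0.1, 0.1),
                  beta=0.0, criterion=0.5, tau=1.0):
    """Sketch of a NEMO-style recognition decision.

    Each stored item is perturbed by dimension-specific Gaussian noise,
    and the model responds "yes" (old) when probe-to-list summed
    similarity, plus a weighted within-list summed-similarity term,
    exceeds a criterion.
    """
    # Noisy stored representations (one noise level per stimulus dimension)
    noisy_list = [tuple(v + random.gauss(0, sd) for v, sd in zip(item, noise_sd))
                  for item in study_list]
    # Summed similarity of the probe to each stored item
    probe_sum = sum(similarity(probe, item, tau) for item in noisy_list)
    # Within-list summed similarity over all item pairs
    within = sum(similarity(noisy_list[i], noisy_list[j], tau)
                 for i in range(len(noisy_list))
                 for j in range(i + 1, len(noisy_list)))
    # A negative beta would capture the reported effect: homogeneous
    # (high interitem similarity) lists make "yes" responses to similar
    # lures less likely.
    return probe_sum + beta * within > criterion
```

With `noise_sd=(0, 0)` and `beta=0` this reduces to a deterministic summed-similarity rule: a probe matching a studied item yields high summed similarity and an "old" response, while a distant lure falls below the criterion.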

In vision research, many studies focus on the individual performance of a small number of subjects. In contrast, memory research tends to focus on performance averaged across subjects. The statistical advantage of averaging is obvious, but this advantage exacts a toll. It can introduce qualitative changes into the pattern of data, thereby distorting the outcome of quantitative modeling (e.g., Maddox, 1999). …
