Academic journal article Memory & Cognition

Examining Recognition Criterion Rigidity during Testing Using a Biased-Feedback Technique: Evidence for Adaptive Criterion Learning


Article excerpt

Recognition models often assume that subjects use specific evidence values (decision criteria) to adaptively parse continuous memory evidence into response categories (e.g., "old," "new"). Although explicit pretest instructions influence criterion placement, these criteria appear extremely resistant to change once testing begins. We tested criterion sensitivity to local feedback using a novel biased-feedback technique designed to tacitly encourage certain errors by indicating they are the correct choices. Experiment 1 demonstrated that fully correct feedback had little effect on criterion placement, whereas biased feedback during Experiments 2 and 3 yielded prominent, durable, and adaptive criterion shifts, with observers reporting that they were unaware of the manipulation in Experiment 3. These data suggest that recognition criteria can be easily modified during testing through a form of feedback learning that operates independently of stimulus characteristics and observers' awareness of the nature of the manipulation. This mechanism may be fundamentally different from criterion shifts following explicit instructions and warnings, or shifts linked to manipulations of stimulus characteristics combined with feedback highlighting those manipulations.

A common way of characterizing recognition decisions is as a unidimensional signal detection process. In its simplest form, this assumes that performance is governed by a scalar indication of the amount of mnemonic evidence (signal), and a value or set of values that is used to parse that evidence into discrete response categories (decision criterion/criteria) (Macmillan & Creelman, 1991). In the most common model, researchers assume two normal evidence distributions, one for targets and one for lures, offset by a distance representing the discriminability of the item types. For simple old/new classification, only a single criterion is used to divide the continuum. In contrast, when asked to make confidence ratings, observers are assumed to employ multiple criterion values to parse the continuum into the required number of confidence ratings. When the old and new item endorsement rates are cumulated from most to least confident, a characteristic curve known as the receiver operating characteristic (ROC) is traced out (Figure 1). The ROC function is important because it specifies all the old and new item response rates that are permissible for a given level of accuracy.
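The equal-variance signal detection model described above can be sketched in a few lines of Python. The distribution offset (d'), the single old/new criterion, and the set of confidence criteria below are illustrative values chosen for the sketch, not estimates reported in the article; the point is only to show how one criterion yields a hit and false-alarm rate, and how cumulating over multiple confidence criteria traces out ROC points.

```python
# Minimal sketch of an equal-variance signal detection model of recognition.
# All numeric values (d', criterion placements) are illustrative assumptions.
from statistics import NormalDist

d_prime = 1.5                         # offset between target and lure distributions
target = NormalDist(mu=d_prime, sigma=1.0)
lure = NormalDist(mu=0.0, sigma=1.0)

# Simple old/new classification: a single criterion divides the evidence axis.
# Evidence above the criterion is called "old".
criterion = d_prime / 2               # unbiased placement, midway between the means
hit_rate = 1 - target.cdf(criterion)  # targets correctly called "old"
fa_rate = 1 - lure.cdf(criterion)     # lures incorrectly called "old"

# Confidence ratings: multiple criteria parse the same continuum. Cumulating
# endorsement rates from the strictest to the most lax criterion traces out
# the (false-alarm rate, hit rate) points of the ROC.
criteria = [1.75, 1.25, 0.75, 0.25, -0.25]  # most to least confident "old"
roc_points = [(1 - lure.cdf(c), 1 - target.cdf(c)) for c in criteria]
```

Shifting `criterion` toward lower values models the lax responding discussed below: both the hit rate and the false-alarm rate rise, which is advantageous whenever correct "old" responses are rewarded more than incorrect endorsements of lures are punished.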

Critically, although observers are assumed to have little strategic control over the distributions of evidence that are available during recognition testing, the placement of decision criteria is often assumed to be under a high level of observer control, with observers being capable of shifting criterion positions in order to favor particular outcomes. For example, if subjects are rewarded for detecting old items, yet face no punishment for incorrectly endorsing new items, then it would be advantageous to rapidly shift the old/new criterion to a more lax position, increasing the correct "old" rate and, hence, the reward. However, despite its intuitive appeal and clear adaptive advantage, the extant evidence is highly mixed with regard to the ability of subjects to adaptively reposition recognition decision criteria. We briefly review this evidence below, beginning with the one manipulation that does appear to reliably invoke adaptive memory criterion positioning, namely, explicit pretest instructions or warnings.

Several instruction methods yield adaptive memory criterion shifts when presented prior to testing. These include instructing subjects to favor either high-confidence "old" or "new" responses (Azimian-Faridani & Wilding, 2006), providing them with either veridical or misleading information about upcoming target-lure ratios (Hirshman & Henzler, 1998; Rotello, Macmillan, Reeder, & Wong, 2005; Strack & Förster, 1995), or informing them about differences in relative payoffs for certain outcomes (Van Zandt, 2000). …
