History Effects on Induced and Operant Variability

Two experiments evaluated history effects on induced and operant variability. College students typed three-digit sequences on a computer keyboard. Sequence variability was induced (by no reinforcement or variation-independent reinforcement) or reinforced (by variation- or repetition-dependent reinforcement). Conditions with induced and operant variability were presented according to a reverse between-groups design. In Experiment 1, we examined transitions from the variation or repetition contingencies to no reinforcement, and vice versa. In Experiment 2, the variation or repetition contingencies were followed or preceded by variation-independent reinforcement. The results showed that (1) a history of no reinforcement impaired operant variability learning; (2) induced variability levels were higher and lower after a history of reinforcement for variation and repetition, respectively; (3) repetition was more easily disrupted by no reinforcement and independent reinforcement than was variation; and (4) response variability and stability were a function of past and current reinforcement conditions. These results indicate that reinforcement history influences both induced and operant variability levels.
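The excerpt does not state the exact criterion used to judge a three-digit sequence as a "variation" or a "repetition". As one common form of such a contingency in this literature, a lag-type criterion (reinforce sequences that differ from, or match, recent ones) can be sketched; the function names and the `lag` parameter here are hypothetical illustrations, not the authors' actual procedure:

```python
from collections import deque

def make_contingency(mode, lag=1):
    """Return a judge function for typed three-digit sequences under a
    hypothetical lag-type criterion.

    mode: 'vary'   -> reinforce a sequence absent from the last `lag` sequences
          'repeat' -> reinforce a sequence matching the immediately prior one
    """
    recent = deque(maxlen=lag)

    def judge(seq):
        if mode == 'vary':
            reinforced = seq not in recent
        else:  # 'repeat'
            reinforced = bool(recent) and seq == recent[-1]
        recent.append(seq)
        return reinforced

    return judge

# Example trial stream under a variation contingency with lag 2
vary = make_contingency('vary', lag=2)
outcomes = [vary(s) for s in ['123', '123', '231', '123']]
# -> [True, False, True, False]: a sequence earns reinforcement only
#    when it differs from the two most recent sequences
```

Variation-independent reinforcement, by contrast, would deliver reinforcers regardless of what `judge` returns, which is what makes the resulting variability "induced" rather than operant.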


Behavioral variability is influenced by the environment (Neuringer, 2002, 2003, 2004). One source of influence comes from exposure to schedules of reinforcement. When reinforcement is plentiful, behavior usually becomes stereotyped (e.g., under a continuous schedule of reinforcement, CRF; Schwartz, 1980, 1982). When the frequency of reinforcement is reduced (e.g., under intermittent schedules; Eckerman & Vreeland, 1973; Gharib, Gade, & Roberts, 2004), or reinforcement is withheld (e.g., under extinction; Antonitis, 1951; Morgan & Lee, 1996; Stokes, 1995), variability increases in comparison with that observed in the CRF schedule. This increase in variability is said to be induced, rather than reinforced, because there is no contingent relation between variability and reinforcer presentation or omission.

The second source of environmental control is reinforcement. Behavioral variability, as with other behavioral dimensions (e.g., force, topography, duration, rate), is sensitive to operant contingencies. For example, variability is greater when it is required for reinforcement than in the absence of such a requirement (e.g., Page & Neuringer, 1985). In addition, the degree of variation can be precisely controlled by reinforcement contingencies, so that requirements of low, intermediate, and high variability generate the corresponding level of variation (Machado, 1989; Stokes, 1999; Wagner & Neuringer, 2006). Moreover, studies show that discriminative stimuli can affect the probability of varying as opposed to repeating (Denney & Neuringer, 1998; Page & Neuringer, 1985; Ward, Kynaston, Bailey, & Odum, 2008).

A large body of evidence indicates that responding is determined by both past and current contingencies of reinforcement (cf. Lattal & Neef, 1996; Wanchisen & Tatham, 1991). Accordingly, some studies have indicated that past exposure to conditions that induce variability, or to variation and repetition contingencies of reinforcement, alters the degree of variation in a subsequent condition. For example, Neuringer, Kornell, and Olufs (2001) reported that extinction-induced variability in rats was greater after a variation history than after a repetition history. Interestingly, when the degrees of variation during training and extinction were compared, increases in variability were found to be smaller after variation training than after repetition training, indicating that repetition was more disrupted by extinction than was variation. Another interesting finding was a mix of variability and stability: extinction-induced variation was characterized by an increase in the frequency of sequences that had rarely been emitted during the variation and repetition training. …