In their article "The Negative Effects of Positive Reinforcement in Teaching Children with Developmental Delay," Biederman, Davey, Ryder, and Franchi (1994) sought to determine "the relative efficiency of hand-over-hand (active) instruction versus passive observation" (p. 460). Passive observation was contrasted with hand-over-hand instruction with and without verbal reinforcement. Biederman et al. found that "the verbal reinforcement used in the interactive modeling accounted for poor performance" (p. 462). From this finding they concluded that verbal reinforcement, in combination with interactive modeling strategies, "may produce confusion in children with language and learning difficulties; the child may be uncertain about exactly what behavior is being reinforced, or the reinforcement may serve to distract the child from what is, for him or her, a difficult sequence of behavior" (p. 464).
In this response, I assert that Biederman et al. (1994) have misunderstood the phenomenon of reinforcement. At issue are the definition and measurement of the independent and dependent variables, which have led to an unwarranted conclusion. I maintain that their finding cannot be functionally related to the independent variable they called "positive reinforcement" and described as "verbal feedback and other sorts of reinforcement," because they did not use measurement tactics capable of documenting the phenomenon of reinforcement.
THE PROBLEM OF DEFINITION
There is an old adage that begins, "If it looks like a duck, sounds like a duck, walks like a duck ..." This adage (in all its variants) is commonly used to argue that if something has the appearance of a thing, it probably is that thing. As a general rule of thumb, the adage is probably good advice. As a scientific rule, it is insufficient as an explanatory mechanism. My contention is that the criteria one uses to assess a phenomenon such as reinforcement must be those that define its existence. In defining a duck, zoologists would not be limited to its physical appearance (e.g., shape, size); they would observe its behavior (e.g., quacking, waddling) and at times assess its structure and biology (e.g., skeleton and blood chemistry). Two criteria are necessary and sufficient for examining the phenomenon of reinforcement: its operational description and its process or functional effect. "As an operation, reinforcement refers to the occurrence of a consequence subsequent to a behavior. As a process, reinforcement refers to the increase in responding as a function of the occurrence of the consequence" (Cooper, Heron, & Heward, 1987, p. 257). The existence of either criterion without the other inhibits an analysis of reinforcement.
Consequences that increase behavior are called reinforcers (Cooper et al., 1987). In contrast, consequences that decrease the probability of occurrence of a behavior are termed punishers, and the phenomenon is termed punishment. Reinforcement and punishment are commonly confused and misunderstood. Part of the problem lies in the failure to use the term reinforcement for the phenomenon when behavior increases following the contingent application of a reinforcer, and the term punishment for the phenomenon when behavior decreases following the contingent application of a punisher. Often, if it looks like reinforcement and sounds like reinforcement, it is assumed to be reinforcement. However, the only criteria that adequately define reinforcement (or punishment) are (a) the temporal relation between the behavior and its reinforcer and (b) the effect that consequences have on the occurrence of behavior (Alberto & Troutman, 1990; Cooper et al., 1987; Malott, Whaley, & Malott, 1993; Martin & Pear, 1983; Skinner, 1953). Thus, the theme of this commentary is: "It may look and sound like a reinforcer, but unless it increases the frequency of the behavior it follows, it's not a reinforcer. …