Academic journal article The Psychological Record

Effects of an Online Stimulus Equivalence Teaching Procedure on Research Design Open-Ended Questions Performance of International Undergraduate Students

Article excerpt

Many behavior analysts have been working to translate highly effective teaching procedures (developed and tested mostly in laboratories and laboratory-like settings) into applied and service-delivery contexts, such as schools (Johnson and Street 2004), college classrooms (Neef et al. 2011), and online education (Walker and Rehfeldt 2012), among other settings.

Stimulus equivalence is among the procedures with empirical evidence of efficacy and efficiency for teaching different skills to different populations in different settings (de Rose et al. 1996; Fienup et al. 2010; Fienup and Critchfield 2010). It is an attempt to explain how the myriad arbitrary relations among signs and their referents, which characterize human symbolic functioning, are formed. Equivalence-based instruction is considered important because it aims at teaching generatively (Fienup et al. 2010; Fienup and Critchfield 2010). This implies programming procedures so that direct teaching of a few conditional discriminations yields untaught performances (Fienup et al. 2010; Green and Saunders 1998; Sidman 1971). The possibility of producing new, untaught behaviors is very important when one considers the limited instructional time available to teach a given content. As discussed by Lovett et al. (2011), Walker et al. (2010), and Walker and Rehfeldt (2012), this possibility becomes even more noteworthy when topography-based responses can be produced by teaching selection-based responses.
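The generative logic described above can be made concrete with a small sketch. Assuming two directly taught conditional discriminations (A1-B1 and B1-C1), the untaught relations expected to emerge follow from reflexivity, symmetry, and transitivity. The code below is an illustrative model of that derivation, not a procedure taken from any of the cited studies:

```python
# Illustrative sketch: which untaught relations are implied by equivalence
# after directly teaching a small set of conditional discriminations.

def derived_relations(trained):
    """Close the trained pairs under reflexivity, symmetry, and
    transitivity -- the defining properties of stimulus equivalence."""
    relations = set(trained)
    changed = True
    while changed:
        changed = False
        new = set()
        for a, b in relations:
            new.add((b, a))          # symmetry
            new.add((a, a))          # reflexivity
            new.add((b, b))
            for c, d in relations:
                if b == c:
                    new.add((a, d))  # transitivity
        if not new <= relations:
            relations |= new
            changed = True
    return relations

# Two taught conditional discriminations...
trained = {("A1", "B1"), ("B1", "C1")}
# ...yield seven untaught (emergent) relations, including the
# equivalence relation C1 -> A1.
emergent = derived_relations(trained) - trained
```

Under this model, teaching only two relations among three stimuli yields a nine-member relation class, which is the source of the instructional economy the cited authors emphasize.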

In the last few years, several studies have investigated the feasibility of using stimulus equivalence in higher education instruction (Critchfield and Fienup 2010; Fields et al. 2009; Fienup et al. 2010; Fienup and Critchfield 2010, 2011; Lovett et al. 2011; Ninness et al. 2005, 2006, 2009; Walker and Rehfeldt 2012; Walker et al. 2010). Target topics of instruction included statistics (Critchfield and Fienup 2010; Fields et al. 2009; Fienup and Critchfield 2010, 2011), brain--behavior relations (Fienup et al. 2010), mathematical formulas and their graphed analogues (Ninness et al. 2005, 2006, 2009), disabilities (Walker et al. 2010), and single-subject designs (Lovett et al. 2011; Walker and Rehfeldt 2012). Overall, the dependent variables of interest in these studies included percentage of correct responses, average number of correct responses and number of correct trials in testing, number of trials (or blocks of trials) to mastery criterion in teaching sessions, average time to complete the tasks, total time of engagement in tasks, errors during pretest, generalization to novel relations and responses, and social validity of the procedures. Teaching procedures included matching-to-sample (MTS) tasks, training to mastery, and accuracy feedback. Additional procedures included, but were not limited to, introductory lectures, computer-assisted instruction, error-correction procedures, and differential reinforcement with gradual fading. The teaching procedures have been presented in a variety of formats, such as computerized; paper and pencil; online; and live, oral instruction.
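The core teaching procedures named above (MTS trials, accuracy feedback, and a train-to-mastery criterion) can be sketched as follows. This is a minimal simulation under assumed parameters (block size, three-comparison arrays, a 100% mastery criterion); the specific values varied across the cited studies:

```python
import random

def run_block(pairs, answer_fn, comparisons_per_trial=3):
    """One MTS block: present each sample with its correct comparison plus
    distractors; return the proportion of correct selections."""
    samples = list(pairs)
    correct = 0
    for sample, target in samples:
        distractors = [t for _, t in samples if t != target]
        k = min(comparisons_per_trial - 1, len(distractors))
        options = random.sample(distractors, k) + [target]
        random.shuffle(options)
        if answer_fn(sample, options) == target:
            correct += 1  # accuracy feedback would be delivered here
    return correct / len(samples)

def train_to_mastery(pairs, answer_fn, criterion=1.0, max_blocks=20):
    """Repeat teaching blocks until accuracy meets the mastery criterion;
    return the number of blocks required (a common dependent variable)."""
    for block in range(1, max_blocks + 1):
        if run_block(pairs, answer_fn) >= criterion:
            return block
    return None  # criterion not met within the session

# Hypothetical usage: three A-B conditional discriminations and a
# simulated errorless learner.
pairs = [("A1", "B1"), ("A2", "B2"), ("A3", "B3")]
perfect_learner = lambda sample, options: "B" + sample[1]
blocks_to_mastery = train_to_mastery(pairs, perfect_learner)
```

Replacing `perfect_learner` with a human response function (e.g., a click handler in an online interface) turns the same loop into the kind of computerized teaching session the reviewed studies employed.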

These studies advanced the application of stimulus equivalence to teach complex verbal behaviors (i.e., college-level topics) to advanced learners. Their results were successful in teaching participants the target relations to high levels of accuracy, in demonstrating the emergence of many novel relations, and in demanding very little student time. In addition, these studies demonstrated the efficacy and efficiency of stimulus equivalence under different conditions (i.e., different instruction formats, material formats, and settings). For example, while Ninness et al. (2005) used instructor-generated explanations (i.e., lecture) before the MTS tasks, Fienup and Critchfield (2010) did not provide this type of explanation, thus offering a more direct demonstration of the efficacy of equivalence procedures. …
