Academic journal article Educational Technology & Society

Designing Effective Video-Based Modeling Examples Using Gaze and Gesture Cues


Introduction

Over the past decade, learning from videos in which a human model demonstrates and (often) explains how to complete a task has rapidly gained popularity, both in formal and informal educational settings (e.g., YouTube). Such so-called video-based modeling examples provide an opportunity for example-based learning, which is a very effective form of instruction, especially for novice learners (for a review, see van Gog & Rummel, 2010). However, video-based modeling examples come in many forms, and little is known about the design characteristics that make them effective in terms of attention guidance and learning (van Gog & Rummel, 2010). For instance, in video examples in which the model stands next to a whiteboard or smartboard on which the learning task being explained is visualized (a typical modern classroom situation), the presence of the model may create a type of split-attention effect. The split-attention effect is the adverse effect on learning that occurs when students have to mentally integrate information from multiple sources (Ayres & Sweller, 2014). On the other hand, gaze direction and pointing gestures made by the model can automatically trigger attention shifts (Sato, Kochiyama, Uono, & Yoshikawa, 2009). In this way, gaze and gesture cues might guide learners' attention to relevant aspects of the learning material at the right moment and thereby alleviate such split attention. The question addressed in the present study is: What do learners attend to in a modeling example in which the model is visible, and can the model effectively guide learners' attention by gazing or gesturing at parts of the task?

The model as a potential source of split attention

The reason why seeing the model in the video example might evoke a division of attention between the model and the task the model refers to is that people's attention is automatically drawn to other people's faces. There is probably no other object that is looked at as often as the human face, and face perception may well be the most highly developed visual skill in humans, who possess an extensive neural circuit involved in face perception and processing (Haxby, Hoffman, & Gobbini, 2000). Moreover, it has been shown that humans prefer to look at faces from a very young age (Tzourio-Mazoyer et al., 2002).

In a study by Gullberg and Holmqvist (2006), in which observers had to listen to and recall an event described by a visible speaker, it was shown that observers focused primarily on the speaker's face. Eye tracking was used to investigate the amount of viewing time spent looking at a speaker's face in three conditions: (1) the speaker described the event directly to the addressee, (2) a video of the speaker (recorded in condition 1) was presented at life size, or (3) that same video was presented on a 28-inch TV screen. Results showed that over 90% of viewing time was spent looking at the speaker's face (95.6%, 94.2%, and 90.8% in conditions 1, 2, and 3, respectively). Although observers had to recall the event the speaker talked about, the speaker did not demonstrate a task, so this study did not investigate how learners attend to human modeling examples in which a task is demonstrated and explained.

Even though the findings reviewed above suggest that the model's face is likely to receive a substantial amount of attention, it is unlikely that learners would look at the model 90% of the time, since they know they have to observe the demonstration and will be tested on their ability to perform the task themselves later on. Indeed, in a recent study using video-based modeling examples in which it was demonstrated how to solve a puzzle problem by manipulating objects (the model was seated behind a table, with the puzzle's objects placed on the table), half of the participants saw a version of the example in which the face of the model was visible and the other half saw a version of the same example in which the face of the model was not visible. …
