Human-Computer Interaction (HCI) in Educational Environments: Implications of Understanding Computers as Media


This article is a review of the literature in the field of Human-Computer Interaction (HCI) as it may apply specifically to educational environments. The origin of HCI and its relationship to other areas of study such as human factors, usability, and computer interface design are examined. Additionally, the notion of computers as a medium is investigated in order to understand the unique properties of HCI as opposed to other forms of man-machine interaction. The article seeks to answer questions about current HCI issues and their relevance to education, and to sketch out a research agenda for the future.

Designing for the little screen on the desktop has the most in common with designing for the Big Screen. Interactive software needs the talents of a Disney, a Griffith, a Welles, a Hitchcock, a Capra... (Nelson, 1995, p. 243)

History of Media in Education

The history of educational technology shows a pattern of moments of exaggerated promise at the introduction of new technology, followed by disappointment. Thomas Edison predicted in 1913 that books would be replaced by motion pictures (Cuban, 1986; Metlitzky, 1999). In 1940, George F. Zook, in his American Council on Education report, described film as "the most revolutionary instrument introduced in education since the printing press" (Hoban, 1942, p. 16). However, after these early periods of great promise, the history of the use of technology in education is one of resistance to change and disappointment. Hoban (1942) blames this resistance partially on the Puritanical belief in the power of words, and a suspicion of any education that seems pleasurable. While film came into wide use in educational environments during WWII when the military needed a device to speed up the training of masses of soldiers with various skill levels and education, it never gained acceptance in higher education in the same way (Hoban, 1942).

The literature on the use of film and TV in educational environments is striking in that a great deal was written and published in the period of 1930-1950, and very little afterwards. Research in the uses of film in education has, in the opinion of one of the leading researchers in this area, remained almost at a standstill since 1950 (Hoban, 1971). In the 1960s and 1970s, a few authors focused on how to use films creatively as an augmentation and resource in the classroom (Schillaci & Culkin, 1970; Worth, 1981), while others argued about the educational value of film and television, especially Sesame Street (Goldman & Burnett, 1971; Cook, Appleton, Conner, Shaffer, Tamkin, & Weber, 1975). Overall, there is surprisingly little written about the uses of film and television in education.

With the introduction of the personal computer, large claims were once again made for educational applications. The programmed learning movement, or auto-instructional movement, began with the introduction of computers and, early on, emphasized B.F. Skinner's model of operant conditioning, response mode, error rate, and reinforcement (DeCecco, 1964). Later, Computer-Aided Instruction (CAI) and Intelligent Computer-Aided Instruction (ICAI) developed, seeking to incorporate artificial intelligence capabilities (Frasson & Gauthier, 1990). However, neither of these movements had much success in either elementary or higher education.

Computer Medium

Computers are usually viewed as tools or instruments for storing and manipulating data (Oren, 1995). However, at times in the literature on human-computer interaction (HCI), there are suggestions that the computer is a medium, not a tool, and that it might be fruitful to investigate this notion further (Baecker & Small, 1995; Head, 1999; Kay, 1995; Oren, 1995). As the use of computers in educational environments increases, the need for a more sophisticated understanding of computer design issues becomes more important--an understanding of computers as a medium brings this kind of complexity to the research.

The literature suggests that computers might parallel the evolution of other forms of media (Mountford, 1995; Oren, 1995). According to Shneiderman (1998), Marshall McLuhan (1964) pointed out that new media are dependent on old media until the unique features of the new media are appreciated and developed. In the way that early movies relied on novels and plays for content, early computing automated the work of typewriters and accounting ledgers. Software engineering has been dominated by engineers (Mountford, 1995), just as the filmmaking process was first controlled by engineers. In this way, much of the current educational software simply automates typical classroom tasks.

Now that we are reaching a more mature stage in the cycle of the development of the personal computer, it is time to look more closely at the specific characteristics of the medium that might be well suited for educational applications. Consequently, this article seeks to find the answers to the following questions: What is the history of human-computer interaction (HCI)? What are the current issues in HCI research? What aspects of HCI research are relevant to education? Finally, what should be the HCI research agenda for the future, particularly if the discussion is broadened in the context of computers as a medium?

History of HCI

The general study of human-machine interaction began in WWII with a focus on understanding the psychology of soldiers interacting with weapon and information systems such as signal detection and cockpit instrument displays (Card, Moran, & Newell, 1983). After the war, human-machine interaction began to be examined more broadly in relationship to work and consumer product environments (Helander, 1998). Human-computer interaction (HCI) developed from this work and is a multi-disciplinary field involving computer science, psychology, engineering, ergonomics, sociology, anthropology, philosophy, and design (Card, Moran & Newell, 1983; Faulkner, 1998; Head, 1999). HCI is concerned with the design, evaluation, and implementation of interactive computing systems for human use (Head, 1999; Card, Moran, & Newell, 1983).

The subject of HCI has had various labels and acronyms over the years. It is generally used to mean human-computer interaction, but sometimes is described as human-computer interface. Additionally, CHI, or computer-human interaction, is sometimes used, as well as man-machine interface, or MMI (Faulkner, 1998). The primary focus of HCI is the user. The field, as a whole, tries to better understand the interactions between the user and computer (Faulkner, 1998; Head, 1999; Maddix, 1990). The primary factors considered in examining human-computer interactions are organizational, environmental, cognitive, and task factors, along with constraints and functionality (Preece & Shneiderman, 1994; Head, 1999).

Cognitive research and principles developed in the 1980s provided much of the early HCI framework (Faulkner, 1998). The literature on HCI focuses in part on cognitive processes, especially in terms of the capacities of users and how these affect users' abilities to carry out specific tasks with computer systems. In contrast to behaviorism, which argues that action must be understood in terms of observable behavior between humans and the environment, cognitive psychology focuses on mental processes, sometimes expressed in computational terms (Wooffitt, Fraser, Gilbert & McGlashan, 1997).

In terms of cognitive issues, HCI focuses on motor, perceptual, and cognitive systems and two types of memory: working and long-term (Card, Moran & Newell, 1983). According to Card, Moran, and Newell (1983), the most effective technique for retaining information is to associate it with something already in long-term memory. Thus, much of this literature on the cognitive aspects of HCI is concerned with the relationship between long- and short-term memory. Accordingly, memory is broken down into the following aspects: processor cycle time, memory capacity, memory decay rate, and memory code type (Card, Moran, & Newell, 1983).

Human Factors

HCI is a subset of the field of human factors that also includes interface design, system/user communications, and end-user involvement (Carey, 1991; Reisner, 1987). The term "human factors" is defined by Carey (1991) as "the study of the interaction between people, computers, and their work environment" (p. 2). The objective of human factors research is to create information systems and work environments that help to make people more productive and more satisfied with their work life. However, the overall emphasis of human factors is on system performance, not on human satisfaction (Carey, 1991). Today, most computer and software companies have human factors staff (Helander, 1998), and Shneiderman (1987) claims that the diverse use of computers is stimulating widespread interest in human factors issues. He points to five primary human factors: "time to learn, speed of performance, rate of errors by users, subjective satisfaction, and retention over time" (Shneiderman, 1987, p. 15).

Human error studies are part of the human factors literature. This research has taken two different routes: natural science and cognitive science approaches (Reason, 1990). HCI is concerned with the cognitive science approach, and we shall see that the field is very much focused on learning how to minimize user error. Reason (1990) identifies three basic kinds of errors: (a) skill-based, (b) rule-based, and (c) knowledge-based. He argues that errors are bound up with stored knowledge structures retrieved in response to situational demands.

Usability

Another major area of study that overlaps with HCI is usability. Usability refers to the degree to which a computer system is effectively used by its users in the performance of tasks (Carey, 1991). Usability evaluates whether a computer system functions in the manner it was designed and whether it fits the design purpose (Faulkner, 1998). This evaluation includes the user interface, dialogue design, cognitive match with the user, quality of documentation, and online help (Maddix, 1990). Interface design is one aspect of usability (Cohen, 1997). In contrast to the traditional mechanical point of view, usability also attends to the cognitive and social characteristics of users when designing computer applications. Consequently, usability has a communications dimension, mediating between users and designers. In this way, usability focuses on the evolving process of communication and supporting organizational processes (Adler & Winograd, 1992). Maddix (1990) emphasizes the process aspect of usability by suggesting a parallel with the concept of "gestalt," implying the understanding of computer systems as a totality, rather than as a collection of individual parts.

Shneiderman (1999) argues that designers of older technologies such as telephones and television have reached the goal of universal usability, but computers are still too difficult to use. Designing for experienced users is difficult, but designing for a broad audience of unskilled users presents a far greater challenge. Consequently, Shneiderman (1999) suggests three usability principles: supporting a broad range of hardware and software, accommodating users with different skills and needs, and bridging the gap between what users know and what they need to know.

The literature on usability also includes information on access for special needs populations. The ACM's Special Interest Group on Computers and the Physically Handicapped (SIGCAPH) promotes accessibility for disabled users. The European conferences on User Interfaces for All also deal with interface design strategies, and the Web Accessibility Initiative of the World Wide Web Consortium has a guidelines document to support special needs users (Shneiderman, 1999).

Interface Design

Computer interface design is a subset of HCI and focuses specifically on computer input and output devices such as the screen, keyboard, and mouse. Research on the task interface has its roots in the ergonomic study of instrument panels during WWII. This research has led directly to the current computer interface design literature (Sime & Coombs, 1983). Much of this literature focuses on principles of good computer interface design. Donald Norman (1998; 1988; 1987), one of the leading researchers in this field, suggests seven principles of good design: using both knowledge in the world and knowledge in the user's head, simplifying the structure of tasks, making functions visible, getting the mappings right, exploiting constraints and limitations, designing for user error, and standardizing functions. Head (1999) references IBM's design principles: (a) a focus on users, (b) continual user testing, (c) iterative design, and (d) integrated design. Additionally, the literature is full of design truisms that tend to be repeated, such as: consistency eases learning of a system, and use no more than four colors (Head, 1999). Oddly, these design "tips" are found both in the academic-style literature and in the more popular design guides.

Current Issues in HCI Research

The following are the primary areas of debate and research within HCI.

Goals, Operators, Methods, and Selection Rules (GOMS) Models

The GOMS model is one of the basic HCI frameworks often discussed in the literature (Card, Moran, & Newell, 1983). Goals are set to provide a memory point for return if there is failure, or to refer to a navigational history. Operators are the elementary perceptual, motor, and cognitive acts the user performs, such as keystrokes and mouse movements. Methods are learned procedures that the user already knows, rather than plans created during the completion of a task. Selection is the use of a set of selection rules, often using an if-then logic (Card, Moran, & Newell, 1983). GOMS was one of the first attempts to infer a cognitive model to describe how users perform tasks, and some see GOMS as a major advance in looking at models that predict human behavior (Reisner, 1987). Black, Kay, and Soloway (1987) see GOMS as a well-developed model for the study of story and narrative understanding in computer environments. More recently, more complex models have been proposed, using variations on linguistic grammar theory, production systems, and task-action grammar (Wooffitt, Fraser, Gilbert, & McGlashan, 1997).
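The if-then character of GOMS selection rules can be illustrated with a small sketch in Python. The task, goal name, methods, and rules below are hypothetical, invented for illustration rather than drawn from Card, Moran, and Newell:

```python
# A hypothetical GOMS-style analysis of a "delete text" goal.
# Methods decompose into operators: elementary motor/perceptual acts.
METHODS = {
    "backspace-method": ["position-cursor", "press-backspace"],
    "mouse-select-method": ["point-to-start", "drag-to-end", "press-delete"],
}

def select_method(goal, context):
    """Apply if-then selection rules to pick a known method for a goal."""
    if goal == "delete-text":
        # Rule 1: for a span of a few characters, backspacing is quicker.
        if context["chars"] <= 3:
            return "backspace-method"
        # Rule 2: for longer spans, select with the mouse, then delete.
        return "mouse-select-method"
    raise ValueError(f"no selection rules for goal {goal!r}")

print(select_method("delete-text", {"chars": 2}))    # backspace-method
print(METHODS[select_method("delete-text", {"chars": 40})])
```

The point of such a model is that, given goals, methods, and rules like these, an analyst can predict which sequence of operators a practiced user will execute.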

Wooffitt, Fraser, Gilbert, and McGlashan (1997) criticize the GOMS method because users often behave by first acting when thrown into a situation, and only devising a goal afterwards. They argue that an individual's actions are produced on a moment-by-moment basis, and that behavior in particular circumstances is not rule-governed. Consequently, the GOMS method may have limitations.

Command Language Versus Direct Manipulation

In the HCI literature, two interaction styles are generally recognized: command language and direct manipulation systems. Command language systems, also known as linguistic manipulation systems or dialogue systems, were common in the early days of computing, when users communicated with the computer through typed commands. Direct manipulation systems are the graphical user interfaces (GUIs) now familiar to users of the Windows environment (Faulkner, 1998). Shneiderman (1995) is credited with introducing "direct manipulation" as a phrase for interfaces with the following characteristics: continuous representation, physical actions instead of typed commands, and rapid impact on objects with the results becoming immediately visible (Helander, 1998).

Shneiderman (1997b) argued that the usefulness of direct manipulation stemmed from the visibility of the objects of interest so that there is little need for the mental decomposition of tasks into multiple commands. Each action produces a result in the task domain that is visible in the interface. He related the basic principle to stimulus-response compatibility discussions in the human-factors literature. He claimed that the difficulty with direct manipulation was to come up with an appropriate representation or model of reality (Shneiderman, 1987). Later, we see how this discussion is renewed in the literature on metaphor and simulation.

Hypertext

Hypertext is an important issue in HCI research. It is connected to the literature on cognitive issues because hypertext is said to mimic the associative manner in which the brain works. Often, it is argued that hypertext may alter the way in which people read, write, and organize information, and it may be crucial in the development of nonlinear thinking (McKnight, Dillon, & Richardson, 1991). This literature claims that linear text limits an author's ability to address the range of needs and interests of readers. Hypertext solves this problem, the argument goes, by presenting text in a nonlinear arrangement linked by key phrases in the text (Osgood, 1994). Additionally, in line with the media discussion, one of the most important advantages of hypertext is that it is a method for integrating three technologies and industries that have been separate until recently: publishing, computing, and broadcasting in the form of television and film (Nielsen, 1990).
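The nonlinear arrangement described above can be pictured as a small graph of text nodes joined by links anchored at key phrases. The node names, texts, and link structure in this Python sketch are invented for illustration:

```python
# A toy hypertext: nodes of text, with links anchored at key phrases.
nodes = {
    "intro":  "HCI draws on psychology and design.",
    "psych":  "Cognitive psychology studies mental processes.",
    "design": "Interface design shapes what users see.",
}

# Links map (node, anchor phrase) -> destination node. Reading order is
# chosen by the reader following links, not fixed by the author.
links = {
    ("intro", "psychology"): "psych",
    ("intro", "design"): "design",
}

def follow(node, phrase):
    """Follow a link from a node via an anchor phrase, if one exists."""
    return links.get((node, phrase), node)  # stay put on a dead anchor

path = ["intro"]
path.append(follow(path[-1], "design"))  # one possible reading path
print(path)
```

Each reader's `path` through the same node set can differ, which is exactly the property the hypertext literature contrasts with linear text.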

However, the literature shows that one of the primary problems with hypertext is that it causes severe difficulty with navigation for users (McKnight, Dillon, & Richardson, 1991; Osgood, 1994). Additionally, hypertext indexing methods are often inadequate and not necessarily focused on what the user most wants to follow (McKnight, Dillon, & Richardson, 1991). Researchers, recognizing the problem of navigation, work on helping users better navigate through text, including better forms of indexing (Osgood, 1994).

Some argue that the opposition between hypertext and print reading is a false dichotomy. McKnight, Dillon, and Richardson (1991) point out that reading is not really a linear activity, but instead involves a great deal of skimming. Particularly in experienced readers, rarely is a document read straight through from beginning to end. The problem with hypertext is that its theoretical basis, which is an implied criticism of normal text forms, is inaccurate, and consequently the alternative is not the advantage that proponents imagine (McKnight, Dillon, & Richardson, 1991). While hypertext represents a change in the presentation of text, it may not alter the way in which words are read by a reader (McKnight, Dillon, & Richardson, 1991).

Graphic/visual Issues

In addition to visual interface issues, the HCI literature also touches on topics related to visual perception and how the specifics of human visual perception may impact human-computer interaction. The important issues include how light transmits information to the eye of the perceiver, how that information is processed, and finally, how that information results in conscious experience of the external world. The notion of the perceiver as a processor of information is the central focus of the psychology of visual perception (Haber & Hershenson, 1973). Some interested in broader visual research have examined the relationship between visual imagery and mental imagery in human perception (Klima, 1974).

Arnheim's work (1974) is central (and often cited) in this discussion of perceptual issues in HCI. He argued that "gestalt," the German word for shape or form, has been applied since the beginning of the 20th century to a body of scientific principles derived mainly from experiments in sensory perception. Arnheim points to Christian von Ehrenfels, who claimed that the sum of the experiences of twelve observers, each listening to one of the twelve tones of a melody, is quite different from the experience of someone listening to the whole melody. Arnheim (1974) argues that, in a similar manner, vision is not a mechanical recording of individual elements, but rather the recognition of patterns. Consequently, much of the research has focused on visual pattern recognition.

According to Shneiderman (1997b), visual perception is underutilized by today's graphical user interfaces. His work on the HomeFinder and the FilmFinder demonstrated that users could find information faster with graphical user interfaces than with natural language queries, and that user comprehension and satisfaction was high for these interfaces (Shneiderman, 1997b). Furthermore, the literature suggests that there is evidence to support that humans recall pictures better than words (Faulkner, 1998).

Interface Metaphors

Interface metaphors are often discussed in HCI literature as they pertain to interface design. The use of an interface metaphor--such as the desktop and window--is widespread in computer software design as an ideal method for providing a quick and easy foundation for users to understand how applications work (Cohen, 1997). Interface metaphors work by exploiting previous user knowledge of a mental model (Helander, 1998; Klima, 1974). There are three main approaches to metaphor research: measuring behavioral effects, cognitive mappings between metaphor and meaning, and the constraints of context and goals when using particular metaphors (Helander, 1998).

In the literature on interface metaphors, critics claim that metaphors stand in the way of making new connections and associations (Nelson, 1995). Research in cognitive psychology supports the notion that using familiar representations is helpful, but can be detrimental to user behavior under specific conditions, particularly if the metaphor does not fit appropriately (Cohen, 1997). Nelson (1995) believed that metaphors are counterproductive because they keep designers from finding new design principles that might lead to a new conceptual organization. In a similar fashion, Oren (1995) saw the use of metaphor as a genre in which familiarity with images and conventions prevented users from taking a more active role.

Animation

Animation is another subject discussed in the HCI literature, usually addressed along with interface design issues. The term "animation" is used here not to describe drawn figures, but rather the movement of text or graphics on the computer screen; it is the use of graphic art occurring over time (Baecker & Small, 1995). Animation is not used as much as it could be in human-computer interaction. Many in the literature argue that it can be very effective in establishing mood, increasing the user's sense of identification, persuading, and explicating (Baecker & Small, 1995; Morris, Owen, & Fraser, 1994). Baecker and Small (1995) describe many specific uses for animation, including reviewing, identifying an application, emphasizing transitions to orient the user, providing choices in complex menus, demonstrating actions, providing clear explanations, giving feedback on computer status, showing navigation history, and providing guidance when a user needs help.

In terms of assessment, there is disagreement about the effectiveness of animation. Morris, Owen, and Fraser (1994) claimed that several studies have explored the effectiveness of animations in educational contexts, while Bederson (1998) argued there have been few studies providing clear evidence of the positive effects of animation for the user.

Organizational Issues

The literature on HCI also addresses issues having to do with how computers are used in organizations. Increasingly, HCI researchers are looking not just at the individual characteristics of the user, but at interactions among people mediated by computers (Malone, 1987; Faulkner, 1998; Wooffitt, Fraser, Gilbert, & McGlashan, 1997). Rather than focusing on the user, this approach looks at groups of users and how to design computer systems in such a way that they fit naturally and appropriately into human organizations (Malone, 1987). Malone also identified four basic aspects of the organizational issues in HCI: economic, structural, human relations, and political. Maddix (1990) saw the emphasis on organizational HCI analysis as rising as organizational changes lead to workgroups characterized by a collective mission instead of individuals. In fact, some argued that differences in users' interactions with systems are not the result of individual psychological and physical differences, but of socially structured differences (Wooffitt, Fraser, Gilbert, & McGlashan, 1997).

Artificial Intelligence

HCI literature also addresses the use of various forms of artificial intelligence (AI) in the service of users, including agents, text filtering, predictive text generation, and simulation. Agents are active and ever-present software components that perceive, appear to reason, act, and communicate (Huhns & Singh, 1998). Agents, also referred to as guides and personal assistants, first appeared in the form of travel agents helping users make their way through applications (Oren, Salomon, Kreitman, & Abbe, 1995). The key aspects of agents are anthropomorphic presentation, adaptive behavior, multi-modal dialogue, the ability to work with vague goal specifications (mixed initiative), supplying what the user needs, and working unattended (Shneiderman, 1995; Huhns & Singh, 1998). Also, agents suggest a natural way to present multiple voices and points of view (Oren, Salomon, Kreitman, & Abbe, 1995) and involve a degree of improvisation (Chapman, 1991). Many believe that human-human interaction is a good model for human-computer interaction and, consequently, look to agents as a perfect HCI solution (Shneiderman, 1997b).

Agents are viewed in two extreme ways, reflecting viewpoints on the degree of artificial intelligence used in their construction. One view sees agents as conscious, cognitive entities. The second major view is that agents are only programs responding to commands or command sets made in advance (Huhns & Singh, 1998). Applications involving information access, filtering, electronic commerce, education, and entertainment are becoming more prevalent and have in common a need for mechanisms for finding, fusing, using, presenting, managing, and updating information, all of which agents are intended to perform (Huhns & Singh, 1998). In recent years, much has been written on agents, as the trend has shifted from passive interfaces to active interfaces (Huhns & Singh, 1998).

Some designers promote the notion of adaptive and/or anthropomorphic agents who anticipate and carry out the users' intentions. The famous bow-tied, helpful young man in Apple Computer's 1987 video on the Knowledge Navigator, and Microsoft's unsuccessful Bob program are examples of early attempts at anthropomorphic computer agents.

Although the majority of the literature is highly optimistic about the promise of agents, some doubt that they will work because of the difficulties in understanding the context of information and the need for users to trust computer agents (Head, 1999; Shneiderman, 1995). Shneiderman (1995) argues that agents offer promise, but a good alternative to agents may be to expand the control-panel metaphor and establish personal preferences (Shneiderman, 1997b).

Text Filtering

Text filtering is another type of AI often mentioned in HCI literature. Text filtering may be one of the functions of an intelligent agent and is an information seeking process in which documents are selected for specific information needs (Oard & Marchionini, 1997; Shneiderman, 1997b). Luhn is credited with identifying a modern information filtering system and introducing the idea of a "Business Intelligence System" in 1958. In this system, library workers would create profiles for individual users and produce lists of new documents for each user (Oard & Marchionini, 1997). Selective Dissemination of Information (SDI) became a field and resulted in the creation of the Special Interest Group on SDI (SIG-SDI) of the American Society for Information Science. By 1969, 60 operational systems were being used, generally following Luhn's model (Oard & Marchionini, 1997).

Denning coined the term "information filtering" and broadened a discussion that had traditionally focused on the generation of information to include the reception of information as well. He described a need to filter information arriving by e-mail in order to separate urgent messages from routine ones and to customize to the needs of the user. Malone introduced an alternative approach called social or collaborative filtering, in which the selection of a document is based on annotations to that document made by previous readers (Oard & Marchionini, 1997).
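Luhn-style profile matching can be sketched as simple keyword overlap between user profiles and incoming documents. The profiles and documents in this Python sketch are invented for illustration; real filtering systems use far richer matching:

```python
# A minimal Luhn-style filter: each user has a profile of interest
# keywords; new documents are routed to users whose profiles they match.
profiles = {
    "alice": {"usability", "interface"},
    "bob": {"simulation", "training"},
}

documents = [
    "New results on interface usability testing",
    "Flight simulation for pilot training",
]

def route(documents, profiles):
    """Return, for each user, the list of documents matching their profile."""
    out = {user: [] for user in profiles}
    for doc in documents:
        words = set(doc.lower().split())
        for user, keywords in profiles.items():
            if words & keywords:          # any keyword overlap
                out[user].append(doc)
    return out

print(route(documents, profiles))
```

In Luhn's model the profiles were maintained by library workers; collaborative filtering, by contrast, would replace the keyword test with a score derived from previous readers' annotations.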

Predictive Text Generation

Predictive text generation is another form of artificial intelligence: a context-sensitive technique for enhancing expressive communication that suggests what the user might want to type next, on the basis of preceding input (Darragh & Witten, 1992). Predictive text generation is now familiar to many who use the latest Microsoft Office products. Many of the traditional uses for this form of HCI are for those with special needs. It works by accelerating typewritten communication with a computer system by predicting what the user is going to type next. Good touch-typists are likely to find predictive text generation a hindrance, but moderate to poor typists find it helpful, especially for highly structured text (Darragh & Witten, 1992).
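The basic mechanism can be approximated with a bigram frequency model over the user's previous text. This is a deliberately simplified Python sketch; real predictive systems use much richer, adaptive context models:

```python
from collections import Counter

# Predict the next word from what the user has typed so far, using
# bigram frequencies learned from the user's own previous text.
history = "the report is due the report needs review".split()

bigrams = Counter(zip(history, history[1:]))

def predict(prev_word):
    """Suggest the word most often seen after prev_word, or None."""
    candidates = {w2: n for (w1, w2), n in bigrams.items() if w1 == prev_word}
    if not candidates:
        return None
    return max(candidates, key=candidates.get)

print(predict("the"))    # 'report' (seen twice after 'the')
```

The payoff described by Darragh and Witten comes from accepting a correct suggestion with a single keystroke instead of typing the whole word, which is why highly structured, repetitive text benefits most.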

Visualization and Creativity

A final use of artificial intelligence in HCI repeatedly described in the literature is for visualization and creative endeavors. Shneiderman (1999b) described the need to support creativity as a challenge for HCI designers. His model, called "genex," includes four stages: (a) collecting previous works stored in digital libraries, (b) relating with peers and mentors at multiple stages, (c) creating through exploration and discovery, and (d) donating by disseminating the creative results to digital library collections (Shneiderman, 1999b). To this scheme, Shneiderman adds visualization, free association, and replaying histories as areas of needed research. He sees visualization as supporting creative work by enabling users to find relevant information and identify patterns (Shneiderman, 1999b). Further, important aspects of computer assistance with creativity include constructing meaningful overviews, zooming in on desired items, filtering out undesired items, and showing relationships among items (Shneiderman, 1999b).

North and Shneiderman (1999) propose the use of multiple coordinated views for exploring information creatively. Each view is a visualization of some part of the information, and views are tightly linked so that they operate together as a unified interface (North & Shneiderman, 1999). Spotfire, Xerox PARC's Perspective Wall, Yale computer science professor David Gelernter's LifeStreams, and LifeLines are other systems that take a similar approach to information exploration (Shneiderman, 1997).

Pickover (1991) is another of the main proponents of the use of computers as aids to imagination. He argues that computers provide an unparalleled aid for the imagination. He proposes visualization for scientific use, through both simple and advanced computer graphics, as a way to help understand complicated data (Pickover, 1991).

Simulation

Simulation is a major part of the literature on HCI, particularly as it applies to educational environments. A review of the literature shows the number of published simulation articles at approximately 200 for each of the years 1986 to 1990 (Pickover, 1991). Especially in educational environments, simulation can be a very effective HCI tool. Because of the rich multimedia computer environment, learners can better bridge the gap between reality and the simulated task. Learning by doing is accomplished through simulation and is especially useful where actual environments are expensive or impractical to recreate constantly (Feifer, 1994). Simulation is effective because it can create a context for learning (Feifer, 1994). Pickover (1991) encourages the use of the computer as an instrument for both simulation and discovery, particularly in science. Feifer (1994) argues that the difficulty of creating good simulations and the difficulty of learners using simulations by themselves are two factors limiting the use of computer simulations in teaching.

Schank (1997) argues that learning in computer simulations, or virtual learning, offers the best opportunity for students to learn by doing in an apprenticeship-type model. He states that one of the biggest issues for learners is that they have trouble failing in public, while computers offer the ability to fail privately. Schank focuses on the use of stories in simulators, and on both expert and non-expert storytelling in simulations. He argues that in workplace learning, stories are at the root of organizational knowledge. By simulating scenarios based on common organizational stories, employees can quickly acquire needed knowledge.


A review of the literature in HCI reveals many principles that should be embraced in the development of educational software. First, principles of human factors and usability need to be incorporated into educational software design. Shneiderman's (1987) five human factors--(a) time to learn, (b) speed of performance, (c) rate of errors, (d) subjective satisfaction, and (e) retention over time--are very useful in education. In particular, the philosophical approach of usability ties in very well with learner-centered educational approaches. Maddix's (1990) emphasis on the process aspect of usability, viewing computer systems as a totality, also has important ramifications for educational software, which currently often emphasizes individual drills and testing. Furthermore, the usability emphasis on supporting a range of user skills and needs is essential in education.

In looking at specific issues in HCI, interface metaphors, animation, and collaboration tools are relevant to education. Metaphors may stand in the way of making new connections and associations (Nelson, 1995; Oren, 1995). On the other hand, effects such as animation can be very effective in establishing mood, in increasing a sense of identification, for persuasion, and for explication (Baecker & Small, 1995; Morris, Owen, & Fraser, 1994). Also, some argue that integrated classroom tools should support collaborative processes (Norman, 1997; Shneiderman, 1998b). Networked classrooms enable a variety of collaboration opportunities (Shneiderman et al., 1995), and improved collaborative software could facilitate easier management of teams of learners (Shneiderman, 1998b).

The understanding of computers as a medium may be a key to re-envisioning educational software. Oren (1995) argues that understanding computers as a medium means enlarging HCI to include issues such as the psychology of media, evolution of genre and form, and the societal implications of media. Computers began to be used in educational environments much later than film, and some claim that we are still using computers, instructionally, at very low levels of sophistication (Gibbons & Fairweather, 1998). If computers are a new medium, what is unique about the computer medium? What are its specific advantages for education?

Gibbons and Fairweather (1998) identified five attributes that make the computer, as instructional medium, unique: (a) dynamic display, (b) ability to accept student input, (c) speed, (d) ability to select, and (e) flawless memory. One of the distinct advantages of learning in computer environments might be this ability to have a record of learning. Plaisant, Rose, Rubloff, Salter, and Shneiderman (1999) state that such a record of learning could help students monitor their behavior, reflect on their progress, and experiment with revisions of their experiences.

Some argue that computer environments are particularly useful in giving users rich learning experiences, a direct result of their media nature. Shaffer and Resnick (1999) describe a "thick authenticity" in computer simulation that is personally meaningful and connected to the real world. Shneiderman argues in a similar fashion that constructivist notions of learning as activity, exploration, and creation are well suited to the computer environment. His view is that traditional education is passive, and that computers offer an opportunity for engagement that is powerful and new. Shneiderman (1993) claimed that the constructivist approach to computer learning is very different from teaching machines, computer-assisted instruction, intelligent computer-assisted instruction, and intelligent tutoring systems. The constructivist view focuses on interactive learning environments and discovery learning (Shneiderman, 1993).


It is clear from the HCI literature review that education can learn a great deal from human factors, usability, and interface design approaches to software design. These areas need to be explored with education in mind. Furthermore, specific topic areas such as interface metaphors, the use of agents, predictive text generation, text filtering, and simulation are especially relevant to educational environments and should be pursued vigorously. In addition, the following are primary areas for research on the uses of HCI as media in education:

Research Media Properties of Computing

The whole area of HCI research focusing on computers as a medium is especially important for education. Kay (1995) quoted McLuhan in suggesting that if the personal computer is a truly new medium then the very use of it will change cultural and individual thought patterns. Could the development of this new medium change education? Kay (1995) argued that for users to receive messages embedded in a medium, they need to have internalized the medium. While American film has developed an elaborate code over the years, the conventions of which are clearly understood by the general viewing audience, computers have yet to develop such complex viewing conventions. We need to develop these computer viewing/using conventions, particularly for learners.

Although there are similarities between computers and other media (film in particular), there are unique properties as well that transform the user. Printed books transformed society by allowing readers to preserve and share information. The computer and digital communications are again transforming society in a similarly large way. The important questions, then, are: what are the specific properties of the new medium, and what content is best delivered through it? Westland (1994) asks, "if the medium is the message, then what kinds of messages are facilitated by multimedia?" (p. 359).

Identify Other Media Parallels to Computers

Another key research question is, if the computer is a medium, which existing medium is most relevant to it? Nelson (1995) argued that filmmakers have the greatest experience in working with psychological and visual effects on screens, and that film is therefore the medium most relevant to the computer. Nelson points out that particular talents are required for the effective use of both computers and film as media, most importantly a unifying vision. Baecker and Small (1995) agree with Nelson in arguing that designers should look to the language of cinema for models of how computer interfaces are structured. Cinematic theory and conventions developed from American film, integrated with early computer conventions (especially from video games), might lead to the establishment of a computer medium language (Westland, 1994). Such a language is essential for the further development of educational software.

Understand Transitions

One obvious application of this parallel between computers and film is in transitions such as the cut, fade in, fade out, dissolve, wipe, and overlay, and in cinematic effects such as multiple exposure, panning, zooming in, and emphasized camera angles (Baecker & Small, 1995). However, some feel that film-style transitions are overused in current software, and used without specific intentional meaning (Westland, 1994). Westland (1994) argued that the parallel editing style of film--in which two sequences are intercut in order to build tension and contrast content--should be used to organize content in software. Film as a language has developed to the point that it understands the importance of context and order (or syntax) of images (Worth, 1981). The film viewer assumes intentionality, while this may not be the case with the computer medium because of interactivity and user control. What are the editing principles of the new computer medium? How can they best be used in educational environments?

Levels of Media Understanding

Oren (1995) suggests that, as with more developed forms of media such as written language, there may be levels of language use and understanding. Just as a novel can be understood on the superficial level of plot as well as at a deeper textual level, computers as a medium may develop levels of convention and sophistication (Oren, 1995). What does this mean for teaching strategies in this new medium?

Understand the Nature of Interaction

One clear difference in computers, as opposed to film, is user interaction and the ability to manipulate symbols. Oren (1995) describes the computer as a "metamedium" in that it can involve the manipulation of various kinds of media by the user. Interactivity is a key aspect of the computing medium and needs to be an object of much more study.

Investigate Better Use of Sound

Another large difference between film and computers is the use of sound (Westland, 1994). While over the last 20 years of American cinema directors such as Spielberg and Lucas have focused on the narrative uses of sound, most computer applications vastly underuse it; computers thus far heavily emphasize the visual message over the auditory (Westland, 1994). Research needs to be performed on the use of sound in educational software.

Finally, why is the computer-as-medium notion important for HCI in education? It redirects and broadens further research to include not only a notion of how humans interact with computers, but how humans interact with a new medium. In the future, a more accurate label for HCI research might be HCMI, human-computer media interaction. As a new medium, computers may finally realize the potential of educational technology to transform education.

References
Adler, P.S. & Winograd, T.A. (1992). Usability: Turning technologies into tools. Oxford, UK: Oxford University.

Arnheim, R. (1974). Art and visual perception: The psychology of the creative eye. Berkeley, CA: University of California.

Baecker, R., & Small, I. (1995). Animation at the interface. In B. Laurel, (Ed.) The art of human-computer interface design. Reading, MA: Addison-Wesley.

Bederson, B.B. (1998). Does animation help users build mental maps of spatial information? Computer Science Department Human-Computer Interaction Lab, University of Maryland. Unpublished.

Black, J.B., Kay, D.S., & Soloway, E.M. (1987). Goal and plan knowledge representation. In J.M. Carroll, (Ed.), Interfacing thought: Cognitive aspects of human-computer interaction. Cambridge, MA: The MIT Press.

Card, S.K., Moran, T.P., & Newell, A. (1983). The psychology of human-computer interaction. Hillsdale, NJ: Lawrence Erlbaum.

Carey, J. M. (Ed). (1991). Human factors in information systems: An organizational perspective. Norwood, NJ: Ablex.

Chapman, D. (1991). Vision, instruction, and action. Cambridge, MA: The MIT Press.

Cohen, A.D. (1997). The value of metaphors in conceptual user-interface design. Unpublished dissertation, Evanston, IL: Northwestern University.

Cook, T.D., Appleton, H., Conner, R.F., Shaffer, A., Tamkin, G., & Weber, S. (1975). Sesame Street revisited. New York: Russell Sage Foundation.

Crook, C. (1994). Computers and the collaborative experience of learning. London: Routledge.

Cuban, L. (1986). Teachers and machines: The classroom use of technology since 1920. New York: Teachers College.

Darragh, J.J. & Witten, I. H. (1992). The reactive keyboard. Cambridge, UK: Cambridge University.

DeCecco, J.P. (1964). Educational technology: Readings in programmed instruction. New York: Holt, Rinehart, and Winston.

Faulkner, C. (1998). The essence of human-computer interaction. New York: Prentice Hall.

Feifer, R.G. (1994). Cognitive issues in the development of multimedia learning systems. In S. Reisman (Ed.), Multimedia computing: Preparing for the 21st century. Harrisburg, PA: Idea Group Publishing.

Frasson, C., & Gauthier, G. (1990). Intelligent tutoring systems: At the crossroads of artificial intelligence and education. New Jersey: Ablex.

Gibbons, A.S., & Fairweather, P.G. (1998). Computer-based instruction: Design and development. Englewood Cliffs, NJ: Educational Technology.

Goldman, F., & Burnett, L.R. (1971). Need Johnny read? Practical methods to enrich humanities courses using films and film study. Dayton, OH: Pflaum.

Haber, N., & Hershenson, M. (1973). The psychology of visual perception. New York: Holt, Rinehart and Winston.

Head, A. (1999). Design wise: A guide for evaluating the interface design of information resources. Medford, NJ: Cyberage Books.

Helander, M. (Ed). (1998). Handbook of human-computer interaction. Amsterdam: North-Holland.

Hoban, C.F. (1942). Focus on learning: Motion pictures in the school. Washington, DC: American Council on Education.

Huhns, M.N., & Singh, M.P. (Eds.). (1998). Readings in agents. San Francisco: Morgan Kaufmann.

Kay, A. (1995). User interface: A personal view. In B. Laurel (Ed.). The art of human-computer interface design. Reading, MA: Addison-Wesley.

Klima, G. (1974). Multi-media and human perception. New York: Meridian Press.

Maddix, F. (1990). Human-computer interaction: Theory and practice. New York: Simon & Schuster.

Malone, T.W. (1987). Computer support for organizations: Toward an organizational science. In J.M. Carroll (Ed.). Interfacing thought: Cognitive aspects of human-computer interaction. Cambridge, MA: MIT Press.

McKnight, C., Dillon, A., & Richardson, J. (1991). Hypertext in context. Cambridge, UK: Cambridge University.

McLuhan, M. (1964). Understanding media: The extension of man. New York: McGraw-Hill.

Metlitzky, L. (1999). Bridging the gap for the mainstream faculty: Understanding the use of technology in instruction. Unpublished dissertation, Claremont, CA: Claremont Graduate University.

Morris, J.M., Owen, G.S., & Fraser, M.D. (1994). Practical issues in multimedia user interface design for computer-based instruction. In S. Reisman (Ed). Multimedia computing: Preparing for the 21st century. Harrisburg, PA: Idea Group.

Mountford, S.J. (1995). Tools and techniques for creative design. In B. Laurel (Ed.). The art of human-computer interface design. Reading, MA: Addison-Wesley.

Nelson, T.H. (1995). The right way to think about software design. In B. Laurel (Ed.). The art of human-computer interface design. Reading, MA: Addison-Wesley.

Nielsen, J. (1990). Hypertext & hypermedia. Boston: Academic Press.

Norman, D.A. (1998). The invisible computer. Cambridge, MA: The MIT Press.

Norman, D.A. (1987). Cognitive engineering--cognitive science. In J.M. Carroll (Ed.). Interfacing thought: Cognitive aspects of human-computer interaction. Cambridge, MA: The MIT Press.

Norman, D.A. (1988). The design of everyday things. New York: Currency Doubleday.

Oard, D.W., & Marchionini, G. (1997). The state of the art in text filtering. User modeling and user-adapted interaction. ( 1/filter.html)

Oren, T. (1995). Designing a new medium. In B. Laurel (Ed.), The art of human-computer interface design. Reading, MA: Addison-Wesley.

Oren, T., Salomon, G., Kreitman, K., & Abbe, D. (1995). Guides: Characterizing the interface. In B. Laurel (Ed.), The art of human-computer interface design. Reading, MA: Addison-Wesley.

Osgood, R.E. (1994). The conceptual indexing of conversational hypertext. Unpublished dissertation, Evanston, IL: Northwestern University.

Pickover, C.A. (1991). Computers and the imagination. New York: St. Martin's.

Preece, J., & Shneiderman, B. (1995). Survival of the fittest: The evolution of multimedia user interfaces. Unpublished dissertation, College Park: University of Maryland. ( ml/96-02.html)

Reason, J. (1990). Human error. Cambridge, UK: Cambridge University Press.

Reisner, P. (1987). Discussion: HCI, what is it and what research is needed? In J.M. Carroll (Ed.), Interfacing thought: Cognitive aspects of human-computer interaction. Cambridge, MA: MIT Press.

Schank, R.C. (1997). Virtual learning. New York: McGraw-Hill.

Schillaci, A., & Culkin, J.M. (Eds.). (1970). Films deliver: Teaching creatively with film. New York: Citation Press.

Shaffer, D.W., & Resnick, M. (1999). "Thick" authenticity: New media and authentic learning. Journal of Interactive Learning Research, 10(2).

Shneiderman, B. (1999). Universal usability: Pushing human-computer interaction research to empower every citizen. Position paper for National Science Foundation & European Commission meeting on human-computer interaction research agenda, Toulouse, France. To be published in book form. ( 17html/99-17 .html)

Shneiderman, B. (1999b). Supporting creativity with advanced information abundant user interfaces. Position paper for National Science Foundation & European Commission meeting on human-computer interaction research agenda, Toulouse, France. To be published in book form. ( ml/99-16.html)

Shneiderman, B. (1998). Codex, memex, genex: The pursuit of transformational technologies. International Journal of Human-Computer Interaction, 10(2), 87-106. ( Bibliography/3862HTML/3862.html)

Shneiderman, B. (1997b). Direct manipulation for comprehensible, predictable and controllable user interfaces. Proceedings ACM International Workshop on Intelligent User Interfaces '97, ACM, New York, NY, 33-39. ( tml)

Shneiderman, B. (1995). Looking for the bright side of user interface agents. ACM Interactions, 2(1), p. 13-15.

Shneiderman, B. (1993, June). Education by engagement and construction: Experiences in the AT&T teaching theater. Keynote for ED-MEDIA 93, Orlando, FL.

Shneiderman, B. (1987). Designing the user interface. Reading, MA: Addison-Wesley.

Sime, M.E., & Coombs, M.J. (1983). Designing for human-computer communication. New York: Academic Press.

Westland, J.C. (1994). Cinema theory, video games, and multimedia production. In S. Reisman (Ed), Multimedia computing: Preparing for the 21st century. Harrisburg, PA: Idea Group.

Wooffitt, R., Fraser, N.M., Gilbert, N., & McGlashan, S. (1997). Humans, computers, and wizards. New York: Routledge.

Worth, S. (1981). Studying visual communication. Philadelphia: University of Pennsylvania Press.
