Academic journal article Human Factors

Factors Influencing Analysis of Complex Cognitive Tasks: A Framework and Example from Industrial Process Control

Article excerpt

We propose that considering four categories of task factors can facilitate knowledge elicitation efforts in the analysis of complex cognitive tasks: materials, strategies, knowledge characteristics, and goals. A study was conducted to examine the effects of altering aspects of two of these task categories, materials and goals, on problem-solving behavior across skill levels. Two versions of an applied engineering problem were presented to expert, intermediate, and novice participants. Participants were to minimize the cost of running a steam generation facility by adjusting steam generation levels and flows. One version was cast in the form of a dynamic, computer-based simulation that provided immediate feedback on flows, costs, and constraint violations, thus incorporating key variable dynamics of the problem context. The other version was cast as a static computer-based model, with no dynamic components, cost feedback, or constraint checking. Experts performed better than the other groups across material conditions, and, when required, the presentation of the goal assisted the experts more than the other groups. The static group generated richer protocols than the dynamic group, but the dynamic group solved the problem in significantly less time. Little effect of feedback was found for intermediates, and none for novices. We conclude that demonstrating differences in performance in this task requires different materials than explicating underlying knowledge that leads to performance. We also conclude that substantial knowledge is required to exploit the information yielded by the dynamic form of the task or the explicit solution goal. This simple model can help to identify the contextual factors that influence elicitation and specification of knowledge, which is essential in the engineering of joint cognitive systems.
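The dynamic task condition described in the abstract can be sketched in code: participants adjust steam generation levels and receive immediate feedback on total cost and constraint violations. The sketch below is illustrative only; the generator capacities, unit costs, and demand figures are hypothetical and do not come from the study.

```python
# Illustrative sketch of the dynamic task condition: adjust generation
# levels, receive immediate feedback on cost and constraint violations.
# All parameter values are hypothetical, not taken from the experiment.

def evaluate(levels, capacities, unit_costs, demand):
    """Return (total_cost, violations) for a set of generation levels."""
    violations = []
    # Each generator must operate within [0, capacity].
    for i, (lvl, cap) in enumerate(zip(levels, capacities)):
        if lvl < 0 or lvl > cap:
            violations.append(f"generator {i}: level {lvl} outside [0, {cap}]")
    # Total steam produced must meet demand.
    total_steam = sum(levels)
    if total_steam < demand:
        violations.append(f"demand shortfall: {demand - total_steam}")
    # Cost feedback: sum of level * unit cost across generators.
    total_cost = sum(lvl * c for lvl, c in zip(levels, unit_costs))
    return total_cost, violations

# A feasible setting: cost is reported, no constraints are violated.
cost, problems = evaluate(levels=[60, 40], capacities=[80, 50],
                          unit_costs=[2.0, 3.5], demand=100)
```

The static condition, by contrast, would present only the problem structure (generators, capacities, costs, demand) without this evaluate-and-report loop, leaving cost computation and constraint checking to the participant.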


There has been much discussion of the apparent fact that different task properties induce different cognitive activities and performance (e.g., Hammond, 1988; Hammond, Hamm, Grassia, & Pearson, 1987; Simon, 1996). However, little empirical research has investigated what happens to the knowledge elicitation outcomes when the parameters of the cognitive task analysis method are systematically varied. In this article we report on such an experiment.

The construction of effective joint cognitive systems often necessitates various and detailed investigations into how humans reason from a variety of perspectives (e.g., Hoc, Cacciabue, & Hollnagel, 1995; Vicente, 1999). The motive is to explicate the knowledge of a human expert (or knowledgeable collaborator) so a computational infrastructure for a similarly (though not necessarily equivalently) functioning system may be designed and constructed (Buchanan & Smith, 1989; Woods, 1994).

There are other objectives in the development of joint cognitive systems, such as examining how a group of referent experts varies in reasoning (Prietula & March, 1991); examining how experts develop or adapt to flexible strategies (Feltovich, Johnson, Moller, & Swanson, 1984; Feltovich, Spiro, & Coulson, 1997); gaining insights into aspects of human-computer interaction within a system in general (Card, 1996; Card, Moran, & Newell, 1983); addressing interaction issues such as explanation capabilities or trust (Lerch, Prietula, & Kulik, 1997); and defining, preserving, controlling, and disseminating organizational knowledge (Borron, Morales, & Klahr, 1996; Klein, 1992).

The explication of human reasoning can be difficult and time consuming and can yield results that are difficult to interpret. Nevertheless, human reasoning must be examined if designers are to satisfy one or more of the objectives just described. Such analyses not only inform designers; they also sometimes suggest criteria against which the success of the delivered system can be judged. Efforts have been made to bring together some of the major methods and practices of knowledge elicitation (e.g., …
