Paul C. Schutte and Anna C. Trujillo NASA Langley Research Center
Over 70% of all aircraft accidents have been attributed to human error (Boeing, 1996). These errors are often mistakes in reasoning, what Reason calls rule-based and knowledge-based errors (Reason, 1990). Most of these mistakes are believed to stem from poor situation awareness (Endsley, 1995), and correct recognition of the situation often leads almost directly to the correct response (Klein et al., 1993). When flight crews follow procedures that correspond to the specific system malfunction alert provided by an alerting system (e.g., Boeing EICAS), they are very likely to do the right thing. The number of accidents averted by this improved rule-based response has not been documented, but the decrease in the accident rate after the introduction of improved alerts and their associated procedures is not likely to be coincidental. Thus, airframe and avionics manufacturers have gone to great lengths to account for possible failure modes and to develop corresponding checklists (i.e., procedures).
However, not all failures can be anticipated. One such unexpected event occurred in 1989, when United Airlines Flight 232, a DC-10, lost all hydraulics following a failure of its tail-mounted engine (NTSB, 1990). Because no procedures existed for this failure, the crew had to determine the most appropriate response on their own. While the crew's performance was heroic, some evidence suggests that they were not aware of the total ineffectiveness of the wheel and column and, most importantly, of the excess drag on one side of the aircraft. The crew was using asymmetric thrust to maintain straight and level flight, but on landing they reduced the throttles together. This allowed the asymmetric drag to dominate the dynamics of the aircraft, causing it to cartwheel. Had the crew maintained asymmetric thrust, the aircraft might have been able to land safely.
In 1984, a group of researchers at NASA Langley Research Center set out to develop a decision aid that would assist flight crews in dealing with inflight system failures. The resulting concept, Faultfinder, provided explicit information for skill-based (i.e., monitoring) and knowledge-based (i.e., model-based) reasoning in order to augment the existing rule-based (i.e., procedural) reasoning. Evaluations showed that the concept could correctly detect and diagnose failures (Schutte, 1989; Shontz et al., 1993). But could the flight crew use this information in an unanticipated and untrained-for scenario? Could this information provide operational value (e.g., savings in equipment, fuel, or time)? The answers to these questions are pivotal to the decision to invest time and money in developing such concepts for commercial use. This paper describes an evaluation of the Faultfinder fault management concept in a full-mission, full-workload simulation.
The Faultfinder fault management concept was developed to enhance the flight crew's understanding of novel failures (Schutte & Abbott, 1986). Faultfinder addresses two of the four fault management tasks (Rogers et al., 1997), each instantiated in a computer prototype: fault monitoring, which determines whether an abnormality has occurred, and fault diagnosis, which determines why an abnormality has occurred.
The fault management process is usually triggered by the occurrence of symptoms or events. These symptoms indicate, in some degree of detail, that a parameter is abnormal rather than normal. To detect
Publication information: Book title: Automation Technology and Human Performance: Current Research and Trends. Contributors: Mark W. Scerbo - Editor. Publisher: Lawrence Erlbaum Associates. Place of publication: Mahwah, NJ. Publication year: 1999. Page number: 240.