Finally, error monitoring should not depend on which functions happen to be enabled, as it does in traditional avionics.
To consider GPWS again, its processing has no concept of consequences. The designers, of course, knew that flight below a certain altitude could have severe consequences, but none of that consequential reasoning is present in the GPWS unit itself. It merely compares the radar altitude to a threshold and sets off an alarm if the threshold is crossed. As a result, GPWS generates many false alarms, at least when judged against the true state of the aircraft with respect to the ground. In other words, if the aircraft continues on its current trajectory, how far is it from the edge of the cocoon? The GPWS has no representation of the cocoon boundary.
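The threshold-only logic described above can be sketched in a few lines. This is an illustrative abstraction, not the actual GPWS implementation; the function name, units, and threshold are assumptions for the sketch.

```python
# Illustrative sketch of threshold-only alerting (not actual GPWS code).
# Name and units are hypothetical.

def gpws_alert(radar_altitude_ft: float, threshold_ft: float) -> bool:
    """Alarm whenever radar altitude falls below the threshold.

    No trajectory, intent, or consequence enters the decision, which
    is why such logic alarms even during deliberate low-altitude flight.
    """
    return radar_altitude_ft < threshold_ft
```

Because the comparison is context-free, the alarm fires identically whether the aircraft is descending toward terrain or flying an intentional low approach.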
The point is that a situation is a hazard only if its potential consequences are severe. Evaluating errors therefore requires a structural orientation toward consequences within the monitor itself. Other approaches that have been tried include monitoring for omission of prescribed actions and human error theory. Experience with omission monitoring is that the severity of an error is usually unknown without further information about its consequences. Human error theory can suggest how an error might be repaired (e.g., omission and repetition errors are somewhat self-diagnosing) or explain why it happened. Understanding the cause of an error may be useful to the designer or to the pilot in a debrief, but it serves little purpose in alerting the pilot to a serious error (Greenberg, Small, Zenyuh, & Skidmore, 1995).
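By contrast, a consequence-oriented monitor would project the current trajectory and ask how soon it penetrates the edge of the protective cocoon. The following is a minimal sketch under strong simplifying assumptions: a vertical-only trajectory model, a flat cocoon floor, and hypothetical names, units, and warning time.

```python
# Hypothetical consequence-oriented check: project the vertical
# trajectory and alarm only when the cocoon floor would be
# penetrated soon. All names, units, and thresholds are
# illustrative assumptions, not a fielded algorithm.
from typing import Optional

def minutes_to_floor(altitude_ft: float, vertical_speed_fpm: float,
                     floor_ft: float) -> Optional[float]:
    """Time until the projected trajectory crosses the cocoon floor,
    or None if the aircraft is level or climbing (not converging)."""
    if altitude_ft <= floor_ft:
        return 0.0  # already at or below the floor
    if vertical_speed_fpm >= 0:
        return None  # level or climbing: no projected penetration
    return (altitude_ft - floor_ft) / -vertical_speed_fpm

def is_hazard(altitude_ft: float, vertical_speed_fpm: float,
              floor_ft: float, warning_min: float = 1.0) -> bool:
    """Hazard only if the projected consequence (floor penetration)
    is both real and imminent."""
    t = minutes_to_floor(altitude_ft, vertical_speed_fpm, floor_ft)
    return t is not None and t <= warning_min
```

Unlike the bare threshold comparison, this sketch distinguishes a descent that will breach the cocoon within the warning window from level flight at the same altitude, which is the essence of evaluating a situation by its consequences.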
A high-level architecture for an intelligent interface has been described. The description represents a family of solutions, not an individual solution. The model structures described provide a sufficient framework for dealing with the problems of automation. One key property of the intelligent interface is that it increases the level of intelligence in the avionics to correspond more nearly with the authority already granted. Historically, the intelligent interface represents the next generation of automation that is built on the current layers of flight management systems and autopilots. The purpose of the intelligent interface is to support the pilot's decision making. This differs from the purpose of traditional automation, which is to automate tasks for the pilot.
System engineering becomes an essential effort for any system constructed with an intelligent interface. Building an intelligent interface component requires a thorough understanding of the purpose, benefits, and employment of each subsystem to be installed on the aircraft. This understanding is part of the system engineering effort because it must be captured through knowledge engineering about each subsystem. The questions asked include:
What are the effects of using the subsystem, in each of its modes, on the aircraft and its environment? This question is aimed at producing a device-level model of the subsystem.
When is it appropriate to use the subsystem?