Younho Seong and Ann M. Bisantz
State University of New York at Buffalo
Automation has played an important role in supporting human and system performance in complex modern systems, such as aviation and process control. The advent of automation has changed the role of the human operator from direct manual control to the management of different levels of computer control. Human operators act as supervisory controllers, interacting with the system through different levels of manual and automatic control (Sheridan & Johannsen, 1976). Therefore, the human operator must understand how to interact with the automated system, how the automation works, how to respond to system outputs, and how and when to intervene if the process fails. One factor affecting this interaction is the operator's trust in the automated system. Sheridan (1980) emphasized that human trust in automation plays a key role in determining an operator's reliance on, degree of intervention in, and appropriate use of automation.
Trust has been studied mainly from a sociological perspective, focused on interpersonal relationships between individuals. Building on these sociological definitions, more recent studies (Lee & Moray, 1992; Muir & Moray, 1996) have constructed models of human operators' trust in automated systems and shown how trust in automated process control systems may affect system performance. These studies have focused on determining the extent to which operators' trust in machines affects system performance and, if so, on identifying factors that influence the level of that trust.
An important concept regarding human trust is calibration: operators must hold a level of trust in the information or automated system that is appropriate to the characteristics of the situation. As Muir (1994) indicated, "well-calibrated" operators are better able to utilize automated systems. In the case of aided adversarial decision-making systems, understanding how well an operator judges the level of data integrity, based on the observable characteristics of the situation, becomes a critical issue.
We propose that Brunswik's Lens Model of human judgment (Brunswik, 1955; Hammond, Stewart, Brehmer, & Steinmann, 1975) may be useful in formalizing the study of trust. The Lens Model provides dual models of a human judge and of the environment being judged, and allows assessment of the extent to which an individual's judgment behavior captures the structure of the environment. This extent describes how well an operator's trust in information is calibrated to the actual situation, as characterized by the relationship between observable cues and the actual integrity of the information.
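To make this assessment concrete, the classical lens model equation (see Hammond et al., 1975) offers one standard way to quantify such calibration; the notation below is a conventional summary of that literature rather than a formulation specific to this study:

$$ r_a = G\,R_e\,R_s + C\,\sqrt{1 - R_e^{2}}\,\sqrt{1 - R_s^{2}} $$

Here $r_a$ (achievement) is the correlation between the judge's responses and the criterion, which in this setting would be the correlation between the operator's trust judgments and the actual integrity of the information; $R_e$ (environmental predictability) is the multiple correlation between the observable cues and the criterion; $R_s$ (consistency) is the multiple correlation between the cues and the judgments; $G$ (knowledge) is the correlation between the predictions of the two linear models; and $C$ captures agreement not explained by the linear models. Under this reading, a well-calibrated operator is one whose $G$ and $R_s$ are high relative to what the environment's $R_e$ permits.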
Rempel, Holmes, and Zanna's (1985) definition of trust contains critical aspects that can be used to examine human trust in automation from a human factors perspective. They emphasized not only the components of interpersonal trust but also its dynamic character, regarding trust as a generalized expectation related to the subjective probability that an individual assigns to the occurrence of some set of future events (Rempel et al., 1985). That is, their study suggested that humans evaluate their partners based on the characteristics they observe. Therefore, these characteristics