To provide situational context to the paged analyst, NASA-Goddard has developed a number of visualization techniques within the Visual Analysis Graphical Environment (VisAGE). Available visualizations include 2-dimensional (2D) and 3-dimensional (3D) representations generated from test data. For example, the set includes 2D and 3D alphanumeric representations of the data, 2D and 3D histograms, 2D and 3D strip charts, and a 2D pie chart.
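As an illustration only (VisAGE's own rendering code is not shown here, and the data are simulated), the following Python sketch renders the same telemetry stream as a 2D histogram and as a 3D histogram, the kind of paired representation contrasted in the studies described below.

    import numpy as np
    import matplotlib.pyplot as plt

    # Simulated telemetry; parameter names and values are hypothetical.
    rng = np.random.default_rng(0)
    voltage = rng.normal(28.0, 0.5, 2000)   # bus-voltage samples (V)
    temp = rng.normal(21.0, 2.0, 2000)      # temperature samples (deg C)

    fig = plt.figure(figsize=(10, 4))

    # 2D histogram: distribution of one telemetry parameter.
    ax2d = fig.add_subplot(1, 2, 1)
    ax2d.hist(voltage, bins=40)
    ax2d.set_xlabel("bus voltage (V)")
    ax2d.set_ylabel("sample count")
    ax2d.set_title("2D histogram")

    # 3D histogram: joint distribution of two parameters as a bar surface.
    ax3d = fig.add_subplot(1, 2, 2, projection="3d")
    counts, v_edges, t_edges = np.histogram2d(voltage, temp, bins=12)
    v_pos, t_pos = np.meshgrid(v_edges[:-1], t_edges[:-1], indexing="ij")
    ax3d.bar3d(v_pos.ravel(), t_pos.ravel(), np.zeros(counts.size),
               np.diff(v_edges).mean(), np.diff(t_edges).mean(), counts.ravel())
    ax3d.set_xlabel("voltage (V)")
    ax3d.set_ylabel("temperature (deg C)")
    ax3d.set_zlabel("count")
    ax3d.set_title("3D histogram")

    plt.tight_layout()
    plt.show()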
During planned usability studies, operational personnel will interact with the VisAGE tool and respond to items on a questionnaire. Participants will be asked to describe the information they consult in isolating and resolving anomalies in their own operational environments and to answer factual questions based on the visualizations displayed to them. These questions will test the effectiveness of the various visualizations in conveying information to an operator/analyst in a simulated lights-out context. To assess their levels of trust in automation, participants will be pre- and post-tested on their attitudes toward automation. A similar set of evaluations will be conducted on the VisAGE tool with undergraduate students at the University of Maryland. Measures of all participants' performance will include the accuracy of extracted information and response time.
A research environment under development in the LAP mimics some of the LOGOS capabilities. Known as MOCHA (Mars Observer Calls Home Again), it assumes the existence of an agent community that is capable of monitoring spacecraft operations and of paging a human expert when necessary. In experimental trials to be conducted in the coming months, three independent variables will be manipulated: 1) the level of automated aid provided to the participant, who plays the role of the off-site analyst; 2) the selector of the visual representation(s) viewed by the participant; and 3) agent reliability.
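To make the paging scenario concrete, the following is a minimal sketch, not the MOCHA or LOGOS implementation, of an agent that checks incoming telemetry against a limit table and pages a human expert when a value falls out of bounds; the parameter names and limits are hypothetical.

    from dataclasses import dataclass

    @dataclass
    class Limit:
        low: float
        high: float

    # Hypothetical parameters and limits; real limit tables are mission-specific.
    LIMITS = {"bus_voltage": Limit(26.0, 30.0), "battery_temp": Limit(0.0, 35.0)}

    def page_analyst(parameter: str, value: float) -> None:
        # Stand-in for the paging mechanism (pager, e-mail, web notification).
        print(f"PAGE: {parameter} = {value} is out of limits; analyst assistance requested")

    def monitor(sample: dict) -> None:
        # Check one telemetry sample against the limit table and page on violations.
        for parameter, value in sample.items():
            limit = LIMITS.get(parameter)
            if limit is not None and not (limit.low <= value <= limit.high):
                page_analyst(parameter, value)

    monitor({"bus_voltage": 31.2, "battery_temp": 22.5})   # pages on bus_voltage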
Automated aid will have three levels: 1) one 2D representation of the data; 2) one 3D representation of the data; and 3) a 2D and a 3D representation of the data. The selector will be either the participant (self) or a simulated software agent (automated). Agent reliability will vary between 50% and 75% diagnostic accuracy. Each participant will be assigned at random to an experimental condition, as sketched below. The task will be to confirm or disconfirm the existence of the agent-reported anomaly and to assign a level of confidence to that decision. Dependent variables will include the accuracy of the confirm/disconfirm decision, response time, and the reported level of confidence.
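The resulting between-subjects design can be summarized in a short sketch; the condition labels are paraphrased from the description above, and the assignment procedure shown is illustrative only.

    import itertools
    import random

    # Factor levels paraphrased from the design described above.
    AID = ["2D only", "3D only", "2D and 3D"]
    SELECTOR = ["self", "automated"]
    RELIABILITY = [0.50, 0.75]

    # 3 x 2 x 2 = 12 experimental cells.
    CONDITIONS = list(itertools.product(AID, SELECTOR, RELIABILITY))

    def assign(participant_id: int) -> tuple:
        # Random assignment of a participant to one condition (illustrative only).
        return random.choice(CONDITIONS)

    for pid in range(4):
        aid, selector, reliability = assign(pid)
        print(f"participant {pid}: aid={aid}, selector={selector}, "
              f"reliability={reliability:.0%}")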
It is expected that the two-representation condition will support better performance than either of the one-representation conditions, but only if the 2D and 3D displays provide different information. Results on the dependent measures are also expected to correlate significantly with trust in automation and spatial visualization ability (SVA), both of which will be assessed before the experimental trials: SVA by a standard test administered online, and trust by questionnaire. Finally, it is expected that software agents will facilitate performance when participants report high trust in the automation.
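A sketch of the planned correlational analysis is shown below, run on placeholder scores generated at random; the actual measures, scales, and sample sizes are not specified here, and no real data are represented.

    import numpy as np
    from scipy import stats

    # Placeholder scores generated at random; purely illustrative.
    rng = np.random.default_rng(1)
    trust = rng.uniform(1, 7, 30)       # hypothetical trust-questionnaire scores
    sva = rng.uniform(0, 100, 30)       # hypothetical SVA test scores
    accuracy = 0.4 + 0.03 * trust + 0.002 * sva + rng.normal(0, 0.05, 30)

    # Pearson correlation of a dependent measure with each pre-test covariate.
    for name, covariate in [("trust in automation", trust), ("SVA", sva)]:
        r, p = stats.pearsonr(accuracy, covariate)
        print(f"accuracy vs {name}: r = {r:.2f}, p = {p:.3f}")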
Autonomous capabilities in complex, automated systems take the human operator/analyst beyond supervisory control into a new role, that of off-site, expert troubleshooter. This new paradigm requires a shift in framing metaphors, from out-of-the-loop process control to the medical model. Consideration of cognitive issues will help to ensure adequate design support for the paged analyst. Research underway at NASA-Goddard and the University of Maryland includes usability studies and experiments designed to investigate human performance issues in the context of autonomous spacecraft operations. Results of this research will inform the design of future autonomous ground systems and of the tools that support the off-site analyst.