The introduction of technology into modern work settings is quickly moving well beyond computer interfaces to include various forms of automation designed to capture people's attention, help them make decisions, and so on. The three chapters in this section present studies addressing these issues from the perspective of the lens model, its extensions, and empirical findings on the effects of feedback on human learning and performance. In chapter 7, Pritchett and Bisantz present a study of human interaction with alerting automation in an aviation collision detection task. As with the chapters in the previous section, a more sophisticated version of the lens model than is typical had to be created for analysis and modeling, in order to capture not only the relationships between human judgments and a task ecology but also the relationships between these entities and a variety of automated alerting algorithms. Using such an n-system lens model, Pritchett and Bisantz were able to identify the information used (and, just as important, unused) by their participants in making collision detection judgments, and to examine the coherence, and lack of coherence, between the judgment strategies used by their human participants and those embedded within alerting automation. Their technique has potential applications beyond aviation to include alerting systems in human–computer interaction, automobiles, health care, and the like.
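The kind of comparison described above can be sketched in miniature. The following is an illustrative Python example, not the authors' actual analysis: the cue names, weights, and noise levels are invented. It fits linear policy models to a simulated human judge, a simulated alerting algorithm, and the ecology, then computes classical lens model parameters (achievement r_a, knowledge G, and the linear predictabilities R_s and R_e) and a policy-similarity index between the human and the automation.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical ecology: three cues (say, range, closure rate, bearing)
# linearly predicting a criterion (actual collision risk).
# All weights below are illustrative, not taken from the study.
n = 200
cues = rng.normal(size=(n, 3))
criterion = cues @ np.array([0.6, 0.5, 0.2]) + rng.normal(scale=0.5, size=n)
# A human judge who (in this sketch) ignores the third cue.
judgment = cues @ np.array([0.7, 0.4, 0.0]) + rng.normal(scale=0.6, size=n)
# An alerting algorithm, treated as another "judge" in an n-system analysis.
alert = cues @ np.array([0.3, 0.8, 0.1]) + rng.normal(scale=0.3, size=n)


def lens_params(cues, judge, criterion):
    """Classical lens model parameters for one judge against the ecology."""
    b_s, *_ = np.linalg.lstsq(cues, judge, rcond=None)      # judge's policy
    b_e, *_ = np.linalg.lstsq(cues, criterion, rcond=None)  # ecology's model
    pred_s, pred_e = cues @ b_s, cues @ b_e
    r = lambda a, b: np.corrcoef(a, b)[0, 1]
    return {
        "r_a": r(judge, criterion),   # achievement
        "G": r(pred_s, pred_e),       # knowledge (modeled-policy match)
        "R_s": r(judge, pred_s),      # judge's consistency
        "R_e": r(criterion, pred_e),  # ecology's predictability
    }


human = lens_params(cues, judgment, criterion)
auto = lens_params(cues, alert, criterion)

# Coherence between the human's and the algorithm's judgment policies:
# correlate the two modeled (regression-predicted) judgments.
b_h, *_ = np.linalg.lstsq(cues, judgment, rcond=None)
b_a, *_ = np.linalg.lstsq(cues, alert, rcond=None)
policy_similarity = np.corrcoef(cues @ b_h, cues @ b_a)[0, 1]

print(human)
print(auto)
print(policy_similarity)
```

A low policy-similarity value, alongside the two judges' cue weights, is what flags the kind of incoherence between human strategies and alerting logic that the chapter examines.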
In chapter 8, Seong and colleagues take the matter a step further, considering the situation in which human judgments are made not only in concert with automation but also, in part, on the basis of information provided by an automated decision aid. This added layer of complexity leads them to develop a framework combining n-system and hierarchical lens modeling. Of particular interest to Seong and his coauthors is how feedback information (informed by a hybrid n-system/hierarchical lens model analysis) might be used to influence, and perhaps calibrate, an appropriate level of human trust in decision aids. Factors known to influence trust, such as aid reliability and validity, were manipulated, with these measures grounded in the values of various lens model parameters. In this way they were able to determine that poor aid validity had more severe detrimental effects on human judgments than did poor aid reliability. Interestingly, however, providing participants using aids with poor reliability and/or validity with cognitive feedback (or instruction) on the manner in which the aid functioned (e.g., cue weights) allowed participants in their poor aid