Human Implications of Human-Robot Interaction
Ted Metzler and Lundy Lewis, AI Magazine
The international and multidisciplinary gathering at this workshop sensed both timeliness and importance in its topic. Human-robot interaction (HRI) has already established itself as a domain with substantive "how to" problems for technical communities, and these problems can involve open questions regarding humans as well. Moreover, technological advances in areas such as humanoid service robotics are increasingly drawing attention within the humanities and social sciences. Artificially intelligent artifacts displaying convincing humanlike behavior and appearance tend to be readily viewed as more than "tools"; they become "mirrors" reflecting distinctively human issues of ethics, personhood, privacy, and the like.
The workshop's opening presentation, for example, investigated "consequences for human beings" of machine ethics research, particularly research supporting the creation of ethical robots. Noting a need for this kind of research in applications such as robotic elder care, the authors identified other human benefits that might be expected from the work. One of these benefits should be familiar to all who have noticed that writing software to mimic something typically enhances one's understanding of the subject itself; indeed, machine ethics research could plausibly become a useful "laboratory" for clarifying ethical theories within moral philosophy.

Creation of ethical robots was, of course, a concern of the science-fiction author Isaac Asimov. In fact, assessments of his celebrated "Three Laws of Robotics" by researchers in robotics and AI were reported during another of the workshop's presentations. Widespread awareness of the laws notwithstanding, these assessments predominantly judged them unsuitable for actual implementation, citing problems such as ambiguity and logical ordering in Asimov's formulations. General prescriptions against harming humans, for instance, can be difficult to represent in software, as one member of a leading robotics institute observed.

Compounding such problems of representation, another workshop presenter added that, according to the ethical theory of the theologian Paul Tillich, guidance for applying moral "rules" depends critically upon the experience of unconditional love (agape). On the other hand, the presenter also noted that apparent human dispositions to assign moral status to humanlike robots might pose corresponding challenges for communities endorsing Tillich's ethics. The theologian argued, for example, that moral motivation involves the provision of divine grace, which could entail some problematic conclusions within such communities regarding the availability of such grace to machines. …