Utilizing Augmented Reality, 3D Audio, and
Venkataraman Sundareswaran, Reinhold Behringer, Steven Chen, and
Rockwell Science Center, Thousand Oaks, CA 91360, USA
Human-computer interaction can be greatly enhanced if information from the computer can be projected directly into the field of view of a user as they perform tasks in the real world. This is the goal of Augmented Reality (AR): to display computer-generated information so that it appears embedded within the view of the real world.
Consider this future application scenario: a maintenance worker wearing a lightweight see-through head-mounted display (HMD) approaches a complex machine, and a CAD model of the machine appears in the HMD, visually superimposed on the machine. As the worker moves around the machine, the rendered model's pose and orientation are dynamically updated so that the model appears to cling to the real machine. Using speech commands, the worker asks for the machine's errors and diagnostics to be displayed. These appear at 3D locations seemingly tethered to the problem spot, and 3D audio played over headphones directs the worker's attention to an adjacent machine that is part of the problem.

At the Rockwell Science Center (RSC), we have developed a system that serves as a conceptual prototype for HCI-rich AR applications. The novelty of our conceptual prototype is the integration of a new video-based AR technique, speech recognition, and 3D audio in a networked PC environment. The system demonstration is geared to address the needs of maintenance and training, providing information not only just in time, but also just in place.
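The registration step in the scenario above, keeping the rendered model "clinging" to the real machine as the user moves, amounts to re-estimating the camera pose each frame and re-projecting the model's 3D points into the HMD image. The following is a minimal sketch of that per-frame projection, not the system's actual implementation: it assumes a simple pinhole camera with hypothetical intrinsics (focal length `f`, principal point `cx`, `cy`) and a pose given by a rotation matrix `R` and translation `t` supplied by a tracker.

```python
def project_point(point_w, R, t, f, cx, cy):
    """Project a 3D world point into 2D pixel coordinates.

    point_w : (x, y, z) in world coordinates
    R, t    : camera pose (3x3 rotation rows, 3-vector translation),
              assumed to come from a per-frame tracking estimate
    f       : focal length in pixels; cx, cy : principal point
    """
    # Transform the world point into the camera frame: p_c = R * p_w + t
    x = sum(R[0][j] * point_w[j] for j in range(3)) + t[0]
    y = sum(R[1][j] * point_w[j] for j in range(3)) + t[1]
    z = sum(R[2][j] * point_w[j] for j in range(3)) + t[2]
    # Pinhole projection onto the image plane (z must be positive,
    # i.e. the point lies in front of the camera)
    return (cx + f * x / z, cy + f * y / z)

# Per-frame loop (schematic): re-estimate pose, re-project the model
# so its overlay stays registered with the real machine.
#   R, t = estimate_pose_from_video(frame)      # hypothetical tracker
#   pixels = [project_point(p, R, t, f, cx, cy) for p in cad_model_points]
```

With identity rotation, zero translation, `f = 500`, and principal point (320, 240), a point on the optical axis at depth 2 m projects to the image center (320, 240), while a point offset 0.2 m to the right projects to (370, 240); updating `R` and `t` every frame is what makes the overlay track the machine.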