AI researchers are interested in building intelligent machines that can interact with people as people interact with each other. Science fiction writers have given us these goals in the form of HAL in 2001: A Space Odyssey and Commander Data in Star Trek: The Next Generation. However, at present, our computers are deaf, dumb, and blind, almost unaware of the environment they are in and of the user who interacts with them. In this article, I present the current state of the art in machines that can see people, recognize them, determine their gaze, understand their facial expressions and hand gestures, and interpret their activities. I believe that building machines with such abilities for perceiving people will take us one step closer to building HAL and Commander Data.
Building machines that can see has been one of the most exciting and challenging research quests of the past 30 years. Much effort has been expended on the "automatic deduction of structure of a possibly dynamic three-dimensional world from two-dimensional images" (Nalwa 1993). There has been considerable progress in the areas of object recognition, image understanding, and scene reconstruction from single and multiple images. This progress, coupled with improvements in computational power, has prompted a new research focus: making machines that can see people; recognize them; and interpret their gestures, expressions, and actions. In this article, I present methods that give machines the ability to see people, understand their actions, and interact with them. I present the motivating factors behind this work, examples of how such computational methods are developed, and their applications.
The basic reason for providing machines with the ability to see people depends on the task we associate with the machine. An industrial vision system aimed at detecting defects on an assembly line need not know anything about people. Similarly, a computer used for e-mail and word processing need not perceive the user's gestures and expressions. However, if our interest is in building intelligent machines that can work with us, support our needs, and be our helpers, then these machines should know more about whom they are supporting and helping. If our computers are to do more than serve text-based needs such as writing papers, creating spreadsheets, and communicating by e-mail, perhaps taking on the role of a personal assistant, then the ability to see a person is essential. The ability to perceive people is something we take for granted in our everyday interactions with one another, and perceiving people and interacting with them naturally is essential as we move toward building machines like HAL in 2001: A Space Odyssey and Commander Data in Star Trek: The Next Generation.
At present, our model of a machine, or more specifically of a computer, is of something placed in the corner of a room. It is deaf, dumb, and blind, with no sense of the environment around it or of a person near it. We communicate with this computer using a coded sequence of tappings on a keyboard. Imagine instead a computer that knows you are near it, knows you are looking at it, and knows who you are and what you are trying to do. Such abilities are hard to imagine in a computer unless it can perceive people. Research in speech recognition has made considerable progress toward the perception of human speech (see Cole et al. for a survey), and commercial systems capable of word spotting and recognition of continuous speech are now available. Analysis of the video signal to perceive people has likewise become a challenging and exciting research avenue for the field of computer vision, resulting in significant progress in recent years.
To make machines that see people, the computer must first determine whether someone is near it (where) and count how many people are in its field of view. The next step is to identify who the people …