Computers Seeing People


Building machines that can see has been one of the most exciting and challenging research quests of the last 30 years. Much effort has been expended on "automatic deduction of structure of a possibly dynamic three-dimensional world from two-dimensional images" (Nalwa 1993). There has been considerable progress in the areas of object recognition, image understanding, and scene reconstruction from single and multiple images. This progress, coupled with improvements in computational power, has prompted a new research focus: making machines that can see people; recognize them; and interpret their gestures, expressions, and actions. In this article, I present methods that give machines the ability to see people, understand their actions, and interact with them. I present the motivating factors behind this work, examples of how such computational methods are developed, and their applications.

The basic reason for giving machines the ability to see people depends on the task we associate with the machine. An industrial vision system aimed at detecting defects on an assembly line need not know anything about people. Similarly, a computer used for e-mail and writing text need not see and perceive the user's gestures and expressions. However, if our interest is to build intelligent machines that can work with us, support our needs, and be our helpers, then these machines should know more about whom they are supporting and helping. If our computers are to do more than serve our text-based needs, such as writing papers, creating spreadsheets, and communicating by e-mail, perhaps taking on the role of a personal assistant, then the ability to see a person is essential. We take this ability to perceive people for granted in our everyday interactions with each other, and it becomes essential as we move toward building machines like HAL in 2001: A Space Odyssey and Commander Data in Star Trek: The Next Generation.

At present, our model of a machine, or more specifically of a computer, is something that is placed in the corner of the room. It is deaf, dumb, and blind and has no sense of the environment around it or of a person near it. We communicate with this computer using a coded sequence of tappings on a keyboard. Imagine instead a computer that knows you are near it, knows you are looking at it, and knows who you are and what you are trying to do. Such abilities in a computer are hard to imagine unless it can perceive people. Research in speech recognition has made considerable progress toward the perception of human speech (see Cole et al. [1995] for a survey), and commercial systems capable of word spotting and recognition of continuous speech are now available. Analysis of the video signal to perceive people has become a challenging and exciting research avenue for the field of computer vision, resulting in significant progress in recent years.

To make machines that see people, the computer must first determine whether someone is near it (where) and count how many people are in its field of view (how many). The next step is to identify who the people are (who). After the computer has identified the people, it can interpret facial expressions, hand gestures, and body language to determine what the people want or are doing in the scene (what) and why (why). In the upcoming sections, I present approaches to determining where, how many, who, what, and why with reference to people in a scene. No one of these questions can be answered independently; each depends on the others as dictated by the situation. Before getting into details, I briefly discuss the applications of such a technology.
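To make the first two steps concrete, here is a minimal present-day sketch of the where and how-many stages, using OpenCV's pretrained Haar-cascade face detector as a stand-in person detector. This is an illustrative assumption, not one of the methods presented in this article; the camera index and detector parameters are likewise assumed for the example.

```python
# A minimal sketch of the "where" and "how many" steps, assuming OpenCV's
# Haar-cascade face detector as a stand-in person detector. The camera
# index and all parameters are illustrative assumptions, not methods
# from the article.
import cv2

# Pretrained frontal-face model shipped with OpenCV.
detector = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

capture = cv2.VideoCapture(0)   # default camera (assumed device index 0)
ok, frame = capture.read()      # grab a single frame
capture.release()

if ok:
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    # Each bounding box answers "where"; the number of boxes answers
    # "how many". Later stages (who, what, why) would start from these
    # regions, e.g., by cropping each face for an identification step.
    faces = detector.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
    print(f"Detected {len(faces)} face(s) at:",
          [tuple(map(int, box)) for box in faces])
```

In a complete pipeline of this kind, the bounding boxes produced here would feed the subsequent who (identification) and what/why (expression, gesture, and action interpretation) stages.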

Applications

Computer vision methods aimed specifically at seeing people have many applications, spanning several different areas.

Effective human-computer interaction (HCI): Imagine computers that interact with you as we interact with each other, using speech and gestures. …