they may be written in vector notation. Corresponding to the three-dimensional space position vector R = (X, Y, Z), we introduce the three-dimensional image position vector r ≡ R/Z = (x, y, 1). The three-dimensional image velocity is defined as u ≡ ṙ = (vx, vy, 0). Then, recalling that Δvx ≡ δ, we can rewrite Equations 27a-c as
It is not coincidental that Equation 28 bears a strong resemblance to the relation for the three-dimensional space velocity of a point induced by an observer's rigid-body motion (i.e., U ≡ Ṙ = -(V + Ω × R)). In fact, Equation 28 is exactly this relationship for U/Z! This same relation has recently been derived independently by Nagel (1985) for the purpose of recovering the parameters of space motion from time-varying intensity.
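The definitions and the rigid-motion relation above can be checked by finite differences: for a static point, U ≡ Ṙ = -(V + Ω × R), and the image position r = R/Z keeps a third component of exactly 1, so the image velocity u ≡ ṙ has a third component of exactly 0. A minimal numerical sketch (NumPy; the scene setup and variable names are our own assumptions, not from the text):

```python
import numpy as np

def skew(w):
    """Cross-product matrix: skew(w) @ v == np.cross(w, v)."""
    return np.array([[0.0, -w[2], w[1]],
                     [w[2], 0.0, -w[0]],
                     [-w[1], w[0], 0.0]])

P = np.array([0.5, -1.0, 3.0])    # static point, world coordinates (assumed)
C = np.zeros(3)                   # observer position at time t
Rot = np.eye(3)                   # observer orientation at time t
V = np.array([0.1, 0.0, 0.2])     # translational velocity, observer frame
Om = np.array([0.0, 0.05, 0.01])  # angular velocity, observer frame

dt = 1e-6
R0 = Rot.T @ (P - C)                      # R = (X, Y, Z) in observer frame
C1 = C + Rot @ V * dt                     # advance the pose one small step
Rot1 = Rot @ (np.eye(3) + skew(Om) * dt)  # first-order rotation update
R1 = Rot1.T @ (P - C1)

# Space velocity of the point: U = Rdot = -(V + Omega x R).
U_numeric = (R1 - R0) / dt
U_formula = -(V + np.cross(Om, R0))
print(np.allclose(U_numeric, U_formula, atol=1e-4))  # -> True

# Image position r = R/Z = (x, y, 1) and image velocity u = rdot = (vx, vy, 0).
r0, r1 = R0 / R0[2], R1 / R1[2]
u = (r1 - r0) / dt
print(r0[2], u[2])  # -> 1.0 0.0
```

The third component of u vanishes identically because the third component of r is pinned at 1, matching the definition u ≡ ṙ = (vx, vy, 0).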
The analysis of time-varying imagery and recovery of three-dimensional interpretations has clearly come a long way in the last several years. Yet much remains to be done before motion analysis can truly be exploited by a machine vision system. In the context of our own approach, the low-level component has hardly been explored. Contour extraction, localization, and tracking need to be studied in the time-varying domain. The flow-segmentation algorithm needs to be automated with regard to neighborhood size and data density. Moreover, video-rate pipelined hardware or parallel processor arrays should be utilized if we are ever to achieve real-time capabilities. Similarly, parallel implementations of flow recovery and three-dimensional inference are necessary. A global pasting together of the recovered surface patches is then required. We have only scratched the surface of the stereo-motion fusion problem. Here, psychophysics may serve as a guide to future computational studies. Finally, the merging of motion analysis with conventional control theory may open new doors in robotic manipulation and navigation.
THE TRANSFORMATION RELATIONS
In order to solve the kinematic relations for the three-dimensional structure and motion of curved surfaces, it is necessary to rotate the image axes about the line of sight by an angle α, that is, (x, y) → (x̄, ȳ). The inverse transformation is given by
Publication information: Book title: Advances in Computer Vision. Volume: 1. Contributors: Christopher Brown - Editor. Publisher: Lawrence Erlbaum Associates. Place of publication: Hillsdale, NJ. Publication year: 1988. Page number: 218.