A simple and fast Bayesian algorithm
for computing ego-motion and structure
from point features
Jörg Conradt, Matthew Cook
{conradt, cook}@ini.phys.ethz.ch
Institute of Neuroinformatics, UZH / ETH-Zürich
When viewing video footage from a non-stationary source, it is possible to infer the motion of the camera, as well as the structure of the viewable scene, simply by tracking point features in the image. Such calculations are commonly referred to as solving the "ego-motion and structure" problem, which has attracted much attention over the years. In this paper we present a very simple and efficient algorithm designed to be practical for on-line applications requiring a continually maintained estimate of position and/or spatial structure. The space and time complexity of our algorithm are both linear in the number of features currently being tracked. The algorithm is also mathematically simple, allowing it to run on simple microcontrollers without a math coprocessor. Although it is based on Gaussian estimates, no exponentials need to be computed. The algorithm consists of a quickly converging optimization procedure for estimating self-location, followed at each step by a refinement of the feature location estimates using the current observations. The feature position estimates are updated using the posterior estimate of the previous step as the prior for the current step. Each update incorporates the current view of the feature as a conditionally independent noisy observation, conditioned on the feature's true location. We have implemented our algorithm on a simple robot to test its real-world performance.
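The recursive update described above — the previous step's posterior becoming the current step's prior, with each view treated as a conditionally independent Gaussian observation — can be sketched as follows. This is a minimal 1-D illustration of the standard conjugate Gaussian update, not the authors' implementation; the variable names and observation values are hypothetical, and the paper's feature estimates live in 3-D.

```python
def fuse(prior_mean, prior_var, obs, obs_var):
    """Combine a Gaussian prior with an independent Gaussian observation.

    Precision-weighted averaging: only additions, multiplications, and
    divisions are needed, consistent with the abstract's claim that no
    exponentials must be computed.
    """
    post_var = 1.0 / (1.0 / prior_var + 1.0 / obs_var)
    post_mean = post_var * (prior_mean / prior_var + obs / obs_var)
    return post_mean, post_var

# Running estimate of one feature coordinate over successive frames;
# the posterior after each frame serves as the prior for the next.
mean, var = 0.0, 100.0           # vague initial prior
for z in (2.1, 1.9, 2.0):        # hypothetical noisy observations
    mean, var = fuse(mean, var, z, obs_var=1.0)
```

Each fused observation shrinks the variance, so the estimate sharpens as the feature is seen in more frames.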
Figure: The camera's field of view.