Thursday, December 08, 2005

Algorithm Improves Robot Vision

BY DAVID ORENSTEIN

This week Stanford computer scientists will unveil a machine vision algorithm that gives robots the ability to approximate distances from single still images.

"Many people have said that depth estimation from a single monocular image is impossible," says computer science Assistant Professor Andrew Ng, who will present a paper on his research at the Neural Information Processing Systems Conference in Vancouver Dec. 5-8. "I think this work shows that in practical problems, monocular depth estimation not only works well, but can also be very useful."

Stanley, the Stanford robot car that drove a desert course in the DARPA Grand Challenge this past October, used lasers and radar as well as a video camera to scan the road ahead. Using the work of Ng and his students, robots that are too small to carry many sensors or that must be built cheaply could navigate with just one video camera. In fact, using a simplified version of the algorithm, Ng has enabled a radio-controlled car to drive autonomously for several minutes through a cluttered, wooded area before crashing.

To give robots depth perception, Ng and graduate students Ashutosh Saxena and Sung H. Chung designed software capable of learning to spot certain depth cues in still images. The cues include variations in texture (surfaces that appear detailed are more likely to be close), edges (lines that appear to be converging, such as the sides of a path, indicate increasing distance) and haze (objects that appear hazy are likely farther).
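The article includes no code, but a minimal sketch can make these cues concrete. The function below is hypothetical: the patch size, the specific statistics, and the linear read-out mentioned at the end are all assumptions for illustration, and the published system used richer multi-scale texture features and a learned probabilistic model rather than these toy measures.

```python
import numpy as np

def patch_depth_cues(gray, y, x, size=16):
    """Compute toy monocular depth cues for one square patch of a
    grayscale image (pixel values in [0, 1]).

    Illustrative only; not the authors' actual feature set.
    """
    p = gray[y:y + size, x:x + size]

    # Texture cue: detailed, high-frequency surfaces tend to be near,
    # so measure the energy of local intensity gradients.
    gy, gx = np.gradient(p)
    texture_energy = np.mean(gy**2 + gx**2)

    # Edge cue: the dominant gradient direction hints at converging
    # lines such as the sides of a path receding into the distance.
    edge_angle = np.arctan2(gy.mean(), gx.mean())

    # Haze cue: low local contrast suggests a distant, hazy region.
    contrast = p.max() - p.min()

    return np.array([texture_energy, edge_angle, contrast])

# A simple model could then be fit to ground-truth depths, e.g.
# predicted_depth = weights @ patch_depth_cues(img, y, x) + bias.
```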

The algorithm's distance estimates are approximate rather than exact, but a robot moving at 20 miles per hour and judging distances from video frames 10 times a second has ample time to adjust its path despite that uncertainty. Ng points out that, compared with traditional stereo vision algorithms (which use two cameras and triangulation to infer depth), the new software reliably detected obstacles five to 10 times farther away.
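For context on the stereo comparison and the timing claim, here is a back-of-the-envelope sketch. The focal length, baseline, and disparity figures are illustrative assumptions, not numbers from the article.

```python
def stereo_depth(focal_px, baseline_m, disparity_px):
    """Classic two-camera triangulation: depth = f * B / d.
    Useful range is limited because distant objects produce
    sub-pixel disparities that are swamped by matching noise."""
    return focal_px * baseline_m / disparity_px

# With an assumed 700-pixel focal length and a 12 cm baseline, a
# 1-pixel disparity corresponds to 84 m in the ideal case; noise
# makes stereo unreliable well short of that.
print(stereo_depth(700, 0.12, 1))   # 84.0

# Distance traveled between frames at 20 mph with 10 estimates/second:
MPH_TO_MPS = 0.44704
print(20 * MPH_TO_MPS / 10)         # ~0.89 m per frame
```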
