Monocular vision SLAM



Some related discussion on OpenGL boards

The aim is an autonomous vacuum cleaner that can cover the whole area, but efficiently (i.e. unlike a Roomba), by doing Simultaneous Localization And Mapping (SLAM) with monocular vision. The only sensors would be a single camera, plus perhaps some basic odometry and/or whiskers.

Current prototype: this video was captured during manual operation of the robot: the monocular robot Arges

3D SLAM is done offline on the recorded video, at close to realtime performance: 3d monocular SLAM with OpenCV
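
The page doesn't detail the OpenCV pipeline used, so here is only a minimal sketch of one common monocular approach (ORB features, essential-matrix estimation with RANSAC, pose recovery). The intrinsics K and the video filename "arges_run.avi" are placeholders, not values from this project:

    import cv2
    import numpy as np

    # Placeholder intrinsics; real values would come from calibrating the robot's camera.
    K = np.array([[700.0,   0.0, 320.0],
                  [  0.0, 700.0, 240.0],
                  [  0.0,   0.0,   1.0]])

    orb = cv2.ORB_create(2000)
    matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)

    def relative_pose(gray_prev, gray_cur):
        """Estimate camera motion between two frames (rotation + unit-length translation)."""
        kp1, des1 = orb.detectAndCompute(gray_prev, None)
        kp2, des2 = orb.detectAndCompute(gray_cur, None)
        matches = matcher.match(des1, des2)
        if len(matches) < 8:
            return np.eye(3), np.zeros((3, 1))
        pts1 = np.float32([kp1[m.queryIdx].pt for m in matches])
        pts2 = np.float32([kp2[m.trainIdx].pt for m in matches])
        # RANSAC on the essential matrix rejects bad matches.
        E, mask = cv2.findEssentialMat(pts1, pts2, K, method=cv2.RANSAC, threshold=1.0)
        _, R, t, _ = cv2.recoverPose(E, pts1, pts2, K, mask=mask)
        return R, t

    cap = cv2.VideoCapture("arges_run.avi")    # placeholder filename
    ok, prev = cap.read()
    pose = np.eye(4)                           # camera-to-world; scale is arbitrary
    while ok:
        ok, cur = cap.read()
        if not ok:
            break
        R, t = relative_pose(cv2.cvtColor(prev, cv2.COLOR_BGR2GRAY),
                             cv2.cvtColor(cur, cv2.COLOR_BGR2GRAY))
        step = np.eye(4)
        step[:3, :3], step[:3, 3] = R, t.ravel()
        pose = pose @ np.linalg.inv(step)      # accumulate the trajectory
        prev = cur

A full SLAM system would add map-point triangulation, keyframe selection and bundle adjustment on top of this. Note that monocular vision only recovers the trajectory up to an unknown global scale, which is one reason basic odometry is a useful complement.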

(Different scene) A few 3D keyframes, captured and automatically assembled in Blender: integration of multiple 3D textured keyframes
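
The keyframe export format isn't described, so the following only sketches the geometric part of such an integration: each keyframe's point cloud (assumed here to be stored as a .npz with "points" and a 4x4 "pose", both hypothetical names) is transformed into a common world frame and written as a single PLY that Blender can import; the texturing step is left out.

    import numpy as np

    def load_keyframe(path):
        # Assumed layout: "points" is an Nx3 array in the keyframe's camera frame,
        # "pose" is its 4x4 camera-to-world transform. The real export format may differ.
        data = np.load(path)
        return data["points"], data["pose"]

    def merge_keyframes(paths, out_ply="map.ply"):
        # Bring every keyframe cloud into one world frame and write a single ASCII PLY,
        # which Blender can then load through its PLY importer.
        world_points = []
        for path in paths:
            pts, pose = load_keyframe(path)
            homo = np.hstack([pts, np.ones((len(pts), 1))])   # Nx4 homogeneous points
            world_points.append((homo @ pose.T)[:, :3])       # apply the keyframe pose
        cloud = np.vstack(world_points)
        with open(out_ply, "w") as f:
            f.write("ply\nformat ascii 1.0\n")
            f.write("element vertex %d\n" % len(cloud))
            f.write("property float x\nproperty float y\nproperty float z\n")
            f.write("end_header\n")
            np.savetxt(f, cloud, fmt="%.4f")

    merge_keyframes(["kf_000.npz", "kf_001.npz", "kf_002.npz"])  # placeholder filenames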

The ground and nearby obstacles are reconstructed pretty accurately, but... for everything else some work is still needed :-)
