Publication: Disparity mapping of stereo images for robot navigation
Date
2012-06-01
Authors
Wong Teck Ken, Richard
Abstract
Stereo vision systems are becoming increasingly important in the field of robot navigation. The use of two cameras mimics most biological vision systems, such as those of humans and predatory animals, which rely on two eyes. Two cameras placed side by side, aligned horizontally at a known distance, are used to capture differing views of a scene. Camera calibration is performed as the first step of this project using the MATLAB® Camera Calibration Toolbox; its purpose is to obtain the intrinsic and extrinsic parameters of the stereo camera pair. The calibration result is used in rectification to transform the stereo images so that corresponding epipolar lines become collinear. Object segmentation is then performed on the rectified images, producing images that contain only the objects. These segmented images are the inputs to the stereo matching algorithm, which identifies corresponding points in the stereo image pair and provides the disparity. The stereo matching algorithm is simplified by the rectification and object segmentation steps: rectification reduces the correspondence problem to a one-dimensional search, while object segmentation reduces the number of points to be processed. In the proposed algorithm, object segmentation is used to separate object pixels from background pixels, and the stereo matching techniques process only the object pixels, not the background pixels. The stereo matching techniques implemented in this project are Sum of Absolute Differences (SAD) and Sum of Squared Differences (SSD). Their performance is compared with the conventional stereo matching algorithm in which all pixels are processed. The disparity map generated by the stereo matching algorithm is analysed to find the real-world distance of the objects from the cameras; disparity is inversely proportional to real-world distance. A top-down map is constructed from the calculated distances, which is useful for analysing the robot's surroundings if implemented. The results of this project indicate that the proposed algorithm reduces processing time significantly, by 35-52%, making real-time implementation of disparity mapping for robot navigation more feasible.
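The sketch below (not the author's code) illustrates the matching scheme the abstract describes: SAD block matching restricted by a segmentation mask on a rectified stereo pair, followed by the inverse disparity-to-depth conversion Z = f·B/d. Parameter names such as block_size, max_disparity, focal_px, and baseline_m are illustrative assumptions, not values from the publication.

```python
import numpy as np

def sad_disparity(left, right, object_mask, block_size=7, max_disparity=64):
    """Sparse SAD block matching: only pixels flagged in object_mask are
    matched; background pixels are skipped, which is the source of the
    reported speed-up over dense matching. Assumes rectified grayscale
    images of equal size (left camera on the left)."""
    half = block_size // 2
    h, w = left.shape
    disparity = np.zeros((h, w), dtype=np.float32)
    for y in range(half, h - half):
        for x in range(half, w - half):
            if not object_mask[y, x]:
                continue  # skip background pixels entirely
            ref = left[y - half:y + half + 1, x - half:x + half + 1].astype(np.int32)
            best_d, best_cost = 0, np.inf
            # Rectification makes the search one-dimensional (same image row).
            for d in range(min(max_disparity, x - half) + 1):
                cand = right[y - half:y + half + 1,
                             x - d - half:x - d + half + 1].astype(np.int32)
                cost = np.abs(ref - cand).sum()  # SAD; use ((ref - cand) ** 2).sum() for SSD
                if cost < best_cost:
                    best_cost, best_d = cost, d
            disparity[y, x] = best_d
    return disparity

def depth_from_disparity(disparity, focal_px, baseline_m):
    """Depth is inversely proportional to disparity: Z = f * B / d,
    where f is the focal length in pixels and B is the baseline in metres."""
    return np.where(disparity > 0, focal_px * baseline_m / disparity, np.inf)
```

The depth values returned per object pixel can then be projected onto the ground plane to build the top-down map of the robot's surroundings mentioned in the abstract.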