A Robust And Object-Independent Robot Visual Positioning System
Date
2003-06
Authors
Dhanesh Ramachandram
Abstract
This work proposes a robotic system that may learn to perform visual
positioning. The task of the visual positioning system is to re-position a robot
manipulator from any arbitrary initial pose to a pre-defined reference pose. A neural
network is used to perform the non-linear sensory-motor transformation between the
features extracted from the image and the 3D pose of the robot. The emphasis of the
research is on the image representation and on the positioning accuracy achievable by
the proposed system; as such, the dynamic issues involved in real-time visual control
are not considered.
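As a rough illustration of the sensory-motor transformation described above, the sketch below shows a small multilayer perceptron mapping a global image feature vector to a 6-DOF pose correction. The layer sizes, feature dimension, and randomly initialised weights are illustrative assumptions; the thesis does not specify this particular architecture.

# Minimal sketch: image features -> 6-DOF pose correction.
import numpy as np

rng = np.random.default_rng(0)
N_FEATURES, N_HIDDEN, N_POSE = 16, 32, 6   # assumed dimensions

# Random weights stand in for the trained network parameters.
W1 = rng.normal(scale=0.1, size=(N_HIDDEN, N_FEATURES))
b1 = np.zeros(N_HIDDEN)
W2 = rng.normal(scale=0.1, size=(N_POSE, N_HIDDEN))
b2 = np.zeros(N_POSE)

def pose_correction(features):
    """Forward pass of the non-linear sensory-motor mapping."""
    h = np.tanh(W1 @ features + b1)   # non-linear hidden layer
    return W2 @ h + b2                # (dx, dy, dz, droll, dpitch, dyaw)

print(pose_correction(rng.normal(size=N_FEATURES)))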
This work presents an implementation of a modular structured illumination
unit for visual servoing applications, which may be mounted on the end-effector along
with the camera. The laser structured illumination unit projects a grid pattern onto the
target object. The projected pattern prominently reveals the local surface geometry of
the object, and this method proves useful for targets that lack the visual complexity
needed for feature localisation under passive illumination. An added advantage of
structured illumination for visual servoing is that the appearance of the
observed pattern is dependent on the projection angle and the surface structure of the
object. If the information encoded by the projected pattern is captured efficiently, a
robust visual servoing system may be constructed.
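As a hedged sketch of how the projected grid might be isolated from a camera frame, the snippet below segments red-dominant pixels, assuming a red laser projector and an OpenCV pipeline; the colour heuristic and Otsu threshold are illustrative choices, not the thesis's prescribed method.

# Sketch: segment a (hypothetical) red laser grid from a BGR frame.
import cv2
import numpy as np

def extract_grid_pattern(frame_bgr):
    f = frame_bgr.astype(np.int16)
    b, g, r = f[..., 0], f[..., 1], f[..., 2]
    # Laser pixels are strongly red-dominant; suppress ambient light.
    dominance = np.clip(r - (b + g) // 2, 0, 255).astype(np.uint8)
    _, mask = cv2.threshold(dominance, 0, 255,
                            cv2.THRESH_BINARY | cv2.THRESH_OTSU)
    return mask  # binary image of the projected pattern

frame = cv2.imread("frame.png")   # hypothetical input frame
if frame is not None:
    cv2.imwrite("grid_mask.png", extract_grid_pattern(frame))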
Subsequently, this thesis examines various methods of representing the
projected pattern. Instead of recovering the 3D surface description as in typical
structured illumination approaches, global image features are used to describe the
image as a whole. Characterising the projected patterns with global image features
effectively makes the feature extraction object-independent and robust to occlusions
and missing features, and eliminates the need for feature labelling or correspondence
matching. Several global image descriptors are examined. In the first approach,
low-order image moment terms provide a geometric description of the observed
image. The second approach utilises the Discrete Wavelet
Transform (DWT) as an effective means to extract salient features from the image
projection histogram of the observed image. Selected coefficients of this transform
are then used as image features for visual positioning. Finally, an approach based on
the histogram of edge directions is evaluated as a global image feature for visual
positioning. The translation-invariant shape information encoded by the edge direction
histogram is augmented with low-order geometric moments to capture the
translations of the projected pattern in the image. Statistical properties of the
histogram and the coefficients of the discrete wavelet transform of the edge direction
histogram are then used as input features to the neural network.
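The sketch below illustrates the three global descriptors in turn: low-order moments, a DWT of the image projection histogram, and an edge direction histogram. The wavelet, bin counts, retained coefficients, and the synthetic test pattern are all illustrative assumptions rather than the thesis's exact parameters.

# Sketch: three object-independent global descriptors of a grid image.
import cv2
import numpy as np
import pywt

def low_order_moments(img):
    """Centroid and second-order central moments of the pattern."""
    m = cv2.moments(img, binaryImage=True)
    return np.array([m["m10"] / m["m00"], m["m01"] / m["m00"],
                     m["mu20"], m["mu11"], m["mu02"]])

def projection_dwt(img, wavelet="haar", level=3, keep=8):
    """Selected DWT coefficients of the column projection histogram."""
    projection = img.sum(axis=0).astype(float)
    approx = pywt.wavedec(projection, wavelet, level=level)[0]
    return approx[:keep]   # low-frequency (salient) terms

def edge_direction_histogram(img, bins=36):
    """Magnitude-weighted histogram of local gradient directions."""
    gx = cv2.Sobel(img, cv2.CV_32F, 1, 0)
    gy = cv2.Sobel(img, cv2.CV_32F, 0, 1)
    mag, ang = np.hypot(gx, gy), np.arctan2(gy, gx)
    hist, _ = np.histogram(ang, bins=bins, range=(-np.pi, np.pi),
                           weights=mag)
    return hist / (hist.sum() + 1e-9)   # translation-invariant shape cue

img = np.zeros((120, 160), np.uint8)   # synthetic grid pattern
img[::10, :] = 255
img[:, ::10] = 255
features = np.concatenate([low_order_moments(img), projection_dwt(img),
                           edge_direction_histogram(img)])
print(features.shape)   # input vector for the neural network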
In this work, a recursive-positioning scheme is used to move the robot to the
desired pose in a succession of motion steps. A thorough analysis of the attainable
positioning accuracy of the proposed approach is made with respect to positioning
complexity and two sensor-camera configurations.
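A minimal sketch of such a recursive positioning loop is given below, assuming hypothetical capture_image, extract_features, predict_correction, and move_relative interfaces; the gain and stopping tolerance are likewise placeholders.

# Sketch: recursive positioning in a succession of damped motion steps.
import numpy as np

GAIN, TOLERANCE, MAX_STEPS = 0.5, 1e-3, 20   # hypothetical settings

def position_recursively(capture_image, extract_features,
                         predict_correction, move_relative):
    for step in range(MAX_STEPS):
        features = extract_features(capture_image())
        delta = predict_correction(features)     # 6-DOF correction
        if np.linalg.norm(delta) < TOLERANCE:
            return step                          # reference pose reached
        move_relative(GAIN * delta)              # damped motion step
    return MAX_STEPS                             # did not converge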
Keywords
Robotic system that may learn to perform visual positioning