University of Paderborn

Bio-inspired Mobile Robot Localization and Mapping



The uncertainty in a robot's position grows over time due to errors in internal sensors and the randomness of the environment. To cope with this problem, measurements from external sensors, such as optical cameras and laser range finders, are obtained. Although these measurements are inaccurate owing to the susceptibility of external sensors to environmental conditions and their inherent limitations, they help correct the robot's position estimate (within an acceptable error bound). This process of estimating a robot's position while concurrently acquiring knowledge of the environment is termed "Simultaneous Localization & Mapping (SLAM)". Ample research over the past few decades has addressed different aspects of the SLAM problem, such as loop closure detection, visual odometry, and the mapping of large-scale and dynamic environments.
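The interplay described above can be sketched with a minimal scalar Kalman filter: position uncertainty (variance) grows with every motion step and shrinks with every external measurement, so the error stays bounded. All numbers below are illustrative, not from any particular robot or sensor.

```python
import random

def predict(x, var, u, motion_var):
    """Motion step: move by u; uncertainty grows by the motion noise."""
    return x + u, var + motion_var

def correct(x, var, z, meas_var):
    """Measurement step: blend estimate and measurement by their variances."""
    k = var / (var + meas_var)              # Kalman gain
    return x + k * (z - x), (1 - k) * var

x, var = 0.0, 0.0                           # start at a known position
true_x = 0.0
for _ in range(50):
    true_x += 1.0
    x, var = predict(x, var, 1.0, 0.5)      # dead reckoning alone ...
    z = true_x + random.gauss(0.0, 0.3)     # ... plus a noisy range reading
    x, var = correct(x, var, z, 0.3 ** 2)   # keeps the variance bounded

# Without correct(), var would reach 50 * 0.5 = 25 after the loop;
# with it, var stays below the measurement variance of 0.09.
```

The `correct()` step is what external sensing contributes: however noisy each individual reading is, fusing it with the prediction keeps the variance below the measurement variance instead of letting it grow without bound.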

With recent developments in computer vision, the problem of robotic mapping has been widely addressed in the domain of place recognition, for instance appearance-based mapping [1]. The objective of appearance-based methods is to recognize previously visited places based on features observed in the scenes. The problem becomes more complex when a place has to be recognized under different lighting conditions, with changing scene appearance, or in large-scale environments. Most existing work on place recognition either relies on offline learning or requires parameter tuning for each environment; consequently, these approaches cannot readily be extended to mapping in unknown environments. The goal of this research is to address the challenges of real-time scene learning and place recognition for robotic mapping; the biological aspect is to exploit the strengths of the human vision and learning system (i.e., certain areas of the visual cortex and the hippocampus) so that natural responses to images can be extracted and learned incrementally.
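The core of appearance-based place recognition can be sketched as template matching over holistic scene descriptors: a place is re-recognized when a new descriptor is close enough to one already stored, otherwise it is added as a new place. The descriptors and the 0.9 similarity threshold here are illustrative assumptions, not the method of [1].

```python
import numpy as np

def cosine(a, b):
    """Cosine similarity between two descriptor vectors."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

class PlaceMemory:
    def __init__(self, threshold=0.9):
        self.templates = []          # one stored descriptor per known place
        self.threshold = threshold   # minimum similarity to declare a match

    def recognize(self, descriptor):
        """Return the index of the matched place, or store a new one."""
        for i, t in enumerate(self.templates):
            if cosine(descriptor, t) >= self.threshold:
                return i             # loop closure: place seen before
        self.templates.append(descriptor)
        return len(self.templates) - 1

memory = PlaceMemory()
a = np.array([1.0, 0.2, 0.1])        # descriptor of scene A
b = np.array([0.1, 1.0, 0.3])        # descriptor of scene B
memory.recognize(a)                  # new place -> index 0
memory.recognize(b)                  # new place -> index 1
memory.recognize(a * 1.05)           # brighter view of A -> matches index 0
```

The fixed threshold is exactly the kind of per-environment parameter criticized above: too low and distinct places alias, too high and revisits are missed, which is what motivates incremental learning instead.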

Ongoing Research

Humans interpret the meaning of a scene within 200 ms of its presentation. The amount of information extracted during this period is referred to as the "gist" of a scene [2], provided the eye fixations or exposures to a new scene are separated by a gap of a few milliseconds. This indicates that a precise classification of the constituent objects of a scene is not needed in early vision. We make use of this insight and have introduced a modified version of the growing self-organizing map (GSOM) to model the competitive behavior of cells found in the visual and perirhinal cortices [3]. The algorithm is fused as a place recognition front-end for RatSLAM [4] to perform topological mapping, as illustrated below:
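A strongly simplified sketch of the growing self-organizing map idea: the network starts empty, adapts the best-matching unit towards each incoming gist vector, and grows a new unit when no existing one is close enough, so scenes are learned incrementally without offline training. Parameter values are illustrative; the actual model in [3] differs in its competition and growth rules.

```python
import numpy as np

class GrowingSOM:
    def __init__(self, grow_dist=0.5, lr=0.1):
        self.units = []              # learned scene prototypes
        self.grow_dist = grow_dist   # distance beyond which the map grows
        self.lr = lr                 # learning rate for the winning unit

    def learn(self, x):
        """Present one gist vector; return the index of its unit."""
        x = np.asarray(x, dtype=float)
        if not self.units:
            self.units.append(x.copy())
            return 0
        dists = [np.linalg.norm(x - u) for u in self.units]
        bmu = int(np.argmin(dists))  # best-matching unit (competition)
        if dists[bmu] > self.grow_dist:
            self.units.append(x.copy())      # novel scene: grow a new unit
            return len(self.units) - 1
        self.units[bmu] += self.lr * (x - self.units[bmu])  # adapt winner
        return bmu

som = GrowingSOM()
som.learn([0.0, 0.0])    # first scene creates unit 0
som.learn([0.1, 0.0])    # close to unit 0: adapts it
som.learn([2.0, 2.0])    # far from all units: grows unit 1
```

In a front-end role, the returned unit index plays the part of a place label: revisiting a learned scene reactivates its unit, which the SLAM back-end can treat as a loop-closure hypothesis.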


1. Cummins, M., and Newman, P. (2011). Appearance-only SLAM at large scale with FAB-MAP 2.0. International Journal of Robotics Research (IJRR).
2. Oliva, A., and Torralba, A. (2001). Modeling the shape of the scene: A holistic representation of the spatial envelope. International Journal of Computer Vision (IJCV).
3. Kazmi, S. M. A. M., and Mertsching, B. (2015). Gist+RatSLAM: An incremental bio-inspired place recognition front-end for RatSLAM. 9th International Conference on Bio-inspired Information and Communications Technologies (BICT).
4. Milford, M., and Wyeth, G. (2010). Persistent navigation and mapping using a biologically inspired SLAM system. International Journal of Robotics Research (IJRR).


For further queries (or comments), please contact: