Biologically Inspired Robotic Mapping
Summary
Learning a representation of an environment is of significant importance for the autonomous navigation of vehicles. In this respect, mechanical systems, e.g., a mobile robot or a vehicle, are typically equipped with motion sensors, and their position is tracked. These sensors are prone to errors due to hardware wear or randomness in the environment (e.g., slippery or uneven terrain). To minimize the errors in position estimates, auxiliary sensors, such as optical cameras or laser range finders, are employed. This process of estimating the robot's position while simultaneously building a spatial representation of the environment (a Cartesian map) is termed Simultaneous Localization and Mapping (SLAM) or Robotic Mapping.
Biological systems, including humans, have a remarkable ability to navigate through diverse environments. For this purpose, they principally rely on visual cues and acquire a topological layout of an environment instead of learning accurate geometric maps [1]. Hence, our interpretation of an environment in the brain is topological rather than strictly metric. Ample research has been dedicated in recent years to developing technical systems that use place recognition and/or topological relationships among places to perform mapping, e.g., appearance-based mapping [2]. The mainstream of existing work in appearance-based mapping is either based on offline learning or requires environment-specific parameter tuning. In consequence, these approaches cannot readily be extended to perform mapping in unknown environments. The goal of this research is to address the challenges of real-time scene learning and recognition for robotic mapping; the biological aspect is to exploit the strengths of the human vision and learning system (i.e., certain areas of the visual cortex and hippocampus) so that the natural responses to images can be extracted and learned incrementally.
Current Research
The focus of this research is to extend and develop technical systems that emulate the learning, recognition and navigation mechanisms of biological neurons [3]. To improve the efficiency and adaptability of these technical systems, optimization techniques are also under study.
Extract Human-like Scene Representations
The aim is to exploit the strengths of the human vision system to obtain natural responses to images. In this regard, the following approaches are considered:
Learning Representations of Places
The formation of spatial memory in the brain is governed by cells that compete to learn the input while inhibiting the activity of topologically more distant neurons. To model a similar behavior, a modified version of the growing self-organizing map (GSOM) has been proposed [7]. Such a system offers a robust solution to place recognition (i.e., loop-closure detection). This model is further extended to fuse nearby context for learning a robust visual representation of the environment (cf. [8]).
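The competitive, growing behavior described above can be illustrated with a minimal sketch. This is not the specific GSOM variant proposed in [7]; it is a simplified 1-D chain topology with illustrative learning rates and growth threshold, showing only the core idea: nodes compete for each input, the winner and its neighbors move toward the input, and a new node is inserted where the accumulated quantization error is largest.

```python
import numpy as np

class GrowingSOM:
    """Minimal growing self-organizing map sketch (1-D chain topology).

    Nodes compete for each input descriptor; the winner and its chain
    neighbours move toward the input (competitive learning with lateral
    interaction), and a new node is inserted next to any winner whose
    accumulated quantization error exceeds a threshold (map growth).
    All parameters here are illustrative, not those of the cited work.
    """

    def __init__(self, dim, lr=0.3, neighbor_lr=0.05, grow_threshold=0.5):
        rng = np.random.default_rng(0)
        self.weights = [rng.random(dim) for _ in range(2)]  # start small
        self.errors = [0.0, 0.0]
        self.lr, self.neighbor_lr = lr, neighbor_lr
        self.grow_threshold = grow_threshold

    def best_matching_unit(self, x):
        """Return (index, distance) of the node closest to input x."""
        dists = [np.linalg.norm(w - x) for w in self.weights]
        return int(np.argmin(dists)), float(min(dists))

    def train_step(self, x):
        bmu, dist = self.best_matching_unit(x)
        # Competitive update: the winner moves most, neighbours move less.
        self.weights[bmu] += self.lr * (x - self.weights[bmu])
        for j in (bmu - 1, bmu + 1):
            if 0 <= j < len(self.weights):
                self.weights[j] += self.neighbor_lr * (x - self.weights[j])
        # Accumulate error; grow a node where the map fits the data worst.
        self.errors[bmu] += dist
        if self.errors[bmu] > self.grow_threshold:
            self.weights.insert(bmu + 1, (self.weights[bmu] + x) / 2.0)
            self.errors[bmu] = 0.0
            self.errors.insert(bmu + 1, 0.0)
        return bmu
```

In a place-recognition setting, each node would stand for a learned place; a new observation whose best-matching distance falls below a recognition threshold would then signal a revisited place, i.e., a loop closure.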
Search Optimization
The growing size of the learned representation imposes demands for fast search algorithms. In this respect, several search optimization methods are under consideration, e.g., locality-sensitive hashing, tree structures, and approximate nearest-neighbor search. Moreover, the research also focuses on stochastic methods for learning optimal parameter settings, such as Bayesian optimization.
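One of the methods named above, locality-sensitive hashing, can be sketched with random hyperplanes for cosine similarity. The class name, bit width and seed below are illustrative assumptions, not part of this research: each descriptor is reduced to a short bit signature, so a query only scans the small bucket sharing its signature instead of the whole learned representation.

```python
import numpy as np
from collections import defaultdict

class RandomHyperplaneLSH:
    """Locality-sensitive hashing sketch with random hyperplanes.

    Bit i of a descriptor's signature is the sign of its dot product
    with random hyperplane i. Descriptors with small angular distance
    tend to share a signature, so a query inspects only one bucket.
    """

    def __init__(self, dim, n_bits=12, seed=0):
        rng = np.random.default_rng(seed)
        self.planes = rng.standard_normal((n_bits, dim))
        self.buckets = defaultdict(list)

    def _signature(self, x):
        return tuple((self.planes @ x > 0).astype(int))

    def add(self, key, x):
        """Index descriptor x (e.g., a learned place) under its key."""
        self.buckets[self._signature(x)].append((key, np.asarray(x, float)))

    def query(self, x):
        """Return the key of the nearest indexed descriptor in x's bucket,
        or None if the bucket is empty (a miss is possible by design)."""
        candidates = self.buckets.get(self._signature(x), [])
        if not candidates:
            return None
        return min(candidates, key=lambda kv: np.linalg.norm(kv[1] - x))[0]
```

The design trade-off is typical of approximate search: more bits give smaller buckets and faster lookups but a higher chance that a true neighbor lands in a different bucket; practical systems use several hash tables to recover recall.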
Selected References
Contact
For further queries (or comments), please contact: