University of Paderborn


Visual Simultaneous Localization And Mapping Using a Monocular Camera
Date: 2014/01/14
Time: 16:00 h
Place: P 6.2.03
Author(s): Mohammad Hossein Mirabdollah

On Tuesday, 14 January 2014, at 16:00 in room P 6.2.03, M. Sc. Mohammad Hossein Mirabdollah will give a talk entitled:

Visual Simultaneous Localization And Mapping Using a Monocular Camera


Simultaneous Localization and Mapping (SLAM) refers to the algorithms mobile robots (vehicles) use to localize themselves in unknown environments and to build geometric maps of the landmarks in those environments. To obtain a rough estimate of the robot's position, sensors such as gyroscopes, wheel encoders, or visual odometry systems are typically employed. They provide the instantaneous motion parameters of the robot, which are used together with its motion model to estimate the robot's position at each time instant. However, odometry modules generally cannot provide precise motion parameters, which leads to accumulating errors in the estimated robot position. To address this problem, SLAM algorithms incorporate new measurements and fuse data from different sensors to bound the growing errors and generate a reliable map.
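The error accumulation described above can be illustrated with a minimal dead-reckoning sketch. This is not the speaker's method: the unicycle motion model, the parameter names (`v`, `omega`), and the Gaussian noise level are all assumptions chosen only to show how integrating noisy odometry increments lets the position error grow without bound.

```python
import math
import random

def dead_reckon(steps, v=1.0, omega=0.05, noise=0.0, seed=0):
    """Integrate a unicycle motion model from (possibly noisy) odometry.

    Illustrative only: v is the per-step speed, omega the per-step turn
    rate; with noise > 0 each increment is corrupted by Gaussian noise,
    and the integrated pose drifts from the true one.
    """
    rng = random.Random(seed)
    x = y = theta = 0.0
    for _ in range(steps):
        # Odometry returns the true increments plus noise; integrating
        # them accumulates the per-step errors into the pose estimate.
        v_meas = v + rng.gauss(0.0, noise)
        w_meas = omega + rng.gauss(0.0, noise)
        theta += w_meas
        x += v_meas * math.cos(theta)
        y += v_meas * math.sin(theta)
    return x, y, theta
```

Running the same trajectory once with `noise=0.0` and once with `noise>0` and comparing the final poses shows the drift that SLAM back-ends must correct with additional measurements.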

In conventional SLAM algorithms, additional sensors return the relative distances and bearings between the robot and the landmarks in the environment. However, due to the high costs or other limitations of such sensors, the use of monocular cameras has attracted the attention of SLAM researchers in recent years. This type of SLAM is known in the literature as monocular SLAM or bearing-only SLAM, since a single camera can only act as a bearing sensor and the depths of the landmarks are initially unknown.
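The bearing-only nature of a monocular camera can be stated in one line. The following sketch uses a hypothetical planar (2D) measurement model, not anything from the talk: the camera observes only the angle to a landmark in the robot frame, while the range stays unobserved.

```python
import math

def bearing_to(robot_pose, landmark):
    """Bearing from a robot pose (x, y, theta) to a 2D landmark (lx, ly).

    Hypothetical monocular measurement model: the angle is observable,
    the distance to the landmark is not.
    """
    x, y, theta = robot_pose
    lx, ly = landmark
    return math.atan2(ly - y, lx - x) - theta
```

Note that any landmark along the same ray, e.g. `(1, 1)` and `(2, 2)` seen from the origin, produces the identical measurement, which is exactly why landmark depth must be estimated over time.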

This work contributes to the monocular-SLAM problem in two parts. The first contribution concerns visual odometry using a single camera. New iterative and non-iterative methods are proposed, which perform much more robustly against measurement noise than previous methods.

The second contribution is the introduction of new fusion methods for the 2D monocular SLAM problem. In the new methods, the distribution of the robot's position is represented by weighted samples. For each landmark, a line segment is attached to each sample, whose length represents the landmark's uncertainty. Two methods are proposed to filter the propagated hypotheses of robot and landmark positions recursively. The first method formulates the problem in the context of particle filters, whereas the second method uses subspace and probabilistic triangulation techniques.
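A generic bearing-only particle-filter reweighting step can sketch the particle-filter formulation mentioned above. This is an assumption-laden stand-in, not the speaker's algorithm: each particle carries a robot pose and a single point hypothesis per landmark (the talk's line-segment representation is simplified to a point here), and a made-up Gaussian bearing likelihood with standard deviation `sigma` scores each particle.

```python
import math

def bearing(pose, landmark):
    # Angle to the landmark in the robot frame (2D model).
    x, y, theta = pose
    lx, ly = landmark
    return math.atan2(ly - y, lx - x) - theta

def reweight(particles, landmark_hypotheses, observed, sigma=0.05):
    """One bearing-only weight update over a list of (pose, weight)
    particles, each paired with its landmark hypothesis.

    Returns the normalized weights; particles whose predicted bearing
    matches the observation keep most of the probability mass.
    """
    new_weights = []
    for (pose, w), lm in zip(particles, landmark_hypotheses):
        err = bearing(pose, lm) - observed
        # Wrap the angular error to [-pi, pi] before scoring it.
        err = math.atan2(math.sin(err), math.cos(err))
        new_weights.append(w * math.exp(-0.5 * (err / sigma) ** 2))
    total = sum(new_weights) or 1.0
    return [w / total for w in new_weights]
```

In a full filter this update would be followed by resampling and by shrinking each landmark's uncertainty representation as new bearings arrive; both steps are omitted here.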