Convolutional Neural Network for Depth and Odometry Estimation from Monocular Video
Three-dimensional maps are an important source of information for the safe navigation of self-driving cars and autonomous robots. Creating such maps requires two data sources: continuous depth measurements, and the current position of the sensor, which is needed to fuse all local measurements into a global map. However, accurate depth and position measurements are only possible with expensive equipment. To overcome this problem, the goal of this thesis is the development and implementation of a system for the simultaneous estimation of depth and camera ego-motion from monocular video streams. For this purpose, a convolutional neural network will be trained with a novel unsupervised learning method that enables learning of absolute-scale depth and odometry without ground-truth pose labels. To obtain depth and pose estimates that are as accurate as possible, different aspects of the developed learning method and the implemented system will be tested and evaluated for their relevance. Since the quality of pose estimation depends significantly on accurate depth estimation, methods that exploit temporal depth cues are tested for possible improvements. The developed system is evaluated with respect to its performance, and the estimated depth and odometry data are used to create 3D maps.
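A common ingredient in unsupervised depth and ego-motion learning of this kind is a view-synthesis objective: the network's depth and relative-pose predictions are used to warp a neighbouring frame onto the current one, and the photometric difference serves as the training loss, so no ground-truth labels are needed. The sketch below illustrates this idea with NumPy; the function names, the nearest-neighbour warp, and the toy intrinsics are illustrative assumptions, not the specific method developed in the thesis (which additionally recovers absolute scale).

```python
import numpy as np

def reproject(depth, K, K_inv, R, t):
    """Back-project each target pixel to 3D with the predicted depth,
    then project it into the source view with the predicted relative pose."""
    h, w = depth.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))
    pix = np.stack([u, v, np.ones_like(u)]).reshape(3, -1)  # homogeneous pixel coords
    cam = (K_inv @ pix) * depth.reshape(1, -1)              # 3D points in target camera frame
    cam_src = R @ cam + t                                   # rigid motion into source frame
    proj = K @ cam_src
    proj = proj[:2] / proj[2:3]                             # perspective divide
    return proj.reshape(2, h, w)

def warp_nearest(source, proj):
    """Sample the source image at the projected coordinates
    (nearest-neighbour for brevity; real systems use bilinear sampling)."""
    h, w = source.shape
    x = np.clip(np.round(proj[0]).astype(int), 0, w - 1)
    y = np.clip(np.round(proj[1]).astype(int), 0, h - 1)
    return source[y, x]

def photometric_loss(target, warped_source):
    """Mean absolute photometric error between target and warped source view."""
    return np.mean(np.abs(target - warped_source))

# Sanity check: with identity ego-motion the source view warps exactly onto
# itself, so the photometric loss of a frame against its own warp is zero.
h, w = 4, 4
K = np.array([[2.0, 0.0, w / 2], [0.0, 2.0, h / 2], [0.0, 0.0, 1.0]])  # toy intrinsics
depth = np.full((h, w), 5.0)          # stand-in for a network depth prediction
R, t = np.eye(3), np.zeros((3, 1))    # stand-in for a network pose prediction
img = np.random.rand(h, w)
loss = photometric_loss(img, warp_nearest(img, reproject(depth, K, np.linalg.inv(K), R, t)))
```

In a full training system both `depth` and `(R, t)` come from network heads, the warp is made differentiable via bilinear sampling, and minimising the photometric loss over many frame pairs supervises both predictions jointly.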