Semantic Motion Segmentation using Deep Learning
Autonomous systems operating in complex and challenging environments require fast and accurate algorithms for detecting semantic classes and motion states. Previous works performed semantic motion segmentation with conventional methods that rely on hand-crafted constraints, which restrict these approaches to specific scenarios. More general approaches without such limiting constraints are therefore required to handle challenging and variable environments. Recently, deep convolutional neural networks have been used for object detection in autonomous systems acting in dynamic real-world environments, and they outperform previous approaches. In this thesis, a deep convolutional neural network is designed that incorporates optical flow and semantic segmentation to detect moving objects. The network thus learns to predict both the semantic class and the motion state of each pixel from a pair of consecutive images. The proposed network is evaluated on labeled datasets and additionally tested in a real-world setting using a camera mounted on a car. The results show that the semantic motion segmentation runs in real time while achieving reasonably good accuracy.
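To make the described input/output contract concrete, the following is a minimal PyTorch sketch of a network in this spirit: two consecutive RGB frames are stacked along the channel axis, a shared encoder extracts features, and two heads predict per-pixel semantic class logits and a binary motion state. The class name, layer sizes, and the channel-stacking of the frame pair are illustrative assumptions, not the architecture actually developed in the thesis.

```python
import torch
import torch.nn as nn

class SemanticMotionNet(nn.Module):
    """Hypothetical sketch: a shared encoder with two output heads,
    one for semantic class logits and one for a per-pixel motion
    state (static vs. moving). Layer sizes are illustrative only."""

    def __init__(self, num_classes=19):
        super().__init__()
        # Input: two RGB frames stacked along channels -> 6 channels.
        self.encoder = nn.Sequential(
            nn.Conv2d(6, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 32, kernel_size=3, padding=1), nn.ReLU(),
        )
        # 1x1 convolutions produce dense per-pixel predictions.
        self.semantic_head = nn.Conv2d(32, num_classes, kernel_size=1)
        self.motion_head = nn.Conv2d(32, 2, kernel_size=1)

    def forward(self, frame_pair):
        feats = self.encoder(frame_pair)
        return self.semantic_head(feats), self.motion_head(feats)

net = SemanticMotionNet()
pair = torch.randn(1, 6, 64, 64)  # batch of one channel-stacked frame pair
sem_logits, motion_logits = net(pair)
print(sem_logits.shape, motion_logits.shape)
```

Both heads keep the spatial resolution of the input, so each pixel receives a semantic class prediction and a motion-state prediction, matching the dense output described above.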