Robust Motion Estimation for Qualitative Dynamic Scene Analysis
Dynamic scene analysis is a central challenge for applications such as Advanced Driver Assistance Systems (ADAS) and for any autonomous robot operating in dynamic environments. An autonomous robot or vehicle can carry out desired tasks without continuous human interaction. In particular, robust detection, tracking, and recognition of moving objects, as well as estimation of the camera ego-motion in a scene, are necessary prerequisites for many autonomous tasks. For instance, in mobile robotics, moving objects potentially pose a greater risk to safe navigation than stationary objects. Likewise, rescue robot systems could increase their performance enormously if they were capable of interacting with moving victims. Robust detection and tracking of moving objects from a moving camera in an outdoor environment is a challenging task due to dynamically changing cluttered backgrounds, large motion, varying lighting conditions, weakly textured objects, partial object occlusion, and varying object viewpoints.
The work presented in this thesis copes with the problem of robust estimation of 2D motion and the tracking of moving objects under the conditions mentioned above. This work first introduces a new approach that improves the accuracy of 2D motion estimation, known as optical flow, in the case of large motion using the coarse-to-fine technique. The proposed algorithm estimates the optical flow of fast as well as slow objects correctly and at a lower processing cost.

Moreover, this work proposes a novel optimization model for optical flow estimation based on a texture constraint. The texture constraint assumes that object textures such as edges, gradients, or the orientation of image features remain constant under object or camera motion. The optimization model uses an objective function that minimizes the dissimilarity between image textures using local descriptors. The proposed model is not limited to any particular local texture descriptor; for instance, the histogram of oriented gradients (HOG), the modified local directional pattern (MLDP), the census transform, and other descriptors are used. Furthermore, we employ the monocular epipolar line constraint to improve the accuracy of the optical flow in texture-less regions. The new model estimates the optical flow correctly in most cases where state-of-the-art approaches that depend on the brightness constancy of a pixel fail.

In addition, we propose a new approach for detecting and tracking all moving objects. The proposed algorithm works with a static as well as a moving camera, and the results show successful detection, estimation, and tracking of moving objects in indoor and outdoor environments. Several experiments and applications have been conducted to test and evaluate the algorithms extensively. The results show that the proposed algorithms outperform state-of-the-art approaches on standard benchmark datasets.
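To illustrate why a texture constraint can succeed where brightness constancy fails, the following minimal sketch implements the census transform, one of the descriptors mentioned above. The function name and window size are our own choices for illustration, not the thesis implementation: the descriptor encodes, for each pixel, which neighbours are darker than the centre, and is therefore unchanged by any global monotonic brightness change that would violate pixel-wise brightness constancy.

```python
import numpy as np

def census_transform(img, radius=1):
    """Census transform: each pixel receives a bit string encoding whether
    each neighbour in the (2*radius+1)^2 window is darker than the centre."""
    out = np.zeros(img.shape, dtype=np.uint64)
    offsets = [(dy, dx)
               for dy in range(-radius, radius + 1)
               for dx in range(-radius, radius + 1)
               if (dy, dx) != (0, 0)]
    for dy, dx in offsets:
        # Shift the image so each pixel is compared against one neighbour
        # (borders wrap around; in practice they would be cropped or padded).
        shifted = np.roll(np.roll(img, dy, axis=0), dx, axis=1)
        out = (out << np.uint64(1)) | (shifted < img).astype(np.uint64)
    return out

rng = np.random.default_rng(0)
patch = rng.integers(0, 256, size=(8, 8)).astype(np.float64)

# A global affine brightness change (e.g. a lighting shift between frames)
# breaks brightness constancy, yet leaves the census descriptor identical,
# because it preserves the ordering of neighbouring intensities.
brighter = 1.5 * patch + 20.0
assert np.array_equal(census_transform(patch), census_transform(brighter))
```

Matching such descriptors between frames, rather than raw intensities, is what makes the texture-based objective robust to the varying lighting conditions discussed above.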