Optical Flow Estimation Based on Depth Information and Scene Analysis

A. Raies


Optical flow fields are widely used for motion estimation of objects relative to the observer of a scene. The optical flow is normally estimated using the brightness constancy constraint or texture constraints. These methods still perform poorly for fast motions and in areas of low texture or discontinuous brightness. Most existing approaches make generic assumptions about the spatial structure of the flow. More recent works exploit the fact that the variation of optical flow across an image depends strongly on object class. A different motion model is used for each element of the scene according to its object class. This approach provides a more robust solution, but there is still room for improvement. The goal of this work is to improve optical flow estimation when the observer itself is moving, by exploiting recent advances in image segmentation and disparity map estimation. Robustness, accuracy, and runtime performance will be evaluated on the KITTI stereo and optical flow benchmarks.
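
For reference, the brightness constancy constraint mentioned above is the standard formulation from the optical flow literature (not a contribution of this work): a pixel is assumed to keep its intensity along its motion trajectory, and a first-order Taylor expansion yields the linearised optical flow constraint equation.

```latex
% Brightness constancy: a point keeps its intensity along its motion (u, v)
\[
  I(x + u,\; y + v,\; t + 1) = I(x, y, t)
\]
% First-order Taylor expansion yields the linearised constraint: one equation
% in the two unknowns (u, v), which is why additional constraints are needed
\[
  I_x u + I_y v + I_t = 0
\]
```

Because this single equation cannot determine both flow components at a pixel (the aperture problem), methods differ mainly in the additional spatial or semantic constraints they impose, which is where segmentation and disparity information can help.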