University of Paderborn


3D Motion Analysis for an Active Vision System
Date: 2012/06/26
Time: 16:00
Location: Room P 6.2.03
Author(s): Mohamed Salah El-Neshawy Shafik

On Tuesday, 26 June 2012, at 16:00 in room P 6.2.03, M. Eng. Mohamed Salah El-Neshawy Shafik will give a talk with the title given above:

Motion segmentation has evolved into one of the most challenging problems in computer vision. Detecting moving objects and estimating their motion parameters provides a significant source of information for better understanding dynamic scenes. In computer vision, 3D motion manifests itself as spatio-temporal changes in pixel information, and detecting such differences between two or more consecutive frames is the first step in determining the underlying motion. The accuracy of both the motion parameter estimation and the segmentation therefore depends on the accuracy of this detection step. Computing a single 3D motion from a 2D image flow by finding the optimal coefficient values in a 2D signal transform has proven efficient. In the case of multiple 3D motions, however, the resulting segmentation suffers from several drawbacks, such as the inherent confusion between translation and rotation and the problem of degenerate motions, especially if the input motion vector field (MVF) is very noisy. Moreover, such techniques fail to handle spatially overlapping 3D motion vector fields (3D transparent motion).
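The frame-differencing step described above can be illustrated with a minimal sketch. This is not the talk's actual detection pipeline; it is a hypothetical NumPy example showing how intensity changes between two consecutive frames yield a mask of potentially moving pixels, with the `threshold` parameter being an assumed tuning knob.

```python
import numpy as np

def detect_motion(frame_a, frame_b, threshold=25):
    """Return a boolean mask of pixels whose intensity changed by more
    than `threshold` between two consecutive 8-bit grayscale frames.
    (Illustrative sketch only; real pipelines add noise filtering.)"""
    # Cast to a signed type so the subtraction cannot wrap around.
    diff = np.abs(frame_a.astype(np.int16) - frame_b.astype(np.int16))
    return diff > threshold

# Toy example: a bright 2x2 square shifts one pixel to the right.
a = np.zeros((6, 6), dtype=np.uint8)
b = np.zeros((6, 6), dtype=np.uint8)
a[2:4, 1:3] = 200
b[2:4, 2:4] = 200
mask = detect_motion(a, b)
# Pixels the square vacated (column 1) and newly covered (column 3)
# are flagged; the overlapping column 2 is unchanged and not flagged.
```

In a noisy MVF setting, such a raw difference mask is exactly where the detection errors mentioned above originate, which is why the subsequent parameter estimation must be robust to them.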

In this work, we present a fast approach to estimating the motion parameter coefficients, which significantly reduces the computational time of the 3D motion segmentation and decreases the mean error of the estimated parameters, even for highly noisy MVFs. Furthermore, we developed a saliency-based approach for estimating and segmenting the 3D motions of multiple moving objects represented by 2D motion vector fields. A classification module identifies the global motion of the mounted camera in order to overcome typical problems in autonomous mobile robot vision, such as noise and occlusions, and to suppress the ego-motion effects of a moving camera head. Moreover, we propose a fast depth-integrated 3D motion parameter estimation approach that takes the perspective transformation and depth information into account to accurately estimate biologically motivated classifier cells in 3D space, using the geometric information of the stereo camera head. The results show successful detection and estimation of predefined 3D motion patterns, such as movements toward the robot, which is a vital milestone towards predicting possible collisions.
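The role that depth plays in lifting a 2D flow vector into 3D can be sketched with standard pinhole geometry. This is a generic illustration, not the depth-integrated estimator of the talk: the focal length `f`, principal point `(cx, cy)`, and the per-pixel depths are assumed inputs (in practice obtained from the stereo camera head).

```python
import numpy as np

def backproject(u, v, z, f, cx, cy):
    """Pinhole back-projection of pixel (u, v) at depth z into
    camera coordinates (X, Y, Z)."""
    return np.array([(u - cx) * z / f, (v - cy) * z / f, z])

def flow_to_3d_motion(u, v, du, dv, z0, z1, f, cx, cy):
    """Lift a 2D flow vector (du, dv) observed at pixel (u, v) to a 3D
    displacement, given the point's depth before (z0) and after (z1).
    A negative Z component indicates motion toward the camera."""
    p0 = backproject(u, v, z0, f, cx, cy)
    p1 = backproject(u + du, v + dv, z1, f, cx, cy)
    return p1 - p0

# A point at the principal point, depth 2 m, flows 10 px to the right
# at constant depth: the recovered 3D motion is purely lateral.
motion = flow_to_3d_motion(0.0, 0.0, 10.0, 0.0, 2.0, 2.0, 100.0, 0.0, 0.0)
```

Without the depth terms, the same 2D flow is ambiguous between a small nearby motion and a large distant one, which is one reason the depth-integrated estimation helps when classifying patterns such as approach toward the robot.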