University of Paderborn

3D Motion Analysis


3D motion interpretation has evolved into one of the most challenging problems in computer vision. Detecting moving objects and estimating their motion parameters provides a significant source of information for understanding dynamic scenes. In computer vision, motion corresponds to the change of the spatio-temporal information of pixels. Computing a single 3D motion from a 2D image flow by finding the optimal coefficient values of a 2D signal transform suffers from ambiguous 3D interpretations, especially for motions in the Z direction. On the other hand, one of the main challenges in segmenting multiple 3D moving objects in an active vision system is partitioning an incoherent motion vector field (MVF) in reasonable computation time. This proves especially difficult when moving objects are only partially visible and not connected. Hence, it is important to detect, estimate, and segment the MVF independently of any predefined spatial coherence, such as object contours generated by image segmentation approaches.
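The Z-direction ambiguity mentioned above can be made concrete with a toy pinhole-camera calculation. This is a minimal sketch under an assumed perspective model (focal length and point coordinates are made up for illustration), not the estimation method described here: a point twice as far away, moving twice as fast along Z, produces exactly the same image flow.

```python
def image_flow_z(f, X, Z, Vz):
    """Horizontal image flow of a 3D point (X, ., Z) translating along
    the optical axis with speed Vz, under a pinhole camera with focal f."""
    x = f * X / Z        # perspective projection: x = f * X / Z
    return -x * Vz / Z   # d/dt (f * X / Z) with dZ/dt = Vz and constant X

# Two very different 3D motions that project to the same image point:
flow_near = image_flow_z(f=1.0, X=0.5, Z=2.0, Vz=1.0)
flow_far  = image_flow_z(f=1.0, X=1.0, Z=4.0, Vz=2.0)  # double depth and speed
print(flow_near, flow_far)  # both -0.125: the 2D flow cannot tell them apart
```

Since the flow depends only on the ratio Vz / Z, any number of depth–speed combinations map to the identical 2D measurement, which is why a single 2D image flow underdetermines the 3D motion.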

Fig. 1: The diagram illustrates the result of the motion segmentation approach on a sequence of real images. Left: input sequence from the PETS dataset. Middle: resulting MVF. Right: result of the motion segmentation, showing the first and second most salient motions.

Concept of Transparent Motion

One of the fundamental problems in the neural computation of 3D motion is the grouping of velocity signals into surfaces, as in the case of motion transparency, where locally moving elements appear grouped into two or more spatially overlapping surfaces. The challenge in modeling 3D motion transparency is therefore to explain how two different motion signals can appear perceptually co-localized in the same region of space. Consequently, the estimation of 3D motion parameters requires either a multi-valued representation at each image point or the co-localization of more global surface descriptors, as shown in Fig. 2, where two 3D motions are grouped together to give the impression of lacy overlapping surfaces, regardless of object connectivity.

Fig. 2: Synthetic MVF illustrating the concept of transparent motion. (a) A 3D motion consisting of translation along the Z axis and rotation around it. (b) A 3D motion with the same translation along the Z axis but the opposite rotation direction around it. (c) A random combination of both MVFs, representing transparent motion. (d) Result of the motion segmentation.
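A transparent MVF of this kind can be simulated in a few lines and then separated again by a purely per-vector criterion. The sketch below uses assumed rotation and expansion rates and a simple sign test on the rotational component; it illustrates the concept only and is not the segmentation algorithm used here. Two opposite rotational fields around the Z axis are combined with a shared radial expansion (the image-plane trace of translation along Z) and interleaved at random.

```python
import numpy as np

rng = np.random.default_rng(0)
pts = rng.uniform(-1.0, 1.0, size=(400, 2))    # random image positions

omega, expand = 0.2, 0.1                       # assumed rotation / expansion rates
# Motion A: counter-clockwise rotation about Z plus radial expansion.
vA = np.stack([-omega * pts[:, 1] + expand * pts[:, 0],
                omega * pts[:, 0] + expand * pts[:, 1]], axis=1)
# Motion B: same expansion, opposite rotation sense.
vB = np.stack([ omega * pts[:, 1] + expand * pts[:, 0],
               -omega * pts[:, 0] + expand * pts[:, 1]], axis=1)

# Randomly interleave the two fields -> transparent MVF (cf. panel c).
label = rng.integers(0, 2, size=len(pts))
v = np.where(label[:, None] == 0, vA, vB)

# Segment each vector by the sign of the rotational component r x v,
# which is +omega*|r|^2 for motion A and -omega*|r|^2 for motion B.
curl = pts[:, 0] * v[:, 1] - pts[:, 1] * v[:, 0]
recovered = (curl < 0).astype(int)
print((recovered == label).mean())  # 1.0 -- perfect separation in this toy setup
```

A per-vector criterion like this succeeds precisely because no spatial coherence is assumed: every point can belong to either surface independently of its neighbors, which is what the multi-valued representation above requires.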


Do you have any questions or comments? Please contact: