# Optical Flow

The materials provided here have been published in [7].

### Introduction

Optical flow is the estimation of the 2D apparent motion field between two consecutive images of an image sequence, which can be represented as a 2D vector field on the image plane. Its estimation is an ill-posed problem, as the motion is three-dimensional while the images are projections of the 3D scene onto a 2D plane. The most common assumption used in optical flow estimation is the brightness constancy assumption: the gray values of corresponding pixels in the two consecutive frames should be the same. This assumption fails in the case of illumination changes or when observing transparent or non-Lambertian materials [4]. Another source of ambiguity is the so-called aperture problem, which arises as a consequence of motion ambiguity when an object is viewed through an aperture. If an untextured object moves within the image, the motion within the object area cannot be recovered without additional information. Even at object edges, where the gray value of the background differs from that of the object, the motion can only be recovered in one dimension without additional information. The full 2D motion can only be computed where more information is available, e.g. at object corners or where texture is present.
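Formally, the brightness constancy assumption between times $t$ and $t+1$ reads $I(x+u, y+v, t+1) = I(x, y, t)$. A first-order Taylor expansion yields the linearized optical flow constraint

$I_{x}u + I_{y}v + I_{t} = 0$

where $I_{x}$, $I_{y}$ and $I_{t}$ denote the partial derivatives of the image. This is one equation in the two unknowns $(u, v)$, which is the aperture problem in algebraic form: only the flow component along the image gradient is determined.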

### Concept of Optical Flow

Due to the ill-posed character of optical flow, the problem needs to be reformulated for numerical treatment. Additional assumptions have to be introduced, most commonly smoothness of the optical flow field, known as regularization. Based on the regularization type, optical flow methods can be categorized into two general classes: feature-based and variational approaches.
Feature-based approaches compute the optical flow displacement for a pixel and its neighborhood independently of the optical flow solutions obtained at the other pixels of the image. These algorithms can again be categorized into pixel-accurate and sub-pixel-accurate optical flow approaches. Pixel-accurate optical flow approaches assign pixels of the input image to pixels of the output image. This is usually done by evaluating a pixel matching score based on the gray values in a pixel's neighborhood. Algorithms of this class can be understood as a combinatorial task, because the number of possible flow or displacement vectors for a certain pixel is bounded by the image size. Due to this combinatorial structure, pixel-accurate optical flow algorithms can be parallelized, yielding real-time efficiency on dedicated hardware. They have the nice property of robustness due to the restricted solution space (only discrete integer pixel positions are considered, so outliers are less likely), but at the same time suffer from accuracy limitations, i.e. the displacements are only pixel-discrete.

Assuming that the gray value image sequence has a continuous domain, the change of a pixel's gray value between two time instances can be computed by evaluating the image gradient. The early work of Lucas and Kanade [1] employs the linearized version of the optical flow constraint:

$\min_{u,v} \left\{ \sum_{x' \in N(x)} \left( I_{t}(x') + u I_{x}(x') + v I_{y}(x') \right)^2 \right\}$
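As an illustration, a minimal NumPy sketch of this local least-squares solve (the window size, the gradient scheme, and the function name are our own illustrative choices, not part of the original formulation):

```python
import numpy as np

def lucas_kanade_at(I1, I2, x, y, win=3):
    """Minimal Lucas-Kanade sketch: solve the linearized constraint
    I_t + u*I_x + v*I_y = 0 in the least-squares sense over the
    neighborhood N(x) of a single pixel (x, y)."""
    Ix = np.gradient(I1, axis=1)   # spatial derivative in x
    Iy = np.gradient(I1, axis=0)   # spatial derivative in y
    It = I2 - I1                   # temporal derivative
    ys = slice(y - win, y + win + 1)
    xs = slice(x - win, x + win + 1)
    # Stack one linear equation per window pixel: A [u, v]^T = b
    A = np.stack([Ix[ys, xs].ravel(), Iy[ys, xs].ravel()], axis=1)
    b = -It[ys, xs].ravel()
    (u, v), *_ = np.linalg.lstsq(A, b, rcond=None)
    return u, v
```

Note that the 2x2 normal-equation matrix $A^{T}A$ becomes singular in untextured regions, which is exactly the aperture problem described above.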

Variational approaches take the optical flow solutions of neighboring pixels into account and impose smoothness assumptions on the optical flow field. Horn and Schunck [2] proposed such an approach to cope with the under-determined optical flow constraint. The authors resorted to regularization, i.e. smoothness, of the resulting flow field. The smoothness was introduced by penalizing the derivatives of the optical flow field, yielding an energy that is minimized by variational methods:

$\min_{u,v} \left\{ \sum \left( \left( I_{t} + u I_{x} + v I_{y} \right)^{2} + \lambda \left( \left| \nabla u \right|^{2} + \left| \nabla v \right|^{2} \right) \right) \right\}$
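A minimal sketch of the classical iterative scheme for such an energy, assuming Jacobi-type updates in which the Laplacian is approximated by the difference between a 4-neighbor average and the central value (the value of λ, the gradient scheme, the border handling, and the iteration count are illustrative choices):

```python
import numpy as np

def neighbor_avg(f):
    """Average of the 4-neighbors, with replicated borders."""
    p = np.pad(f, 1, mode="edge")
    return 0.25 * (p[:-2, 1:-1] + p[2:, 1:-1] + p[1:-1, :-2] + p[1:-1, 2:])

def horn_schunck(I1, I2, lam=1.0, n_iter=300):
    """Minimal Horn-Schunck sketch. With the Laplacian approximated
    as (u_bar - u), the Euler-Lagrange equations of the energy lead
    to the classical update
        u = u_bar - Ix * (Ix*u_bar + Iy*v_bar + It) / (lam + Ix^2 + Iy^2)
    and analogously for v."""
    Ix = np.gradient(I1, axis=1)
    Iy = np.gradient(I1, axis=0)
    It = I2 - I1
    u = np.zeros_like(I1)
    v = np.zeros_like(I1)
    for _ in range(n_iter):
        u_bar, v_bar = neighbor_avg(u), neighbor_avg(v)
        common = (Ix * u_bar + Iy * v_bar + It) / (lam + Ix ** 2 + Iy ** 2)
        u = u_bar - Ix * common
        v = v_bar - Iy * common
    return u, v
```

In contrast to the local method above, the smoothness term fills in flow values in untextured regions from their neighborhood, which is what makes the resulting field dense.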

### Ongoing Research on Motion Estimation in GET Lab

In recent years, a vast number of approaches that estimate a dense flow field have been proposed. However, accuracy and performance still need to be improved, and challenging problems remain, such as sequences with large displacements or illumination changes. Our research focuses on obtaining accurate optical flow at both large and small scales by combining local and global approaches, integrated with point correspondences to refine the optical flow results while preserving the motion boundaries. The second direction of our research is to make use of stereo vision for estimating the motion in 3D (scene flow).

### The Motion Estimation Model
Local methods such as Lucas-Kanade are more robust under noise, while global techniques such as Horn-Schunck yield dense flow fields. Bruhn et al. [3] combined the important advantages of local and global approaches into the so-called CLG (combined local-global) method, which yields dense flow fields that are robust against noise. Our work is based on the CLG approach.
Our contribution is the integration of the local approach with a TV-L1 (total variation) regularization in the energy function:

$E =\sum \left[\psi\left( w^{T}J_{\rho} \left(\nabla_3 f \right )w \right) + \lambda_1 \left(\psi\left(\nabla u \right) + \psi\left(\nabla v \right)\right) + \lambda_2 \left(\left(u-\hat{u}\right)^2 + \left(v-\hat{v}\right)^2\right)\right]$

Optimizing this energy function w.r.t. (u, v) is done with a dual approach that splits the energy into two different sub-problems. The first sub-problem depends on the data term; it is solved point-wise via the Euler-Lagrange equation and then optimized by least-squares minimization. The second sub-problem depends on the smoothness term and is optimized using a fixed-point iteration scheme. Optimizing these two sub-problems amounts to finding the optimal (u, v) from the first one and performing a TV-L1 denoising of (u, v) in the second one.
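Our reading of this splitting, keeping the notation of the energy function and writing the smoothness term on the auxiliary flow $(\hat{u}, \hat{v})$ for clarity (the exact assignment of variables to the sub-problems is an assumption here), is an alternation between

$E_{1}(u,v) = \psi\left( w^{T}J_{\rho}\left(\nabla_3 f\right)w \right) + \lambda_2 \left(\left(u-\hat{u}\right)^2 + \left(v-\hat{v}\right)^2\right)$

minimized point-wise with $(\hat{u}, \hat{v})$ fixed, and

$E_{2}(\hat{u},\hat{v}) = \lambda_1 \left(\psi\left(\nabla \hat{u}\right) + \psi\left(\nabla \hat{v}\right)\right) + \lambda_2 \left(\left(u-\hat{u}\right)^2 + \left(v-\hat{v}\right)^2\right)$

which is a TV-L1 denoising of $(u, v)$ with $(u, v)$ fixed.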
The energy function we use is isotropic and propagates the flow in all directions regardless of local image properties. To enhance the propagation, a weighted median filter is applied, where the weighting function is computed from a spatio-temporal image segmentation. The optimization is embedded in a coarse-to-fine strategy, in which the optical flow values are propagated from the coarse levels to the finer ones. This propagation relies on interpolation, which causes the loss of the motion of small image details. A solution to this problem is to use point correspondences: an accurate matching algorithm provides correspondences, which are then used to refine the optical flow values. In our work we use the modified census transform to obtain the correspondences [5].
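For illustration, the core operation of such a filter, a weighted median, can be sketched as follows (the segmentation-based weights themselves are not reproduced here; with uniform weights this reduces to an ordinary median):

```python
import numpy as np

def weighted_median(values, weights):
    """Weighted median: the smallest value whose cumulative weight
    reaches half of the total weight. In flow filtering, `values`
    would be the u (or v) components in a pixel's neighborhood and
    `weights` would come from the spatio-temporal segmentation."""
    values = np.asarray(values, dtype=float)
    weights = np.asarray(weights, dtype=float)
    order = np.argsort(values)
    cum = np.cumsum(weights[order])
    idx = np.searchsorted(cum, 0.5 * cum[-1])
    return values[order][idx]
```

Applied per pixel to the flow field, this suppresses outliers, while large weights on neighbors from the same segment help preserve motion boundaries.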

### New Topics

• Gesture recognition using optical flow. A computer can understand human behavior and execute an appropriate action. For example, one can play games or operate software using hand signs. Furthermore, a robot could be directed or operated using hand signs.
• 2D motion segmentation and moving object detection. Given the optical flow, the goal is to differentiate between object motion and camera ego-motion. As a result, the relative velocity of each moving object can be extracted and used as valuable information for other algorithms, e.g. for navigation.
• Real-time 3D scene flow using a 3D sensor (Kinect) or a stereo camera. Given the optical flow and a depth map, the goal is to obtain a 3D motion vector for each 3D point.
• Group activity recognition. Action recognition based on optical flow for a group of people.

### References

1. Lucas, B., Kanade, T.: An Iterative Image Registration Technique with an Application to Stereo Vision. In: Image Understanding Workshop (1981)
2. Horn, B., Schunck, B.: Determining Optical Flow. Artificial Intelligence, vol. 17, Elsevier (1981)
3. Bruhn, A., Weickert, J., Schnoerr, C.: Lucas/Kanade meets Horn/Schunck: Combining Local and Global Optic Flow Methods. International Journal of Computer Vision, vol. 61, Springer (2005)
4. Wedel, A., Cremers, D.: Stereo Scene Flow for 3D Motion Analysis. Springer (2011)
5. Mohamed, M.A., Mertsching, B.: TV-L1 Optical Flow Estimation with Image Details Recovering Based on Modified Census Transform. In: International Symposium on Visual Computing (June 2012)
6. Mohamed, M., Rashwan, H., Mertsching, B., Garcia, M., Puig, D.: Illumination-Robust Optical Flow Using Local Directional Pattern. IEEE Transactions on Circuits and Systems for Video Technology, vol. 24, no. 9, pp. 1499-1508 (2014). ISSN 1051-8215
7. Rashwan, H., Mohamed, M., Garcia, M., Mertsching, B., Puig, D.: Illumination Robust Optical Flow Model Based on Histogram of Oriented Gradients. In: German Conference on Pattern Recognition, Lecture Notes in Computer Science, Springer (2013)