Continuous Region-Based Processing of Spatiotemporal Saliency

J. Tünnermann and B. Mertsching


This paper describes a region-based attention approach to motion saliency, which is important for systems that perceive and interact with dynamic environments. Frames are collected to form volumes, which are sliced into stacks of spatiotemporal images. Color segmentation is applied to these images, and the orientations of the resulting regions are used to calculate their prominence in a spatiotemporal context. The saliency is then projected back into image space. Tests with different inputs produced results comparable to those of other state-of-the-art methods. We also demonstrate how top-down influence can affect the processing in order to attend to objects that move in a particular direction. The model constitutes a framework for the later integration of spatiotemporal and spatial saliency as independent streams, which respect different requirements regarding resolution and timing.
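To make the described pipeline concrete, the sketch below illustrates the general idea in Python under strong simplifying assumptions: a naive color quantization with connected-component labeling stands in for the paper's segmentation stage, only x-t slices are processed, and the saliency score is taken as the deviation of a region's orientation from the time axis. All names (e.g. spatiotemporal_saliency) are illustrative and do not reflect the authors' implementation.

```python
# Minimal sketch of the slicing / segmentation / orientation / back-projection
# pipeline described in the abstract. Assumptions are noted inline; this is
# not the authors' method, only an illustration of the processing steps.
import numpy as np
from scipy import ndimage


def spatiotemporal_saliency(frames, n_colors=8):
    """frames: (T, H, W, 3) uint8 video volume -> (H, W) saliency map."""
    volume = np.asarray(frames)
    T, H, W, _ = volume.shape
    saliency = np.zeros((H, W), dtype=float)

    # Slice the volume into x-t spatiotemporal images (one per image row y).
    for y in range(H):
        slice_xt = volume[:, y, :, :]  # shape (T, W, 3)

        # Crude stand-in for color segmentation: quantize colors, then label
        # connected components per quantized color.
        quant = (slice_xt // (256 // n_colors)).astype(np.int32)
        codes = quant[..., 0] * n_colors**2 + quant[..., 1] * n_colors + quant[..., 2]
        labels = np.zeros_like(codes)
        next_label = 1
        for c in np.unique(codes):
            comp, n = ndimage.label(codes == c)
            labels[comp > 0] = comp[comp > 0] + next_label - 1
            next_label += n

        # Estimate each region's orientation in the t-x plane from its
        # second central moments.
        for r in range(1, next_label):
            ts, xs = np.nonzero(labels == r)
            if ts.size < 10:
                continue
            t0, x0 = ts.mean(), xs.mean()
            mu20 = np.mean((ts - t0) ** 2)
            mu02 = np.mean((xs - x0) ** 2)
            mu11 = np.mean((ts - t0) * (xs - x0))
            theta = 0.5 * np.arctan2(2 * mu11, mu20 - mu02)

            # Stationary objects trace stripes parallel to the time axis;
            # deviation from that orientation indicates motion -> salient.
            score = abs(np.sin(theta))

            # Project the region's saliency back into image space at the
            # positions it occupies in the most recent frame.
            last = ts == ts.max()
            saliency[y, xs[last]] = np.maximum(saliency[y, xs[last]], score)

    return saliency
```

In this simplified form, slow or stationary regions produce near-vertical traces (score close to zero), while fast-moving regions produce strongly slanted traces (score close to one); a full implementation would also process y-t slices and combine both projections.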