FG/BG Segmentation

Foreground/background segmentation is a crucial pre-processing step in many applications: it separates moving objects (the foreground) from the background scene. Many techniques use this operation as part of their workflow. Tracking algorithms, for instance, may restrict their search to foreground regions to detect moving objects (and thereby speed up object matching) or to follow objects in space and time.

Humans can easily distinguish moving objects from a background scene, yet this remains one of the most challenging tasks in computer vision, despite the many techniques available for detecting moving objects in indoor and outdoor scenes. A detection algorithm is usually expected to provide accurate detection of moving objects (in space and time), robustness to changing environmental conditions (especially changes in illumination), real-time processing, and low latency. The last two properties are essential for tracking applications: as explained above, tracking approaches perform best, or at least benefit, when they can operate at the camera frame rate.

During my research, I developed novel low-level foreground detection algorithms for real-time applications, focusing on difficult and changing illumination conditions. We propose two approaches. The main idea is to apply a decision-tree-like scheme to foreground/background segmentation, i.e. we compute statistical measures at each node of the tree to classify a pixel as either a foreground or a background pixel. As the statistical background model, we use a long-term and a short-term weighted average based on different learning factors. In the proposed approaches, we use either image gradients or image intensities as statistical features of foreground and background regions. This research has been published at Advanced Concepts for Intelligent Vision Systems (ACIVS).
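To illustrate the idea of combining a long-term and a short-term weighted average, the sketch below shows a minimal per-pixel classifier in Python/NumPy. The learning rates, the threshold, and the simple two-stage decision rule are placeholder assumptions for illustration only; the published approaches use image gradients or intensities as statistical features and a more elaborate decision tree.

import numpy as np

class FGBGSegmenter:
    """Minimal sketch: per-pixel classification against a long- and a
    short-term running-average background model. All parameters here are
    illustrative assumptions, not the values of the published method."""

    def __init__(self, alpha_long=0.001, alpha_short=0.05, threshold=25.0):
        self.alpha_long = alpha_long    # slow adaptation -> stable background
        self.alpha_short = alpha_short  # fast adaptation -> recent scene changes
        self.threshold = threshold      # intensity-difference decision threshold
        self.bg_long = None
        self.bg_short = None

    def apply(self, frame):
        """Return a binary foreground mask for a grayscale frame (2-D array)."""
        frame = frame.astype(np.float32)
        if self.bg_long is None:
            # Initialize both models from the first frame; nothing is foreground yet.
            self.bg_long = frame.copy()
            self.bg_short = frame.copy()
            return np.zeros(frame.shape, dtype=np.uint8)

        # Cascaded decision: a pixel is foreground only if it deviates from
        # both the long-term and the short-term background estimate.
        diff_long = np.abs(frame - self.bg_long)
        diff_short = np.abs(frame - self.bg_short)
        fg_mask = ((diff_long > self.threshold) &
                   (diff_short > self.threshold)).astype(np.uint8)

        # Update both weighted averages with their respective learning factors.
        self.bg_long = (1 - self.alpha_long) * self.bg_long + self.alpha_long * frame
        self.bg_short = (1 - self.alpha_short) * self.bg_short + self.alpha_short * frame
        return fg_mask

In practice, grayscale camera frames are fed to apply() one by one; because each step is a few element-wise array operations, this kind of model can run at camera frame rate, which is what makes it attractive for the tracking scenarios described above.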

Example videos