Teknomo–Fernandez algorithm

From Wikipedia, the free encyclopedia
The TF algorithm produces the background image from a video of a street with many pedestrians crossing.

The Teknomo–Fernandez algorithm (TF algorithm) is an efficient algorithm for generating the background image of a given video sequence.[1]

Assuming that, at every pixel position, the background is depicted in the majority of the frames, the algorithm generates the background image of a video using only a small number of binary operations.[2]
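
The basic binary operation can be illustrated as a per-pixel bitwise majority of three frames: each bit of the result is set exactly when it is set in at least two of the three frames. The following Python sketch is illustrative only (not the authors' reference implementation) and assumes the frames are NumPy arrays of equal shape holding unsigned-integer pixel values.

    import numpy as np

    def majority_of_three(f1, f2, f3):
        # Bitwise (modal-bit) majority: each bit of the result is 1
        # exactly when it is 1 in at least two of the three frames.
        return (f1 & f2) | (f1 & f3) | (f2 & f3)

    # Illustrative use on three 8-bit grayscale frames of the same size.
    f1 = np.random.randint(0, 256, (480, 640), dtype=np.uint8)
    f2 = np.random.randint(0, 256, (480, 640), dtype=np.uint8)
    f3 = np.random.randint(0, 256, (480, 640), dtype=np.uint8)
    background_estimate = majority_of_three(f1, f2, f3)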

History

People tracking from videos usually involves some form of background subtraction to segment the foreground from the background. Once the foreground images are extracted, the desired algorithms (such as those for motion tracking and facial recognition) can then be executed on these images.[2]

However, background subtraction requires that the background image is already available, and unfortunately this is not always the case. Traditionally, the background image is obtained, manually or automatically, from video frames in which no foreground objects appear. More recently, automatic background generation through object detection, median filtering, medoid filtering, approximated median filtering, linear predictive filtering, non-parametric modelling, Kalman filtering, and adaptive smoothening has been suggested; however, most of these methods have high computational complexity.[2]

The Teknomo–Fernandez algorithm is also an automatic background generation algorithm. Its advantages, however, are its computational speed of only O(R) time, where R is the resolution of a frame, and the accuracy it attains within a manageable number of frames. At least three frames from the video are needed to produce the background image, assuming that, for every pixel position, the background is depicted in the majority of the frames. Furthermore, it works on both grayscale and color videos.[2]
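
As a rough sketch of how more than three frames can be incorporated, the three-frame bitwise majority can be applied hierarchically: random triples of frames are reduced to candidate backgrounds, and triples of those candidates are reduced again at the next level, so that the final image reflects a majority taken over many frames. The Python sketch below is illustrative only and simplifies the published scheme (for instance, the number of images kept at each level is fixed in the original algorithm).

    import numpy as np

    def _maj3(a, b, c):
        # Bitwise majority of three frames, as in the sketch above.
        return (a & b) | (a & c) | (b & c)

    def tf_background_sketch(frames, levels=2, seed=0):
        # Hierarchical reduction: at each level, combine random triples of
        # images from the previous level with the bitwise majority.
        rng = np.random.default_rng(seed)
        current = list(frames)
        for _ in range(levels):
            if len(current) < 3:
                break
            reduced = []
            for _ in range(max(1, len(current) // 3)):
                i, j, k = rng.choice(len(current), size=3, replace=False)
                reduced.append(_maj3(current[i], current[j], current[k]))
            current = reduced
        return current[0]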

Assumptions

  • The camera is stationary.
  • The light of the environment changes only slowly relative to the motions of the people in the scene.
  • People do not occupy the same place in the scene for most of the time.

More generally, the algorithm works whenever the following single important assumption holds:

For each pixel position, the majority of the pixel values in the entire video contain the pixel value of the actual background image (at that position).[2]

As long as each part of the background is depicted in the majority of the frames, the entire background image need not appear in any single frame, and the algorithm is still expected to work accurately.[2]

  1. ^ Abu, Patricia Angela; Fernandez, Proceso. "Extendibility of the Teknomo–Fernandez Algorithm for Background Image Generation" (PDF): 28–37.
  2. ^ a b c d e f Teknomo, Kardi; Fernandez, Proceso. "Background Image Generation Using Boolean Operations".