A Probabilistic Approach for Vision-Based Fire Detection in Videos

Automated fire detection is an active research topic in computer vision. In this paper, we propose and analyze a new method for identifying fire in videos. Computer vision-based fire detection algorithms are usually applied in closed-circuit television surveillance scenarios with a controlled background. In contrast, the proposed method can be applied not only to surveillance but also to automatic video classification for the retrieval of fire catastrophes in databases of newscast content. In the latter case, there are large variations in fire and background characteristics depending on the video instance. The proposed method analyzes the frame-to-frame changes of specific low-level features describing potential fire regions: color, area size, surface coarseness, boundary roughness, and skewness within estimated fire regions. Because of the flickering and random characteristics of fire, these features are powerful discriminants. The behavioral change of each of these features is evaluated, and the results are then combined by a Bayes classifier for robust fire recognition. In addition, a priori knowledge about fire events captured in videos is used to significantly improve the classification results: in edited newscast videos, the fire region is usually located near the center of the frame, and this fact is used to model the probability of occurrence of fire as a function of position. Experiments illustrate the applicability of the method.
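The two key ideas above, combining per-feature evidence with a Bayes classifier and weighting by a position-dependent prior, can be sketched as follows. This is a minimal illustration, not the paper's exact model: the naive-independence assumption between features, the Gaussian form of the center prior, and the `sigma_frac` parameter are all assumptions introduced here.

```python
import numpy as np

def bayes_fire_posterior(likelihoods_fire, likelihoods_nofire, prior_fire=0.5):
    """Naive Bayes combination of independent feature evidence.

    Each entry of the input lists is a hypothetical likelihood of the
    observed frame-to-frame behavior of one feature (color, area size,
    surface coarseness, boundary roughness, skewness) under the fire /
    no-fire hypothesis, assumed to be estimated from training data.
    """
    joint_fire = prior_fire * np.prod(likelihoods_fire)
    joint_nofire = (1.0 - prior_fire) * np.prod(likelihoods_nofire)
    return joint_fire / (joint_fire + joint_nofire)

def center_prior(x, y, width, height, sigma_frac=0.25):
    """Spatial prior: in edited newscast videos, fire regions tend to lie
    near the frame center; model this with an isotropic Gaussian
    (an illustrative assumption), peaking at 1.0 at the center."""
    cx, cy = width / 2.0, height / 2.0
    sigma = sigma_frac * min(width, height)
    d2 = (x - cx) ** 2 + (y - cy) ** 2
    return np.exp(-d2 / (2.0 * sigma ** 2))
```

A region whose features behave like fire and which sits near the frame center would then receive a high posterior, e.g. `bayes_fire_posterior(lf, lnf, prior_fire=center_prior(x, y, w, h))`.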

Existing System:

This interest has been mainly motivated by broadcasters' need to build large digital archives of structured assets ready for search, retrieval, and reuse. News networks spend a significant amount of time and money searching their archives for events related to a newly occurred one; retrieval in this context must run faster than real time. As a consequence, this task has recently been the subject of large research projects. Within catastrophe news, fire events are among the most common topics, along with bombings and floods.






Proposed System:

In this paper, we propose an efficient vision-based event detection method for identifying fire in videos, extending our previously presented work. Most vision-based fire detection techniques proposed in the literature target surveillance applications with static cameras and, consequently, a reasonably controlled or static background. Those that do not typically rely on filter banks, frequency transforms, or motion tracking, which require more processing time and make them unsuitable for video retrieval.
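An efficient pipeline of this kind usually begins with a cheap per-pixel color test to select candidate fire regions before any temporal analysis. The rule below (red channel dominant and bright, R > G > B) and the `r_min` threshold are illustrative assumptions commonly seen in color-based fire detection, not the specific model used in the paper.

```python
import numpy as np

def fire_colored_mask(frame_rgb, r_min=190):
    """Return a boolean mask of candidate fire-colored pixels.

    Assumed heuristic: fire pixels have a dominant, bright red channel,
    i.e. R > G > B and R >= r_min. `r_min` is an illustrative threshold.
    """
    r = frame_rgb[..., 0].astype(np.int32)
    g = frame_rgb[..., 1].astype(np.int32)
    b = frame_rgb[..., 2].astype(np.int32)
    return (r > g) & (g > b) & (r >= r_min)
```

Because this step is a few vectorized comparisons per frame, it keeps the overall method fast enough for retrieval workloads; the more expensive feature analysis then runs only on the masked candidate regions.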