Probability-based Framework to Fuse Temporal Consistency and Semantic Information for Background Segmentation

Zhi Zeng, Ting Wang, Fulei Ma, Liang Zhang, Peiyi Shen, Syed Afaq Ali Shah, Mohammed Bennamoun

Research output: Contribution to journal › Article › peer-review

2 Citations (Scopus)

Abstract

Fusing temporal consistency and semantic information for background segmentation with deep learning, given only limited foreground information, is an underinvestigated problem. In this paper, we explore the relation between temporal consistency and semantic information based on the law of total probability. A highly concise framework is proposed to fuse these two types of information. A theoretical proof shows that the proposed framework is more accurate than either the temporal consistency-based model or the semantic information-based model, and that each of these models is a special case of the proposed framework. The proposed framework is a white-box framework that can easily be embedded into a deep neural network as a merging layer. In the proposed model, only a few parameters must be learned, which substantially reduces the need for a large dataset. In addition, these interpretable parameters reflect our understanding of the background and can be applied to a wide range of environments. Extensive evaluations indicate the promising performance of the proposed method. Our code and trained weights for the experiments are available on GitHub. (We encourage the reader to run the program for a better understanding of the proposed method.)
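The fusion idea described in the abstract — combining a per-pixel foreground probability from a temporal-consistency (background-subtraction) cue with one from a semantic-segmentation cue via the law of total probability — can be sketched as below. This is an illustrative sketch only: the function name `fuse_total_probability` and the parameters `a_fg`/`a_bg` (standing in for the paper's few learned, interpretable parameters) are assumptions, not the authors' actual implementation.

```python
import numpy as np

def fuse_total_probability(p_temporal, p_semantic, a_fg=0.9, a_bg=0.2):
    """Fuse per-pixel foreground probabilities via the law of total probability.

    p_temporal : foreground probability from the temporal-consistency model.
    p_semantic : foreground probability from the semantic-segmentation model.
    a_fg, a_bg : hypothetical learned parameters -- how much the temporal cue
                 is trusted when the semantic cue indicates foreground (a_fg)
                 or background (a_bg).
    """
    # Conditioning on the semantic label S:
    #   P(FG) = P(FG | S=fg) P(S=fg) + P(FG | S=bg) P(S=bg)
    # Each conditional blends the temporal estimate with a semantic prior.
    p_fg_given_sem_fg = a_fg * p_temporal + (1.0 - a_fg) * 1.0  # lean foreground
    p_fg_given_sem_bg = a_bg * p_temporal + (1.0 - a_bg) * 0.0  # lean background
    return p_fg_given_sem_fg * p_semantic + p_fg_given_sem_bg * (1.0 - p_semantic)

# Example: a weak temporal foreground cue reinforced by a strong semantic cue.
fused = fuse_total_probability(np.array([0.6]), np.array([0.95]))
```

Note that with `a_fg = a_bg = 1.0` the fused result reduces to `p_temporal` alone, consistent with the abstract's claim that each single-cue model is a special case of the framework.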

Original language: English
Pages (from-to): 740-754
Number of pages: 15
Journal: IEEE Transactions on Multimedia
Volume: 24
Early online date: 2021
DOIs
Publication status: Published - 2022

