Multiple Pedestrian Tracking from Monocular Videos in an Interacting Multiple Model Framework

Zhengqiang Jiang, Du Q. Huynh

Research output: Contribution to journal › Article › peer-review

19 Citations (Scopus)


We present a multiple pedestrian tracking method for monocular videos captured by a fixed camera in an Interacting Multiple Model (IMM) framework. Our tracking method involves multiple IMM trackers running in parallel, tied together by a robust data association component. We investigate two data association strategies that take into account both the target appearance and motion errors. We use a 4-dimensional colour histogram as the appearance model for each pedestrian returned by a people detector based on Histogram of Oriented Gradients (HOG) features. Short-term occlusions and false negative errors from the detector are handled using a sliding window of video frames in which tracking persists in the absence of observations. Our method has been evaluated and compared, both qualitatively and quantitatively, with four state-of-the-art visual tracking methods on benchmark video databases. The experiments demonstrate that, on average, our tracking method outperforms these four methods.
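To illustrate the IMM machinery underlying each per-target tracker, the sketch below implements one generic IMM cycle (mixing, model-matched Kalman filtering, model-probability update, and combination). This is a minimal textbook IMM with two hypothetical constant-velocity models that differ only in process noise; the motion models, noise values, and the Markov transition matrix `Pi` are illustrative assumptions, not the specific parameters used in the paper.

```python
import numpy as np

def kf_step(x, P, z, F, Q, H, R):
    """One Kalman predict + update; returns state, covariance, and
    the Gaussian likelihood of the measurement innovation."""
    x = F @ x
    P = F @ P @ F.T + Q
    y = z - H @ x                          # innovation
    S = H @ P @ H.T + R                    # innovation covariance
    K = P @ H.T @ np.linalg.inv(S)         # Kalman gain
    x = x + K @ y
    P = (np.eye(len(x)) - K @ H) @ P
    L = np.exp(-0.5 * y @ np.linalg.solve(S, y)) / \
        np.sqrt(np.linalg.det(2.0 * np.pi * S))
    return x, P, L

def imm_step(xs, Ps, mu, z, models, Pi, H, R):
    """One IMM cycle over M models: mix, filter, reweight, combine."""
    M = len(models)
    cbar = Pi.T @ mu                       # predicted model probabilities
    # 1. Mixing: blend the model-conditioned estimates.
    x0, P0 = [], []
    for j in range(M):
        w = Pi[:, j] * mu / cbar[j]        # mixing weights mu_{i|j}
        xj = sum(w[i] * xs[i] for i in range(M))
        Pj = sum(w[i] * (Ps[i] + np.outer(xs[i] - xj, xs[i] - xj))
                 for i in range(M))
        x0.append(xj); P0.append(Pj)
    # 2. Model-matched Kalman filtering.
    L = np.zeros(M)
    for j, (F, Q) in enumerate(models):
        xs[j], Ps[j], L[j] = kf_step(x0[j], P0[j], z, F, Q, H, R)
    # 3. Model probability update from measurement likelihoods.
    mu = cbar * L
    mu = mu / mu.sum()
    # 4. Combined (probability-weighted) state estimate.
    x = sum(mu[j] * xs[j] for j in range(M))
    return xs, Ps, mu, x
```

A typical setup tracks a 2-D position/velocity state `[x, y, vx, vy]` with position-only measurements; in a full tracker, each pedestrian hypothesis from the data association step would drive one such IMM instance.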

Original language: English
Pages (from-to): 1361-1375
Number of pages: 15
Journal: IEEE Transactions on Image Processing
Issue number: 3
Publication status: Published - 2018


