Abstract
Current semi-supervised video object segmentation (VOS) methods often employ the entire feature map of a frame to predict object masks and update memory, which introduces significant redundant computation. To reduce this redundancy, we introduce a Region Aware Video Object Segmentation (RAVOS) approach, which predicts regions of interest (ROIs) for efficient object segmentation and memory storage. RAVOS includes a fast object motion tracker to predict object ROIs in the next frame. For efficient segmentation, object features are extracted based on the ROIs, and an object decoder is designed for object-level segmentation. For efficient memory storage, we propose motion path memory to filter out redundant context by memorizing only the features within the motion path of objects. In addition to RAVOS, we also propose a large-scale occluded VOS dataset, dubbed OVOS, to benchmark the performance of VOS models under occlusions. Evaluations on the DAVIS and YouTube-VOS benchmarks and our new OVOS dataset show that our method achieves state-of-the-art performance with significantly faster inference, e.g., 86.1 J&F at 42 FPS on DAVIS and 84.4 J&F at 23 FPS on YouTube-VOS. Project page: ravos.netlify.app.
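The abstract's core idea (extrapolate each object's ROI to the next frame, then restrict feature extraction to that region) can be sketched in a few lines. This is an illustrative stand-in, not the paper's learned motion tracker: `predict_roi`, `crop_features`, and the constant-velocity assumption and `margin` parameter are hypothetical simplifications for exposition.

```python
import numpy as np

def predict_roi(prev_box, curr_box, margin=0.1):
    """Linearly extrapolate an object's box (x1, y1, x2, y2) to the next frame.

    A constant-velocity sketch of ROI prediction; RAVOS uses a learned
    object motion tracker instead. `margin` expands the ROI to tolerate
    prediction error.
    """
    prev = np.asarray(prev_box, dtype=float)
    curr = np.asarray(curr_box, dtype=float)
    nxt = curr + (curr - prev)                    # constant-velocity step
    w, h = nxt[2] - nxt[0], nxt[3] - nxt[1]
    pad = margin * np.array([-w, -h, w, h])       # grow the box outward
    return nxt + pad

def crop_features(feat, roi):
    """Crop a feature map of shape (C, H, W) to an integer-clamped ROI,
    so the decoder only processes features inside the predicted region."""
    _, h, w = feat.shape
    x1 = max(int(roi[0]), 0)
    y1 = max(int(roi[1]), 0)
    x2 = min(int(np.ceil(roi[2])), w)
    y2 = min(int(np.ceil(roi[3])), h)
    return feat[:, y1:y2, x1:x2]

# Example: an object moved 2 px right between the last two frames.
feat = np.random.rand(256, 64, 64)
roi = predict_roi((10, 10, 20, 20), (12, 10, 22, 20))
obj_feat = crop_features(feat, roi)   # decode only this sub-region
```

Segmenting (and memorizing) only the cropped sub-region rather than the full 64×64 map is what yields the redundancy reduction the abstract describes.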
Original language | English |
---|---|
Pages (from-to) | 2639-2651 |
Number of pages | 13 |
Journal | IEEE Transactions on Image Processing |
Volume | 33 |
DOIs | |
Publication status | Published - 29 Mar 2024 |
Projects
- 1 Finished
- ARC Research Hub for Driving Farming Productivity and Disease Prevention
  Bennamoun, M. (Investigator 01) & Mian, A. (Investigator 02)
  ARC Australian Research Council
  1/01/19 → 31/12/23
  Project: Research
Research output
- 4 Citations
- 1 Preprint
- Region Aware Video Object Segmentation with Deep Motion Modeling
  Miao, B., Bennamoun, M., Gao, Y. & Mian, A., 21 Jul 2022, arXiv.
  Research output: Working paper › Preprint