Scale-Aware Feature Network for Weakly Supervised Semantic Segmentation

Research output: Contribution to journal › Article

Abstract

Weakly supervised semantic segmentation with image-level labels is of great significance since it alleviates the dependency on dense annotations. However, because it relies on image classification networks that can produce only sparse object localization maps, its performance falls far behind that of fully supervised semantic segmentation models. Inspired by the successful use of multi-scale features for improved performance in a wide range of visual tasks, we propose a Scale-Aware Feature Network (SAFN) for generating object localization maps. The proposed SAFN uses an attention module to learn the relative weights of multi-scale features in a modified fully convolutional network with dilated convolutions. This approach efficiently enlarges the receptive field and produces dense object localization maps. Our approach achieves mIoUs of 62.3% and 66.5% on the PASCAL VOC 2012 test set using VGG16-based and ResNet-based segmentation models, respectively, outperforming other state-of-the-art methods for the weakly supervised semantic segmentation task.
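The core idea described in the abstract — an attention module that learns per-scale weights and fuses multi-scale features — can be illustrated with a minimal sketch. This is a hypothetical NumPy illustration of softmax-weighted scale fusion, not the authors' implementation; the dilated-convolution backbone and how the scale scores are produced are omitted, and all names here are assumptions.

```python
import numpy as np

def scale_attention_fusion(features, scores):
    """Fuse multi-scale feature maps with per-pixel softmax attention.

    features: list of S arrays, each of shape (C, H, W), e.g. responses
              from branches with different dilation rates (assumed inputs).
    scores:   array of shape (S, H, W) with unnormalized per-pixel
              scale scores (in the paper, produced by an attention module).
    Returns the fused (C, H, W) map and the (S, H, W) attention weights.
    """
    # Numerically stable softmax over the scale axis.
    exp = np.exp(scores - scores.max(axis=0, keepdims=True))
    weights = exp / exp.sum(axis=0, keepdims=True)
    # Weighted sum of the scale branches, broadcasting weights over channels.
    fused = sum(w[None, :, :] * f for w, f in zip(weights, features))
    return fused, weights

# Toy usage: three scale branches over a 4x4 map with 2 channels.
features = [np.full((2, 4, 4), v) for v in (1.0, 2.0, 3.0)]
scores = np.zeros((3, 4, 4))          # equal scores -> equal weights
fused, weights = scale_attention_fusion(features, scores)
```

With equal scores the softmax assigns weight 1/3 to each scale, so the fused map is the plain average of the branches; learned scores would instead emphasize the scale best matching each pixel's object size.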

Original language: English
Article number: 9075085
Pages (from-to): 75957-75967
Number of pages: 11
Journal: IEEE Access
Volume: 8
DOIs
Publication status: Published - 21 Apr 2020

