Abstract
Existing Image Captioning (IC) systems model words as atomic units in captions and are unable to exploit the structural information within words. This makes rare words difficult to represent and out-of-vocabulary words impossible to represent. Moreover, to limit computational complexity, existing IC models operate over a modest-sized vocabulary of frequent words, so that the identity of rare words is lost. In this work we address this common limitation of IC systems in dealing with rare words in the corpora. We decompose words into smaller constituent units, ‘subwords’, and represent captions as sequences of subwords instead of words. This allows all words in the corpora to be represented with a significantly smaller subword vocabulary, leading to better parameter learning. Using subword language modeling, our captioning system improves various metric scores with a training vocabulary approximately 90% smaller than those of the baseline and various state-of-the-art word-level models. Our quantitative and qualitative results and analysis demonstrate the efficacy of the proposed approach.
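The abstract does not name the specific subword segmentation scheme used; byte-pair encoding (BPE) is one common way to build such a subword vocabulary. The sketch below is only an illustration of that general idea on assumed toy data, not the paper's method: the corpus, the number of merges, and the helper names `get_pair_counts` and `merge_pair` are hypothetical.

```python
import re
from collections import Counter

def get_pair_counts(vocab):
    """Count adjacent symbol pairs across the vocabulary, weighted by word frequency."""
    pairs = Counter()
    for word, freq in vocab.items():
        symbols = word.split()
        for left, right in zip(symbols, symbols[1:]):
            pairs[(left, right)] += freq
    return pairs

def merge_pair(pair, vocab):
    """Replace every adjacent occurrence of the pair with its merged symbol."""
    pattern = re.compile(r"(?<!\S)" + re.escape(" ".join(pair)) + r"(?!\S)")
    merged = "".join(pair)
    return {pattern.sub(merged, word): freq for word, freq in vocab.items()}

# Hypothetical toy caption vocabulary: each word is a space-separated symbol
# sequence ending in an end-of-word marker, mapped to its corpus frequency.
vocab = {
    "s u r f e r </w>": 6,
    "s u r f i n g </w>": 4,
    "s u r f b o a r d </w>": 3,
    "s k a t e r </w>": 2,
}

num_merges = 8  # illustrative; real systems learn thousands of merges
for _ in range(num_merges):
    pairs = get_pair_counts(vocab)
    if not pairs:
        break
    best = max(pairs, key=pairs.get)  # most frequent adjacent pair
    vocab = merge_pair(best, vocab)
    print("merged:", best)

print("segmented vocabulary:", vocab)
```

On this toy corpus the earliest merges recover the shared stem ‘surf’, so rarer surface forms such as ‘surfboard’ can be composed from shared subwords instead of requiring their own whole-word vocabulary entries, which is the effect the abstract describes.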
Original language | English |
---|---|
Title of host publication | Conference Proceedings 2021 IEEE Winter Conference on Applications of Computer Vision (WACV) |
Place of Publication | USA |
Publisher | IEEE, Institute of Electrical and Electronics Engineers |
Pages | 3539-3540 |
Number of pages | 2 |
ISBN (Electronic) | 9780738142661 |
DOIs | |
Publication status | Published - Jan 2021 |
Event | 2021 IEEE Winter Conference on Applications of Computer Vision, Virtual (Duration: 5 Jan 2021 → 9 Jan 2021) |
Conference
Conference | 2021 IEEE Winter Conference on Applications of Computer Vision |
---|---|
Abbreviated title | WACV 2021 |
Country/Territory | Virtual |
Period | 5/01/21 → 9/01/21 |