Text to Image Synthesis for Improved Image Captioning

Md Zakir Hossain, Ferdous Sohel, Mohd Fairuz Shiratuddin, Hamid Laga, Mohammed Bennamoun

Research output: Contribution to journal › Article › peer-review

5 Citations (Scopus)

Abstract

Generating textual descriptions of images has been an important topic in computer vision and natural language processing. A number of deep learning based techniques have been proposed for this task. These techniques are trained and tested on human-annotated images and require a large amount of training data to perform at their full potential. However, collecting images with human-generated captions is expensive and time-consuming. In this paper, we propose an image captioning method that uses both real and synthetic data for training and testing. We use a Generative Adversarial Network (GAN) based text-to-image generator to produce the synthetic images, and an attention-based image captioning model, trained on both real and synthetic images, to generate the captions. We report both qualitative results and quantitative results on commonly used evaluation metrics. The experiments show a two-fold benefit of the proposed approach: i) it demonstrates the effectiveness of image captioning on synthetic images, and ii) it further improves the quality of the captions generated for real images, understandably because additional images are available for training.
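The following is a minimal sketch, in PyTorch, of the training pipeline the abstract describes: a GAN-style text-to-image generator synthesises extra images from caption embeddings, and an attention-based captioner is then trained on the pooled real and synthetic images. The module names (TextToImageGenerator, AttentionCaptioner), layer sizes, and dummy tensors are illustrative assumptions, not the authors' implementation.

```python
# Hedged sketch of the pipeline: GAN-based text-to-image synthesis used to
# augment the training set of an attention-based captioner. Shapes and modules
# are placeholders, not the paper's architecture.
import torch
import torch.nn as nn

class TextToImageGenerator(nn.Module):
    """GAN-style generator: caption embedding + noise -> synthetic image."""
    def __init__(self, embed_dim=128, noise_dim=100, img_size=64):
        super().__init__()
        self.img_size = img_size
        self.net = nn.Sequential(
            nn.Linear(embed_dim + noise_dim, 512), nn.ReLU(),
            nn.Linear(512, 3 * img_size * img_size), nn.Tanh(),
        )

    def forward(self, cap_emb, noise):
        x = self.net(torch.cat([cap_emb, noise], dim=1))
        return x.view(-1, 3, self.img_size, self.img_size)

class AttentionCaptioner(nn.Module):
    """Attention-based captioner: attends over image regions at each step."""
    def __init__(self, vocab_size=1000, embed_dim=128, feat_dim=256):
        super().__init__()
        self.cnn = nn.Sequential(nn.Conv2d(3, feat_dim, 4, 4), nn.ReLU())  # toy encoder
        self.embed = nn.Embedding(vocab_size, embed_dim)
        self.attn = nn.MultiheadAttention(embed_dim, num_heads=4, kdim=feat_dim,
                                          vdim=feat_dim, batch_first=True)
        self.rnn = nn.GRU(embed_dim * 2, embed_dim, batch_first=True)
        self.out = nn.Linear(embed_dim, vocab_size)

    def forward(self, images, captions):
        feats = self.cnn(images)                    # (B, F, H', W')
        feats = feats.flatten(2).transpose(1, 2)    # (B, regions, F)
        emb = self.embed(captions)                  # (B, T, E)
        ctx, _ = self.attn(emb, feats, feats)       # attend over image regions
        h, _ = self.rnn(torch.cat([emb, ctx], dim=-1))
        return self.out(h)                          # (B, T, vocab)

# Toy training step on a mix of real and synthetic images.
B, T, vocab = 8, 12, 1000
gen, cap = TextToImageGenerator(), AttentionCaptioner(vocab_size=vocab)
opt = torch.optim.Adam(cap.parameters(), lr=1e-4)
loss_fn = nn.CrossEntropyLoss()

real_imgs = torch.randn(B, 3, 64, 64)        # stand-in for real dataset images
captions = torch.randint(0, vocab, (B, T))   # stand-in for tokenised captions
cap_emb = torch.randn(B, 128)                # stand-in for caption text embeddings

with torch.no_grad():                        # synthesise additional training images
    synth_imgs = gen(cap_emb, torch.randn(B, 100))

images = torch.cat([real_imgs, synth_imgs], dim=0)   # real + synthetic pool
targets = torch.cat([captions, captions], dim=0)     # teacher-forcing shift omitted

logits = cap(images, targets)
loss = loss_fn(logits.reshape(-1, vocab), targets.reshape(-1))
loss.backward(); opt.step()
print(f"toy loss: {loss.item():.3f}")
```

A real training loop would iterate over a captioned dataset (e.g. MSCOCO), use a pretrained CNN encoder and a properly trained text-to-image GAN; the point of the sketch is only that synthetic images enlarge the pool the captioner is trained on.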

Original language: English
Article number: 9416431
Pages (from-to): 64918-64928
Number of pages: 11
Journal: IEEE Access
Volume: 9
DOIs
Publication status: Published - 26 Apr 2021
