LCEval: Learned Composite Metric for Caption Evaluation

Research output: Contribution to journal › Article › peer-review



Automatic evaluation metrics are of fundamental importance in the development and fine-grained analysis of captioning systems. While current evaluation metrics tend to achieve an acceptable correlation with human judgements at the system level, they fail to do so at the caption level. In this work, we propose a neural network-based learned metric to improve caption-level evaluation. To gain deeper insight into the parameters that impact a learned metric's performance, this paper investigates the relationship between different linguistic features and the caption-level correlation of the learned metrics. We also compare metrics trained with different training examples to measure the variations in their evaluation. Moreover, we perform a robustness analysis, which highlights the sensitivity of learned and handcrafted metrics to various sentence perturbations. Our empirical analysis shows that our proposed metric not only outperforms existing metrics in terms of caption-level correlation but also shows a strong system-level correlation against human assessments.
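The idea of a learned composite metric can be illustrated with a minimal sketch: handcrafted linguistic features extracted from a candidate caption and its references are fed to a small neural network that outputs a quality score. All names, the feature set, and the network shape below are hypothetical illustrations, not the architecture or features used in the paper.

```python
import numpy as np

def ngram_overlap_features(candidate, references):
    # Hypothetical handcrafted features: unigram and bigram
    # precision/recall of the candidate against the references.
    cand = candidate.lower().split()
    feats = []
    for n in (1, 2):
        grams = lambda toks: {tuple(toks[i:i + n]) for i in range(len(toks) - n + 1)}
        c = grams(cand)
        r = set().union(*(grams(ref.lower().split()) for ref in references))
        feats.append(len(c & r) / max(len(c), 1))  # n-gram precision
        feats.append(len(c & r) / max(len(r), 1))  # n-gram recall
    return np.array(feats)

class CompositeMetric:
    """Tiny MLP mapping a feature vector to a quality score in (0, 1).
    Weights are random here; a real learned metric would train them
    against human judgements."""

    def __init__(self, in_dim=4, hidden=8, seed=0):
        rng = np.random.default_rng(seed)
        self.W1 = rng.normal(scale=0.5, size=(in_dim, hidden))
        self.b1 = np.zeros(hidden)
        self.W2 = rng.normal(scale=0.5, size=hidden)
        self.b2 = 0.0

    def score(self, candidate, references):
        x = ngram_overlap_features(candidate, references)
        h = np.tanh(x @ self.W1 + self.b1)          # hidden layer
        z = h @ self.W2 + self.b2                   # output logit
        return float(1.0 / (1.0 + np.exp(-z)))     # sigmoid -> (0, 1)
```

A caption-level metric of this form can then be evaluated by correlating its per-caption scores with human ratings, which is the setting the paper analyses.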

Original language: English
Pages (from-to): 1586-1610
Number of pages: 25
Journal: International Journal of Computer Vision
Issue number: 10
Publication status: Published - 1 Oct 2019


