LCEval: Learned Composite Metric for Caption Evaluation

Naeha Sharif, Lyndon White, Mohammed Bennamoun, Wei Liu, Syed Afaq Ali Shah

Research output: Contribution to journal › Article

Abstract

Automatic evaluation metrics are of fundamental importance to the development and fine-grained analysis of captioning systems. While current evaluation metrics tend to achieve an acceptable correlation with human judgements at the system level, they fail to do so at the caption level. In this work, we propose a neural network-based learned metric to improve caption-level evaluation. To gain deeper insight into the parameters that impact a learned metric's performance, we investigate the relationship between different linguistic features and the caption-level correlation of learned metrics. We also compare metrics trained with different training examples to measure the variation in their evaluation. Moreover, we perform a robustness analysis, which highlights the sensitivity of learned and handcrafted metrics to various sentence perturbations. Our empirical analysis shows that the proposed metric not only outperforms existing metrics in terms of caption-level correlation but also achieves a strong system-level correlation with human assessments.
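
To make the idea of a "learned composite metric" concrete, the following is a minimal sketch, assuming the metric is a small neural network that maps a feature vector of existing caption-evaluation scores to a single quality score, trained on human judgements. The specific input features, architecture, and training setup shown here are illustrative assumptions, not the exact configuration from the paper.

```python
# Hypothetical sketch of a learned composite caption metric: a small MLP
# combines scores from handcrafted metrics into one quality score in [0, 1],
# fitted to binary human judgements. Feature names and sizes are assumptions.
import torch
import torch.nn as nn

class CompositeMetric(nn.Module):
    def __init__(self, num_features: int, hidden: int = 32):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(num_features, hidden),
            nn.ReLU(),
            nn.Linear(hidden, 1),
            nn.Sigmoid(),  # squash output to a [0, 1] quality score
        )

    def forward(self, features: torch.Tensor) -> torch.Tensor:
        return self.net(features).squeeze(-1)

# Toy training loop on synthetic data standing in for
# (metric-score vector, human judgement) pairs.
model = CompositeMetric(num_features=4)
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.BCELoss()

features = torch.rand(256, 4)  # e.g., [BLEU, METEOR, ROUGE-L, CIDEr] per caption
labels = (features.mean(dim=1) > 0.5).float()  # placeholder human labels

for _ in range(100):
    optimizer.zero_grad()
    loss = loss_fn(model(features), labels)
    loss.backward()
    optimizer.step()
```

At evaluation time, such a model would score each candidate caption individually, which is what allows a learned metric to be assessed for caption-level (not just system-level) correlation with human judgements.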

Original language: English
Pages (from-to): 1586-1610
Journal: International Journal of Computer Vision
Volume: 127
Issue number: 10
DOIs
Publication status: Published - Oct 2019

