Abstract
Automatic evaluation metrics are of fundamental importance in the development and fine-grained analysis of captioning systems. While current evaluation metrics tend to achieve an acceptable correlation with human judgements at the system level, they fail to do so at the caption level. In this work, we propose a neural network-based learned metric to improve caption-level evaluation. To gain deeper insight into the parameters that affect a learned metric's performance, we investigate the relationship between different linguistic features and the caption-level correlation of learned metrics. We also compare metrics trained with different training examples to measure the variation in their evaluations. Moreover, we perform a robustness analysis, which highlights the sensitivity of learned and handcrafted metrics to various sentence perturbations. Our empirical analysis shows that the proposed metric not only outperforms existing metrics in terms of caption-level correlation but also shows strong system-level correlation with human assessments.
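For illustration, the sketch below shows one plausible form of a learned composite metric: scores from existing handcrafted metrics are combined by a small feed-forward regressor trained against human quality judgements. The feature layout, values, and scikit-learn setup are hypothetical and are not the authors' actual LCEval architecture or training procedure.

```python
# Hypothetical sketch of a learned composite caption-evaluation metric.
# Feature names, values, and network size are illustrative only.
import numpy as np
from sklearn.neural_network import MLPRegressor

# Each row: scores from existing handcrafted metrics for one
# (candidate caption, reference captions) pair,
# e.g. [BLEU-4, METEOR, ROUGE-L, CIDEr].
X_train = np.array([
    [0.31, 0.27, 0.52, 0.88],
    [0.05, 0.12, 0.30, 0.15],
    [0.44, 0.33, 0.61, 1.10],
])
# Targets: normalised human quality judgements for the same captions.
y_train = np.array([0.8, 0.2, 0.9])

# Small feed-forward network mapping the feature vector to a single score.
model = MLPRegressor(hidden_layer_sizes=(16, 8), max_iter=2000, random_state=0)
model.fit(X_train, y_train)

# Scoring a new caption: compute the same handcrafted-metric features,
# then let the learned model produce the composite caption-level score.
x_new = np.array([[0.28, 0.25, 0.49, 0.80]])
print(model.predict(x_new))
```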
| Original language | English |
|---|---|
| Pages (from-to) | 1586-1610 |
| Number of pages | 25 |
| Journal | International Journal of Computer Vision |
| Volume | 127 |
| Issue number | 10 |
| DOIs | |
| Publication status | Published - 1 Oct 2019 |
Projects
Advanced 3D Computer Vision Algorithms for 'Find and Grasp' Future Robots
1/01/15 → 31/12/20
Project: Research