NNEval: Neural Network based Evaluation Metric for Image Captioning

Naeha Sharif, Lyndon Rhys White, Mohammed Bennamoun, Syed Shah

Research output: Chapter in Book/Conference paper › Conference paper › peer-review

7 Citations (Scopus)

Abstract

The automatic evaluation of image descriptions is an intricate task, and it is highly important for the development and fine-grained analysis of captioning systems. Existing metrics for the automatic evaluation of image captioning systems fail to achieve a satisfactory level of correlation with human judgements at the sentence level. Moreover, these metrics, unlike humans, tend to focus on specific aspects of quality, such as n-gram overlap or semantic meaning. In this paper, we present the first learning-based metric to evaluate image captions. Our proposed framework enables us to incorporate both lexical and semantic information into a single learned metric. This results in an evaluator that takes various linguistic features into account when assessing caption quality. The experiments we performed to assess the proposed metric show improvements over the state of the art in terms of correlation with human judgements and demonstrate its superior robustness to distractions.
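The abstract only states that lexical and semantic information are combined into a single learned metric; the architecture itself is described in the paper. As a rough illustration of that general idea, the sketch below feeds per-caption scores from existing metrics into a small feed-forward network that outputs one quality score. The feature count, layer sizes, and example values are illustrative assumptions, not the authors' configuration.

```python
# Minimal sketch (assumption): combine scores of existing lexical and
# semantic caption metrics with a small feed-forward network that
# outputs a single learned quality score in [0, 1].
import torch
import torch.nn as nn


class LearnedCaptionMetric(nn.Module):
    def __init__(self, num_features: int = 5):
        super().__init__()
        # Two hidden layers followed by a sigmoid so the output lies in [0, 1].
        self.net = nn.Sequential(
            nn.Linear(num_features, 32),
            nn.ReLU(),
            nn.Linear(32, 16),
            nn.ReLU(),
            nn.Linear(16, 1),
            nn.Sigmoid(),
        )

    def forward(self, metric_scores: torch.Tensor) -> torch.Tensor:
        # metric_scores: (batch, num_features), e.g. per-caption scores from
        # n-gram based metrics and semantic metrics (choice is hypothetical).
        return self.net(metric_scores).squeeze(-1)


# Hypothetical usage: one caption, five placeholder metric scores as features.
model = LearnedCaptionMetric(num_features=5)
features = torch.tensor([[0.42, 0.31, 0.55, 0.28, 0.61]])
quality = model(features)
print(float(quality))  # learned overall quality estimate
```

The abstract does not specify the training objective (e.g. regression against human judgements or classification of human versus machine captions), so training code is omitted here.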
Original language: English
Title of host publication: ECCV
Editors: Vittorio Ferrari, Cristian Sminchisescu, Yair Weiss, Martial Hebert
Publisher: Springer
Pages: 39-55
Number of pages: 17
ISBN (Print): 9783030012366
DOIs
Publication status: Published - 2018
Event: 15th European Conference on Computer Vision - Munich, Germany
Duration: 8 Sept 2018 - 14 Sept 2018

Publication series

Name: Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics)
Volume: 11212 LNCS
ISSN (Print): 0302-9743
ISSN (Electronic): 1611-3349

Conference

Conference: 15th European Conference on Computer Vision
Abbreviated title: ECCV 2018
Country/Territory: Germany
City: Munich
Period: 8/09/18 - 14/09/18
