ResFeats: Residual network based features for underwater image classification

Ammar Mahmood, Mohammed Bennamoun, Senjian An, Ferdous Sohel, Farid Boussaid

Research output: Contribution to journal › Article

4 Citations (Scopus)

Abstract

Oceanographers rely on advanced digital imaging systems to assess the health of marine ecosystems. The majority of the imagery collected by these systems does not get annotated due to a lack of resources. Consequently, expert-labelled data are insufficient to train dedicated deep networks. Meanwhile, much of the focus in the deep learning community is on transfer learning and on using pre-trained deep networks to classify out-of-domain images. In this paper, we leverage these advances to evaluate how well features extracted from deep neural networks transfer to underwater image classification. We propose new image features (called ResFeats) extracted from the different convolutional layers of a deep residual network pre-trained on ImageNet. We further combine the ResFeats extracted from different layers to obtain compact and powerful deep features. Moreover, we show that ResFeats consistently outperform their CNN counterparts. Experimental results demonstrate the effectiveness of ResFeats, with state-of-the-art classification accuracies on the MLC, Benthoz15, EILAT and RSMAS datasets.
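The abstract describes pooling activations from several convolutional stages of a pre-trained residual network and concatenating them into a single compact feature vector. The sketch below illustrates that combination step only, using NumPy with mock activations in place of a real ResNet forward pass; the choice of global average pooling and the example stage shapes are assumptions for illustration, not details taken from the paper.

```python
import numpy as np

def pool_features(fmap):
    """Collapse a (channels, height, width) activation map to a
    per-channel vector via global average pooling (an assumed choice)."""
    return fmap.mean(axis=(1, 2))

def combine_resfeats(layer_maps):
    """Concatenate pooled features from several layers into one
    fixed-length descriptor, mimicking the multi-layer combination."""
    return np.concatenate([pool_features(f) for f in layer_maps])

# Mock activations standing in for three residual stages; the shapes
# below are typical of a ResNet-50 on a 224x224 input, not the paper's.
rng = np.random.default_rng(0)
shapes = [(512, 28, 28), (1024, 14, 14), (2048, 7, 7)]
maps = [rng.standard_normal(s) for s in shapes]

feats = combine_resfeats(maps)
print(feats.shape)  # (3584,) = 512 + 1024 + 2048 channels
```

In practice the pooled vector would then be fed to a conventional classifier (e.g. an SVM) trained on the labelled underwater imagery.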

Original language: English
Article number: 103811
Journal: Image and Vision Computing
DOIs
Publication status: E-pub ahead of print - 1 Nov 2019

