Structural similarity loss for learning to fuse multi-focus images

Xiang Yan, Syed Zulqarnain Gilani, Hanlin Qin, Ajmal Mian

Research output: Contribution to journal › Article

Abstract

Convolutional neural networks have recently been used for multi-focus image fusion. However, some existing methods resort to adding Gaussian blur to focused images to simulate defocus, thereby generating data (with ground truth) for supervised learning. Moreover, they classify pixels as ‘focused’ or ‘defocused’ and use the classification results to construct fusion weight maps, which necessitates a series of post-processing steps. In this paper, we present an end-to-end learning approach that directly predicts the fully focused output image from multi-focus input image pairs. The proposed CNN is trained to perform fusion without ground-truth fused images: its loss is computed from the structural similarity (SSIM) between the fused output and the source images, a metric that is widely accepted for evaluating fused image quality. In addition, the loss function uses the standard deviation of a local image window to automatically estimate the relative importance of each source image in the final fused result. The model is a feed-forward, fully convolutional network that accepts images of variable size, so we can train it on real benchmark datasets instead of simulated ones and process arbitrarily sized images at test time. Extensive evaluation on benchmark datasets shows that our method outperforms, or is comparable with, existing state-of-the-art techniques on both objective and subjective benchmarks.
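The loss design described in the abstract can be illustrated with a minimal NumPy sketch. This is not the authors' implementation: the window size, constants, and the way local standard deviation is normalised into per-window weights are illustrative assumptions; the paper defines these details precisely.

```python
import numpy as np

def ssim(x, y, c1=0.01 ** 2, c2=0.03 ** 2):
    """Simplified single-window SSIM between two patches with values in [0, 1]."""
    mx, my = x.mean(), y.mean()
    vx, vy = x.var(), y.var()
    cov = ((x - mx) * (y - my)).mean()
    return ((2 * mx * my + c1) * (2 * cov + c2)) / (
        (mx ** 2 + my ** 2 + c1) * (vx + vy + c2))

def weighted_ssim_loss(src1, src2, fused, win=7):
    """Loss = 1 - mean over local windows of w1*SSIM(fused, src1) + w2*SSIM(fused, src2).

    The weights w1, w2 are the normalised local standard deviations of the two
    source images: higher local contrast suggests the region is in focus, so
    that source should dominate the fused result there (an assumption mirroring
    the abstract's description, not the paper's exact formulation).
    """
    h, w = fused.shape
    scores = []
    for i in range(0, h - win + 1, win):
        for j in range(0, w - win + 1, win):
            p1 = src1[i:i + win, j:j + win]
            p2 = src2[i:i + win, j:j + win]
            pf = fused[i:i + win, j:j + win]
            s1, s2 = p1.std(), p2.std()
            w1 = s1 / (s1 + s2 + 1e-12)  # importance of src1 in this window
            scores.append(w1 * ssim(pf, p1) + (1 - w1) * ssim(pf, p2))
    return 1.0 - float(np.mean(scores))
```

Because the loss depends only on the source images and the network output, no ground-truth fused image is required, which is what allows training on real benchmark datasets.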

Original language: English
Article number: 6647
Pages (from-to): 1-17
Number of pages: 17
Journal: Sensors (Switzerland)
Volume: 20
Issue number: 22
DOIs
Publication status: Published - 20 Nov 2020

