CNN based sub-pixel mapping for hyperspectral images

P. V. Arun, K. M. Buddhiraju, A. Porwal

Research output: Contribution to journal › Article › peer-review

47 Citations (Scopus)

Abstract

Sub-pixel mapping techniques predict the spatial distribution of endmember abundances, which are estimated through spectral unmixing. Sub-pixel mapping and spectral unmixing are mostly unsupervised and are generally treated as independent optimization problems. This study explores convolutional encoder–decoder as well as recurrent and deconvolution networks for jointly optimizing the unmixing and mapping stages in a supervised manner. In the proposed approach, finer-scale classified maps are used for training the network, thereby avoiding the requirement of fractional abundance ground truths. It is observed that class compatibility based loss minimization yields better convergence and accuracy than the conventional mean squared error (MSE) or cross entropy based approaches. An ensemble of the proposed framework is devised to avoid possible convergence of stochastic gradient minimization towards local optima. The proposed approaches are evaluated using simulated and standard hyperspectral datasets. Experiments indicate that joint optimization of the spectral unmixing and sub-pixel mapping stages improves accuracy and reduces convergence time. The proposed LSTM approach also gives better results, especially for linear mixtures, than the encoder–decoder based approaches. Although the LSTM approach is relatively less sensitive to the network parameters, the size and number of filters need to be tuned considering the required trade-off between accuracy and running time.
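The mapping stage described above upsamples per-class abundance maps to a finer sub-pixel grid and assigns each sub-pixel a class. A minimal numpy sketch of that deconvolution (transposed-convolution) idea follows; the function names are illustrative, not from the paper, and a fixed replication kernel stands in for the filters that the network would actually learn:

```python
import numpy as np

def transposed_conv2d(x, w, stride):
    """Minimal 2-D transposed convolution with a single filter."""
    h, wd = x.shape
    k = w.shape[0]
    out = np.zeros((h * stride + k - stride, wd * stride + k - stride))
    for i in range(h):
        for j in range(wd):
            # Each input value spreads a scaled copy of the kernel onto the output.
            out[i * stride:i * stride + k, j * stride:j * stride + k] += x[i, j] * w
    return out

def subpixel_map(abundances, scale):
    """Upsample per-class abundance maps by `scale` with a transposed
    convolution, then label each sub-pixel with its argmax class.
    (Illustrative only: a learned network would replace the fixed kernel.)"""
    kernel = np.ones((scale, scale))
    upsampled = np.stack([transposed_conv2d(a, kernel, scale) for a in abundances])
    return upsampled.argmax(axis=0)

# Two endmember classes on a 2x2 coarse grid, mapped to a 4x4 sub-pixel grid;
# each coarse pixel expands into a block of its dominant class.
ab = np.array([[[0.8, 0.2], [0.3, 0.9]],
               [[0.2, 0.8], [0.7, 0.1]]])
print(subpixel_map(ab, scale=2))
```

With learned filters (and a class-compatibility loss, as in the paper), the network can instead distribute sub-pixel classes non-uniformly within each coarse pixel rather than simply replicating the dominant class.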

Original language: English
Pages (from-to): 51-64
Number of pages: 14
Journal: Neurocomputing
Volume: 311
DOIs
Publication status: Published - 15 Oct 2018
Externally published: Yes
