Modeling conventional and hyperspectral image-sets for classification

Muhammad Uzair

    Research output: Doctoral Thesis


    Abstract

    [Truncated] Traditional image classification algorithms were designed to classify single test images. However, in many practical applications, multiple images of the query object are available. Image-set modeling aims to efficiently represent and classify a collection of images that belong to the same class. Classification based on image-sets is attractive because appearance variations due to changes in the pose, illumination and scale of an object, or even a scene, can be captured across the multiple images of the set. Image-set modeling aims to explicitly model these variations to achieve better classification accuracy. Classification based on image-sets must address two core challenges: how to effectively model the intra-class appearance variations using a robust representation, and how to define a distance measure that exploits the inter-class variations within the set representation.

    This thesis proposes efficient and accurate representations to model conventional and hyperspectral image-sets. Several contributions are made, with emphasis on designing efficient algorithms that learn the complex nonlinear image-set structures without making prior assumptions about the underlying images or their distributions. In the conventional category, this thesis deals with grayscale, colour (RGB) and near-infrared images. Hyperspectral images are images acquired at a large number of narrow bands within and beyond the visible spectrum. In the hyperspectral category, images comprising 33 to 65 bands in the 400-1000 nm wavelength range are used.

    Hyperspectral images are modeled as image-sets for the first time in this research. Two methods are proposed for representing hyperspectral image-sets. The first fuses the spatiospectral mean with the covariance to represent a hyperspectral image cube as a compact feature vector. Fusion is performed by sliding a cubelet over the hyperspectral image cube and integrating the first- and second-order statistics of the local neighbourhood. This approach minimizes the effects of inter-band misalignments that are unavoidable due to the sequential capture of hyperspectral bands. The second method jointly models the rich spatiospectral information of the hyperspectral image-set and is based on the three-dimensional Discrete Cosine Transform (3D-DCT). It represents a hyperspectral image cube with a few low-frequency 3D-DCT coefficients. Classification is performed using Partial Least Squares regression in both cases. Both representations are evaluated on the task of hyperspectral face recognition on three standard datasets, one of which was acquired as part of this thesis. Comparisons with grayscale and RGB image based face recognition algorithms show that hyperspectral images provide improved accuracy. This thesis also performs a detailed study on whether the spectral reflectance of the face alone can be used as a reliable biometric.
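    The two representations described above can be sketched roughly as follows. This is a minimal illustration, not the thesis's implementation: the cubelet size, stride, covariance flattening and the number of retained DCT coefficients are all assumed values chosen for the example, and the Partial Least Squares classification stage is omitted.

    ```python
    import numpy as np
    from scipy.fft import dctn

    def fuse_mean_cov(cube, cubelet=8, stride=8):
        """Representation 1 (sketch): slide a cubelet over the (H, W, B)
        hyperspectral cube and concatenate the local spectral mean
        (first-order statistic) with the upper triangle of the local
        spectral covariance (second-order statistic).
        cubelet/stride are illustrative assumptions, not thesis settings."""
        H, W, B = cube.shape
        iu = np.triu_indices(B)          # flatten symmetric covariance once
        feats = []
        for i in range(0, H - cubelet + 1, stride):
            for j in range(0, W - cubelet + 1, stride):
                block = cube[i:i + cubelet, j:j + cubelet, :].reshape(-1, B)
                mu = block.mean(axis=0)              # local spectral mean
                cov = np.cov(block, rowvar=False)    # local spectral covariance
                feats.append(np.concatenate([mu, cov[iu]]))
        return np.concatenate(feats)

    def dct3_features(cube, k=4):
        """Representation 2 (sketch): take the 3D-DCT of the whole cube and
        keep only a k x k x k corner of low-frequency coefficients
        (k is an assumed value)."""
        coeffs = dctn(cube, norm='ortho')
        return coeffs[:k, :k, :k].ravel()
    ```

    In both cases the cube is reduced to a fixed-length vector, which is what allows a standard regressor such as Partial Least Squares to be applied for classification.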

    Original language: English
    Qualification: Doctor of Philosophy
    Publication status: Unpublished - Nov 2015

