Robust RGB-D face recognition using Kinect sensor

B.Y.L. Li, M. Xue, Ajmal Mian, W. Liu, A. Krishna

    Research output: Contribution to journal › Article › peer-review

    18 Citations (Scopus)


    © 2016 Elsevier B.V.
    In this paper we propose a robust face recognition algorithm for low-resolution RGB-D Kinect data. Because the depth data are noisy, several preprocessing techniques are applied. First, facial symmetry is exploited on the 3D point cloud to obtain a canonical frontal view irrespective of the initial pose, and the depth data are converted to XYZ normal maps. Second, multi-channel discriminant transforms project the RGB image to a Discriminant Color Space (DCS) and the normal maps to Discriminant Normal Maps (DNM). Finally, a Multi-channel Robust Sparse Coding method is proposed that codes the multiple channels (DCS or DNM) of a test image as a sparse combination of training samples with per-pixel weights. The weights are computed dynamically in an iterative process to achieve robustness against variations in pose, illumination, facial expression and disguise; in contrast to existing techniques, our multi-channel approach is more robust to such variations. The reconstruction errors of the test image (DCS and DNM) are normalized and fused to decide its identity. The proposed algorithm is evaluated on four public databases. It achieves a 98.4% identification rate on CurtinFaces, a Kinect database with 4784 RGB-D images of 52 subjects. Using a first-versus-all protocol on the Bosphorus, CASIA and FRGC v2 databases, the proposed algorithm achieves identification rates of 97.6%, 95.6% and 95.2% respectively. To the best of our knowledge, these are the highest identification rates reported so far for the first three databases.
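    The coding stage described above — sparse coding of each channel with iteratively re-estimated pixel weights, followed by normalized error fusion across channels — can be sketched as follows. This is a minimal illustration, not the paper's implementation: the weighted ISTA solver, the Gaussian weight update, the synthetic data and all parameter values (`lam`, `sigma`, iteration counts) are assumptions made here for the sake of a runnable example.

    ```python
    import numpy as np

    def weighted_ista(D, y, w, lam=0.05, n_iter=100):
        """Sparse code y over dictionary D under pixel weights w:
        min_x ||diag(w)(y - Dx)||_2^2 + lam * ||x||_1  (ISTA iterations)."""
        Dw = D * w[:, None]                       # apply pixel weights to rows
        yw = y * w
        L = np.linalg.norm(Dw, 2) ** 2 + 1e-12    # Lipschitz const. of gradient
        x = np.zeros(D.shape[1])
        for _ in range(n_iter):
            grad = Dw.T @ (Dw @ x - yw)
            z = x - grad / L
            x = np.sign(z) * np.maximum(np.abs(z) - lam / L, 0.0)  # soft-threshold
        return x

    def robust_sparse_code(D, y, n_outer=5, sigma=0.1):
        """Alternate sparse coding and weight re-estimation: pixels with large
        residuals (e.g. occluded/disguised regions) are down-weighted."""
        w = np.ones(len(y))
        for _ in range(n_outer):
            x = weighted_ista(D, y, w)
            r = y - D @ x
            w = np.exp(-(r ** 2) / (2 * sigma ** 2))  # Gaussian down-weighting
        return x, w

    def classify(channels, labels):
        """channels: list of (D, y) per channel (e.g. DCS bands, normal maps).
        Per-channel class reconstruction errors are normalized, then fused."""
        classes = np.unique(labels)
        fused = np.zeros(len(classes))
        for D, y in channels:
            x, w = robust_sparse_code(D, y)
            errs = np.array([
                np.linalg.norm(w * (y - D @ np.where(labels == c, x, 0.0)))
                for c in classes
            ])
            fused += errs / errs.sum()            # normalize before fusing
        return classes[np.argmin(fused)]

    # --- toy demo on synthetic data (3 subjects, 4 samples each, 2 channels) ---
    rng = np.random.default_rng(0)
    labels = np.repeat([0, 1, 2], 4)
    channels = []
    for _ in range(2):
        means = rng.normal(size=(3, 50))
        D = np.column_stack([means[c] + 0.1 * rng.normal(size=50) for c in labels])
        D /= np.linalg.norm(D, axis=0)            # unit-norm dictionary columns
        y = D[:, 5].copy()                        # a class-1 sample ...
        y[rng.choice(50, 10, replace=False)] += 0.5  # ... with corrupted pixels
        y /= np.linalg.norm(y)
        channels.append((D, y))
    pred = classify(channels, labels)
    ```

    The outer loop is the key robustness mechanism: corrupted pixels produce large residuals, so their weights shrink toward zero and they stop influencing the code, mirroring the dynamic per-pixel weighting described in the abstract.
    
    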
    Original language: English
    Pages (from-to): 93-108
    Number of pages: 16
    Early online date: 17 Jun 2016
    Publication status: Published - 19 Nov 2016


