Automatic 3D Face Landmark Localization Based on 3D Vector Field Analysis

    Research output: Chapter in Book/Conference paper › Conference paper › peer-review

    1 Citation (Scopus)

    Abstract

    In applications such as 3D face synthesis and animation, prominent face
    landmarks are required to enable 3D face normalization, pose correction,
    3D face recognition and reconstruction. Due to variations in facial
    expressions, automatic 3D face landmark localization remains a challenge.
    The nose tip is one of the most salient landmarks of the human face. In
    this paper, a novel nose tip localization technique is proposed. In the
    proposed approach, the rotation of the 3D vector field is analyzed for
    robust and efficient nose tip localization. The proposed technique has
    three characteristics: (1) it does not require any training; (2) it does
    not rely on any particular model; (3) it is efficient, requiring an
    average time of only 1.9 s for nose tip detection. We tested the proposed
    technique on the BU3DFE and Shrec'10 datasets. Experimental results show
    that the proposed technique is robust to variations in facial
    expressions, achieving a 100% detection rate on these publicly available
    3D face datasets.
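    The abstract's core idea is analyzing the rotation (curl) of a 3D
    vector field. As a minimal illustrative sketch (not the paper's actual
    implementation), the curl of a vector field sampled on a regular grid
    can be approximated with central finite differences in NumPy; the
    function name `curl_3d` is hypothetical:

    ```python
    import numpy as np

    def curl_3d(Fx, Fy, Fz, spacing=1.0):
        """Approximate the curl (rotation) of a 3D vector field sampled on a
        regular grid, using central finite differences via np.gradient.
        Fx, Fy, Fz: arrays of shape (nx, ny, nz), one per field component.
        Returns the three curl components, each shaped like the inputs."""
        # Partial derivatives along the x (axis 0), y (axis 1), z (axis 2) axes
        dFx_dy = np.gradient(Fx, spacing, axis=1)
        dFx_dz = np.gradient(Fx, spacing, axis=2)
        dFy_dx = np.gradient(Fy, spacing, axis=0)
        dFy_dz = np.gradient(Fy, spacing, axis=2)
        dFz_dx = np.gradient(Fz, spacing, axis=0)
        dFz_dy = np.gradient(Fz, spacing, axis=1)
        return (dFz_dy - dFy_dz,   # curl_x
                dFx_dz - dFz_dx,   # curl_y
                dFy_dx - dFx_dy)   # curl_z

    # Sanity check: a rigid rotation about the z-axis, F = (-y, x, 0),
    # has analytic curl (0, 0, 2) everywhere.
    n = 9
    xs = np.linspace(-1.0, 1.0, n)
    X, Y, Z = np.meshgrid(xs, xs, xs, indexing="ij")
    cx, cy, cz = curl_3d(-Y, X, np.zeros_like(X), spacing=xs[1] - xs[0])
    print(np.allclose(cx, 0), np.allclose(cy, 0), np.allclose(cz, 2.0))
    ```

    This only demonstrates the vector-calculus building block; how the
    paper constructs the field from a face scan and turns its rotation into
    a nose tip detector is described in the full text.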
    Original language: English
    Title of host publication: 2015 International Conference on Image and Vision Computing New Zealand (IVCNZ)
    Place of publication: Auckland, NZ
    Publisher: Wiley-IEEE Press
    ISBN (Print): 9781509003570
    DOIs
    Publication status: Published - 2015
    Event: 30th International Conference on Image and Vision Computing New Zealand (IVCNZ 2015) - Auckland, New Zealand
    Duration: 23 Nov 2015 - 24 Nov 2015

    Conference

    Conference: 30th International Conference on Image and Vision Computing New Zealand (IVCNZ 2015)
    Country/Territory: New Zealand
    City: Auckland
    Period: 23/11/15 - 24/11/15
