Human Body Pose Estimation from Still Images and Video Frames

    Research output: Chapter in Book/Conference paper › Conference paper › peer-review

    3 Citations (Scopus)

    Abstract

    This paper presents a marker-less approach to human body pose estimation. It employs skeletons extracted from 2D binary silhouettes of video frames and uses a classification method to partition the resulting skeletons into five regions, namely the spine and the four limbs. The classification method also identifies the neck, the head, and the shoulders. Using center-of-mass principles, a model is fitted to the body parts: the spine is modeled by a second-order curve, while each limb is modeled by two intersecting lines. Finally, the model parameters, represented by a reference point and the two line angles, are estimated and the pose is reconstructed. The proposed approach can estimate body poses from single images as well as from multiple frames and is considerably robust to occlusions. Unlike existing methods, it is computationally efficient and can track human motion while correcting pose errors using multiple frames. The approach was tested on real videos from the MuHAVi and MAS databases and gave promising results.
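
    The sketch below illustrates the pose parameterisation described in the abstract: a second-order curve fitted to spine skeleton points, and a limb modeled as two intersecting lines meeting at a reference point (the joint), each described by an angle. The function names, the joint-detection heuristic (farthest skeleton point from the endpoint chord), and the fitting choices are assumptions for illustration only, not the paper's actual implementation.

    ```python
    # Minimal sketch of the pose parameterisation described in the abstract.
    # All function and variable names here are illustrative, not from the paper.
    import numpy as np

    def fit_spine(points):
        """Fit a second-order curve x = a*y**2 + b*y + c to spine skeleton points.

        points: (N, 2) array of (x, y) pixel coordinates along the spine skeleton.
        Returns the polynomial coefficients (a, b, c).
        """
        x, y = points[:, 0], points[:, 1]
        # The spine runs roughly vertically, so express x as a function of y.
        return np.polyfit(y, x, 2)

    def fit_limb(points):
        """Model a limb as two intersecting line segments.

        The joint (elbow/knee) is approximated as the skeleton point farthest
        from the chord joining the two limb endpoints; each half is then reduced
        to a line, giving the "reference point and two angles" parameterisation.
        points: (N, 2) array of limb skeleton points ordered from root to tip.
        """
        p0, p1 = points[0], points[-1]
        d = (p1 - p0) / np.linalg.norm(p1 - p0)
        rel = points - p0
        # Perpendicular distance of every skeleton point from the chord p0 -> p1.
        dist = np.abs(rel[:, 0] * d[1] - rel[:, 1] * d[0])
        j = int(np.argmax(dist))          # index of the joint candidate
        joint = points[j]

        def segment_angle(seg):
            # Orientation of the segment from its first to its last skeleton point.
            vx, vy = seg[-1] - seg[0]
            return np.arctan2(vy, vx)

        theta_upper = segment_angle(points[: j + 1])
        theta_lower = segment_angle(points[j:])
        return joint, theta_upper, theta_lower

    if __name__ == "__main__":
        # Synthetic limb: a vertical upper segment bent 45 degrees at the "elbow".
        upper = np.stack([np.zeros(20), np.linspace(0, 40, 20)], axis=1)
        lower = np.stack([np.linspace(0, 30, 20), 40 + np.linspace(0, 30, 20)], axis=1)
        joint, a1, a2 = fit_limb(np.vstack([upper, lower]))
        print("joint:", joint, "angles (deg):", np.degrees(a1), np.degrees(a2))
    ```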
    Original language: English
    Title of host publication: Lecture Notes in Computer Science
    Editors: A. Campilho, M. Kamel
    Place of Publication: Berlin/Heidelberg, Germany
    Publisher: Springer
    Pages: 176-188
    Volume: 1
    ISBN (Print): 03029743
    DOIs
    Publication status: Published - Jun 2010
    Event: 7th International Conference on Image Analysis and Recognition - Póvoa de Varzim, Portugal
    Duration: 21 Jun 2010 - 23 Jun 2010

    Conference

    Conference: 7th International Conference on Image Analysis and Recognition
    Abbreviated title: ICIAR 2010
    Country/Territory: Portugal
    City: Póvoa de Varzim
    Period: 21/06/10 - 23/06/10
