Deep Reconstruction of 3-D Human Poses From Video

Research output: Contribution to journal › Article › peer-review

Abstract

Deep learning advances have made it possible to recover full 3-D meshes of human models from individual images. However, the extension of this notion to videos for recovering temporally coherent poses remains underexplored. A major challenge in this direction is the lack of appropriately annotated video data for learning the desired computational models. The existing human pose datasets provide only 2-D or 3-D skeleton joint annotations, and the available datasets are also limited, being recorded in constrained environments. We first contribute a technique to synthesize monocular action videos with rich 3-D annotations that are suitable for learning computational models for full-mesh 3-D human pose recovery. Compared to the existing methods that simply 'texture map' clothes onto 3-D human pose models, our approach incorporates physics-based realistic cloth deformations with human body movements. The generated videos cover a large variety of human actions, poses, and visual appearances, while the annotations record accurate human pose dynamics and human body surface information. Our second major contribution is an end-to-end trainable recurrent neural network for full pose mesh recovery from monocular videos. Using the proposed video data and a long short-term memory recurrent structure, our network explicitly learns to model the temporal coherence in videos and imposes geometric consistency over the recovered meshes. We establish the effectiveness of the proposed model with quantitative and qualitative analysis using the proposed and benchmark datasets.
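
The abstract describes the second contribution as an LSTM-based recurrent network that regresses temporally coherent mesh parameters from monocular video. The paper's exact architecture is not given here, so the following is only a minimal PyTorch-style sketch under assumed names and dimensions (TemporalMeshRegressor, an 85-parameter SMPL-style output, a simple frame-to-frame smoothness term); it illustrates the general idea of temporal modelling with an LSTM, not the authors' implementation.

# Hypothetical sketch: an LSTM over per-frame image features that regresses
# per-frame mesh parameters (e.g. SMPL-style pose + shape). Feature size,
# hidden size, parameter count, and the smoothness term are assumptions,
# not the published architecture.
import torch
import torch.nn as nn


class TemporalMeshRegressor(nn.Module):
    def __init__(self, feat_dim=2048, hidden_dim=1024, n_params=85):
        super().__init__()
        # The LSTM models temporal coherence across video frames.
        self.lstm = nn.LSTM(feat_dim, hidden_dim, batch_first=True)
        # A per-frame head maps each hidden state to mesh parameters.
        self.head = nn.Linear(hidden_dim, n_params)

    def forward(self, frame_feats):
        # frame_feats: (batch, time, feat_dim) per-frame CNN features.
        hidden, _ = self.lstm(frame_feats)
        return self.head(hidden)  # (batch, time, n_params)


def temporal_consistency_loss(params):
    # Penalize large frame-to-frame parameter changes to encourage
    # smooth, temporally coherent pose sequences.
    return (params[:, 1:] - params[:, :-1]).pow(2).mean()


if __name__ == "__main__":
    model = TemporalMeshRegressor()
    feats = torch.randn(2, 16, 2048)       # 2 clips, 16 frames each
    params = model(feats)                   # (2, 16, 85)
    loss = temporal_consistency_loss(params)
    print(params.shape, loss.item())

In practice such a smoothness term would be combined with supervised losses on the annotated pose and surface parameters; the sketch shows only the temporal-modelling component highlighted in the abstract.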

Original language: English
Pages (from-to): 497-510
Number of pages: 14
Journal: IEEE Transactions on Artificial Intelligence
Volume: 4
Issue number: 3
DOIs
Publication status: Published - Jun 2023
