Learning a Deep Model for Human Action Recognition from Novel Viewpoints

Hossein Rahmani, Ajmal Mian, Mubarak Shah

Research output: Contribution to journal › Article › peer-review

128 Citations (Scopus)


Recognizing human actions from unknown and unseen (novel) views is a challenging problem. We propose a Robust Non-Linear Knowledge Transfer Model (R-NKTM) for human action recognition from novel views. The proposed R-NKTM is a deep fully-connected neural network that transfers knowledge of human actions from any unknown view to a shared high-level virtual view by finding a set of non-linear transformations that connect the views. The R-NKTM is learned from 2D projections of dense trajectories of synthetic 3D human models fitted to real motion capture data and generalizes to real videos of human actions. The strength of our technique is that we learn a single R-NKTM for all actions and all viewpoints for knowledge transfer of any real human action video without the need for re-training or fine-tuning the model. Thus, R-NKTM can efficiently scale to incorporate new action classes. R-NKTM is learned with dummy labels and does not require knowledge of the camera viewpoint at any stage. Experiments on three benchmark cross-view human action datasets show that our method outperforms the existing state-of-the-art.
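The core idea in the abstract can be sketched as follows: a stack of fully-connected layers with non-linear activations maps a view-specific action descriptor into a shared, high-level "virtual view" representation. The layer sizes, activation choice, and random weights below are illustrative assumptions, not the paper's actual architecture or learned parameters; in the paper the network is trained on 2D projections of dense trajectories of synthetic 3D human models.

```python
import numpy as np

rng = np.random.default_rng(0)

def relu(x):
    return np.maximum(0.0, x)

class NKTMSketch:
    """Toy stand-in for R-NKTM: a fully-connected network mapping a
    view-specific descriptor to a shared virtual-view representation.
    Dimensions (2000 -> 1024 -> 512 -> 256) are hypothetical."""

    def __init__(self, dims=(2000, 1024, 512, 256)):
        # Random weights stand in for parameters that, in the paper, are
        # learned once from synthetic mocap-fitted data with dummy labels.
        self.layers = [
            (rng.standard_normal((d_in, d_out)) * 0.01, np.zeros(d_out))
            for d_in, d_out in zip(dims[:-1], dims[1:])
        ]

    def transfer(self, descriptor):
        # Apply the stack of non-linear transformations that connects an
        # unknown input view to the shared high-level virtual view.
        h = descriptor
        for W, b in self.layers:
            h = relu(h @ W + b)
        return h

# The same fixed network is applied to descriptors from any viewpoint,
# without re-training; adding a new action class would only require a
# classifier on top of the transferred representations.
model = NKTMSketch()
view_a = rng.standard_normal(2000)  # descriptor from one camera viewpoint
view_b = rng.standard_normal(2000)  # same action, another viewpoint (toy)
shared_a = model.transfer(view_a)
shared_b = model.transfer(view_b)
print(shared_a.shape)  # (256,)
```

Because the transfer network is shared across all actions and viewpoints, scaling to new classes touches only the lightweight classifier, not the deep model, which is what the abstract means by scaling "without re-training or fine-tuning."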

Original language: English
Article number: 7893732
Pages (from-to): 667-681
Number of pages: 15
Journal: IEEE Transactions on Pattern Analysis and Machine Intelligence
Issue number: 3
Publication status: Published - 1 Mar 2018

