Learning a Deep Model for Human Action Recognition from Novel Viewpoints

Research output: Contribution to journal › Article

36 Citations (Scopus)

Abstract

Recognizing human actions from unknown and unseen (novel) views is a challenging problem. We propose a Robust Non-Linear Knowledge Transfer Model (R-NKTM) for human action recognition from novel views. The proposed R-NKTM is a deep fully-connected neural network that transfers knowledge of human actions from any unknown view to a shared high-level virtual view by finding a set of non-linear transformations that connects the views. The R-NKTM is learned from 2D projections of dense trajectories of synthetic 3D human models fitted to real motion capture data and generalizes to real videos of human actions. The strength of our technique is that we learn a single R-NKTM for all actions and all viewpoints for knowledge transfer of any real human action video without the need for re-training or fine-tuning the model. Thus, R-NKTM can efficiently scale to incorporate new action classes. R-NKTM is learned with dummy labels and does not require knowledge of the camera viewpoint at any stage. Experiments on three benchmark cross-view human action datasets show that our method outperforms the existing state-of-the-art.
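
The abstract describes the model only at a high level. As a rough illustration of the idea, the sketch below shows what a deep fully-connected knowledge-transfer network trained with dummy labels might look like. This is not the authors' implementation: the layer widths, the trajectory-descriptor dimension, the number of dummy classes, and the optimizer settings are all assumptions made purely for the example.

    # Minimal sketch (not the authors' code) of a fully-connected
    # knowledge-transfer network of the kind the abstract describes.
    # All dimensions and hyperparameters below are illustrative assumptions.
    import torch
    import torch.nn as nn

    class KnowledgeTransferSketch(nn.Module):
        def __init__(self, feat_dim=2000, hidden=1000, virtual_dim=500):
            super().__init__()
            # Stack of non-linear transformations mapping a view-dependent
            # trajectory descriptor to a shared high-level representation.
            self.transfer = nn.Sequential(
                nn.Linear(feat_dim, hidden), nn.ReLU(),
                nn.Linear(hidden, hidden), nn.ReLU(),
                nn.Linear(hidden, virtual_dim), nn.ReLU(),
            )

        def forward(self, x):
            return self.transfer(x)

    # "Dummy labels": each synthetic motion sequence gets one label shared
    # by all of its 2D viewpoint projections, so the network can learn view
    # invariance without action-class or camera-viewpoint annotations.
    model = KnowledgeTransferSketch()
    head = nn.Linear(500, 3000)  # 3000 hypothetical dummy classes
    opt = torch.optim.SGD(
        list(model.parameters()) + list(head.parameters()), lr=0.01)
    loss_fn = nn.CrossEntropyLoss()

    x = torch.randn(32, 2000)          # descriptors from arbitrary viewpoints
    y = torch.randint(0, 3000, (32,))  # dummy sequence labels
    loss = loss_fn(head(model(x)), y)
    opt.zero_grad()
    loss.backward()
    opt.step()

At test time the dummy-label head would be discarded and the learned transfer layers used to map the features of a real video, captured from an unseen view, into the shared virtual-view representation. This is what would allow a single trained model to handle new action classes without re-training or fine-tuning.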

Original language: English
Article number: 7893732
Pages (from-to): 667-681
Number of pages: 15
Journal: IEEE Transactions on Pattern Analysis and Machine Intelligence
Volume: 40
Issue number: 3
DOIs: 10.1109/TPAMI.2017.2691768
Publication status: Published - 1 Mar 2018


Cite this

@article{e179530043b9454a96f76ff9b6be25dc,
title = "Learning a Deep Model for Human Action Recognition from Novel Viewpoints",
abstract = "Recognizing human actions from unknown and unseen (novel) views is a challenging problem. We propose a Robust Non-Linear Knowledge Transfer Model (R-NKTM) for human action recognition from novel views. The proposed R-NKTM is a deep fully-connected neural network that transfers knowledge of human actions from any unknown view to a shared high-level virtual view by finding a set of non-linear transformations that connects the views. The R-NKTM is learned from 2D projections of dense trajectories of synthetic 3D human models fitted to real motion capture data and generalizes to real videos of human actions. The strength of our technique is that we learn a single R-NKTM for all actions and all viewpoints for knowledge transfer of any real human action video without the need for re-training or fine-tuning the model. Thus, R-NKTM can efficiently scale to incorporate new action classes. R-NKTM is learned with dummy labels and does not require knowledge of the camera viewpoint at any stage. Experiments on three benchmark cross-view human action datasets show that our method outperforms the existing state-of-the-art.",
keywords = "Cross-view, dense trajectories, view knowledge transfer",
author = "Hossein Rahmani and Ajmal Mian and Mubarak Shah",
year = "2018",
month = "3",
day = "1",
doi = "10.1109/TPAMI.2017.2691768",
language = "English",
volume = "40",
pages = "667--681",
journal = "IEEE Transactions on Pattern Analysis and Machine Intelligence",
issn = "0162-8828",
publisher = "IEEE, Institute of Electrical and Electronics Engineers",
number = "3",
}

Learning a Deep Model for Human Action Recognition from Novel Viewpoints. / Rahmani, Hossein; Mian, Ajmal; Shah, Mubarak.

In: IEEE Transactions on Pattern Analysis and Machine Intelligence, Vol. 40, No. 3, 7893732, 01.03.2018, p. 667-681.

Research output: Contribution to journal › Article

TY - JOUR

T1 - Learning a Deep Model for Human Action Recognition from Novel Viewpoints

AU - Rahmani, Hossein

AU - Mian, Ajmal

AU - Shah, Mubarak

PY - 2018/3/1

Y1 - 2018/3/1

N2 - Recognizing human actions from unknown and unseen (novel) views is a challenging problem. We propose a Robust Non-Linear Knowledge Transfer Model (R-NKTM) for human action recognition from novel views. The proposed R-NKTM is a deep fully-connected neural network that transfers knowledge of human actions from any unknown view to a shared high-level virtual view by finding a set of non-linear transformations that connects the views. The R-NKTM is learned from 2D projections of dense trajectories of synthetic 3D human models fitted to real motion capture data and generalizes to real videos of human actions. The strength of our technique is that we learn a single R-NKTM for all actions and all viewpoints for knowledge transfer of any real human action video without the need for re-training or fine-tuning the model. Thus, R-NKTM can efficiently scale to incorporate new action classes. R-NKTM is learned with dummy labels and does not require knowledge of the camera viewpoint at any stage. Experiments on three benchmark cross-view human action datasets show that our method outperforms the existing state-of-the-art.

AB - Recognizing human actions from unknown and unseen (novel) views is a challenging problem. We propose a Robust Non-Linear Knowledge Transfer Model (R-NKTM) for human action recognition from novel views. The proposed R-NKTM is a deep fully-connected neural network that transfers knowledge of human actions from any unknown view to a shared high-level virtual view by finding a set of non-linear transformations that connects the views. The R-NKTM is learned from 2D projections of dense trajectories of synthetic 3D human models fitted to real motion capture data and generalizes to real videos of human actions. The strength of our technique is that we learn a single R-NKTM for all actions and all viewpoints for knowledge transfer of any real human action video without the need for re-training or fine-tuning the model. Thus, R-NKTM can efficiently scale to incorporate new action classes. R-NKTM is learned with dummy labels and does not require knowledge of the camera viewpoint at any stage. Experiments on three benchmark cross-view human action datasets show that our method outperforms the existing state-of-the-art.

KW - Cross-view

KW - dense trajectories

KW - view knowledge transfer

UR - http://www.scopus.com/inward/record.url?scp=85041966324&partnerID=8YFLogxK

U2 - 10.1109/TPAMI.2017.2691768

DO - 10.1109/TPAMI.2017.2691768

M3 - Article

VL - 40

SP - 667

EP - 681

JO - IEEE Transactions on Pattern Analysis and Machine Intelligence

JF - IEEE Transactions on Pattern Analysis and Machine Intelligence

SN - 0162-8828

IS - 3

M1 - 7893732

ER -