Learning Action Recognition Model from Depth and Skeleton Videos

Hossein Rahmani, Mohammed Bennamoun

Research output: Chapter in Book/Conference paper › Conference paper

33 Citations (Scopus)

Abstract

Depth sensors open up new possibilities for human action recognition by providing 3D human skeleton data and depth images of the scene. Analysis of human actions based on 3D skeleton data has recently become popular due to its robustness and view-invariant representation. However, the skeleton alone is insufficient to distinguish actions that involve human-object interactions. In this paper, we propose a deep model which efficiently models human-object interactions and intra-class variations under viewpoint changes. First, a human body-part model is introduced to transfer the depth appearances of body parts to a shared view-invariant space. Second, an end-to-end learning framework is proposed which effectively combines the view-invariant body-part representations from skeletal and depth images, and learns the relations between human body parts and environmental objects, the interactions between different body parts, and the temporal structure of human actions. We have evaluated our model against 15 existing techniques on two large benchmark human action recognition datasets, NTU RGB+D and UWA3DII. Experimental results show that our technique provides a significant improvement over state-of-the-art methods.
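To illustrate the kind of skeleton-depth fusion the abstract describes, here is a minimal NumPy sketch. It is not the paper's actual architecture: all dimensions, array names, and the simple average pooling and linear classifier are hypothetical stand-ins for the learned view-invariant body-part representation and temporal model.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical dimensions (not from the paper): T frames, P body parts,
# D-dim view-invariant depth descriptor per part, S-dim skeleton feature.
T, P, D, S, NUM_CLASSES = 16, 10, 32, 24, 60

depth_parts = rng.standard_normal((T, P, D))  # per-part depth appearance features
skeleton = rng.standard_normal((T, S))        # per-frame skeleton features

def fuse_and_classify(depth_parts, skeleton, W, b):
    """Concatenate per-part depth descriptors with skeleton features for each
    frame, average-pool over time (a crude stand-in for a learned temporal
    model), and apply a linear classifier to get per-class scores."""
    num_frames = depth_parts.shape[0]
    per_frame = np.concatenate([depth_parts.reshape(num_frames, -1), skeleton], axis=1)
    video_feat = per_frame.mean(axis=0)  # temporal pooling (simplified)
    return video_feat @ W + b            # class scores

feat_dim = P * D + S
W = rng.standard_normal((feat_dim, NUM_CLASSES)) * 0.01
b = np.zeros(NUM_CLASSES)

scores = fuse_and_classify(depth_parts, skeleton, W, b)
print(scores.shape)  # (60,)
```

In the actual model, each of these hand-wired steps is replaced by learned components trained end-to-end; the sketch only shows where the two modalities meet.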

Original language: English
Title of host publication: Proceedings of the 2017 IEEE International Conference on Computer Vision, ICCV 2017
Publisher: IEEE, Institute of Electrical and Electronics Engineers
Pages: 5833-5842
Number of pages: 10
Volume: 2017-October
ISBN (Electronic): 9781538610329
DOI: 10.1109/ICCV.2017.621
Publication status: Published - 22 Dec 2017
Event: 16th IEEE International Conference on Computer Vision, ICCV 2017 - Venice, Italy
Duration: 22 Oct 2017 – 29 Oct 2017

Conference

Conference: 16th IEEE International Conference on Computer Vision, ICCV 2017
Country: Italy
City: Venice
Period: 22/10/17 – 29/10/17

Cite this

Rahmani, H., & Bennamoun, M. (2017). Learning Action Recognition Model from Depth and Skeleton Videos. In Proceedings of the 2017 IEEE International Conference on Computer Vision, ICCV 2017 (Vol. 2017-October, pp. 5833-5842). [8237883] IEEE, Institute of Electrical and Electronics Engineers. https://doi.org/10.1109/ICCV.2017.621