Learning Spatiotemporal Features using 3DCNN and Convolutional LSTM for Gesture Recognition

Liang Zhang, G. Zhu, P. Shen, Juan Song, Syed Shah, Mohammed Bennamoun

Research output: Chapter in Book/Conference paper › Conference paper

20 Citations (Scopus)

Abstract

Gesture recognition aims at understanding ongoing human gestures. In this paper, we present a deep architecture that learns spatiotemporal features for gesture recognition. The architecture first learns 2D spatiotemporal feature maps using 3D convolutional neural networks (3DCNN) and bidirectional convolutional long short-term memory networks (ConvLSTM). The learnt 2D feature maps encode global temporal information and local spatial information simultaneously. A 2DCNN is then used to learn higher-level spatiotemporal features from these 2D feature maps for the final gesture recognition. Spatiotemporal correlation information is preserved throughout the feature-learning process, which makes the architecture an effective spatiotemporal feature learner. Experiments on the ChaLearn LAP large-scale isolated gesture dataset (IsoGD) and the Sheffield Kinect Gesture (SKIG) dataset demonstrate the superiority of the proposed architecture.
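
The pipeline described in the abstract (a 3DCNN front end, a bidirectional ConvLSTM over the temporal axis, and a 2DCNN classification head) can be sketched as follows. This is a minimal PyTorch sketch under illustrative assumptions, not the authors' implementation: the layer widths, kernel sizes, pooling, the use of the final forward/backward hidden states as the fused 2D feature maps, and the 249-way output (the number of IsoGD gesture classes) are all placeholder choices.

```python
import torch
import torch.nn as nn


class ConvLSTMCell(nn.Module):
    """A single ConvLSTM cell: the LSTM gates are computed with 2D convolutions,
    so the hidden state stays a 2D feature map instead of a flat vector."""

    def __init__(self, in_ch, hid_ch, k=3):
        super().__init__()
        self.hid_ch = hid_ch
        self.gates = nn.Conv2d(in_ch + hid_ch, 4 * hid_ch, k, padding=k // 2)

    def forward(self, x, state):
        h, c = state
        i, f, o, g = self.gates(torch.cat([x, h], dim=1)).chunk(4, dim=1)
        c = f.sigmoid() * c + i.sigmoid() * g.tanh()
        h = o.sigmoid() * c.tanh()
        return h, c


class GestureNet(nn.Module):
    """3DCNN -> bidirectional ConvLSTM -> 2DCNN classifier (illustrative sizes)."""

    def __init__(self, num_classes=249, hid_ch=64):
        super().__init__()
        # 3DCNN: short-term spatiotemporal features; the temporal axis is preserved.
        self.cnn3d = nn.Sequential(
            nn.Conv3d(3, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool3d(kernel_size=(1, 2, 2)),
            nn.Conv3d(32, 64, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool3d(kernel_size=(1, 2, 2)),
        )
        # Bidirectional ConvLSTM: aggregates global temporal information
        # while keeping local spatial structure.
        self.fwd = ConvLSTMCell(64, hid_ch)
        self.bwd = ConvLSTMCell(64, hid_ch)
        # 2DCNN head on the fused 2D spatiotemporal feature maps.
        self.cnn2d = nn.Sequential(
            nn.Conv2d(2 * hid_ch, 128, kernel_size=3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(128, num_classes),
        )

    def forward(self, clip):                       # clip: (B, 3, T, H, W)
        feats = self.cnn3d(clip)                   # (B, 64, T, H', W')
        B, _, T, H, W = feats.shape
        hf = cf = feats.new_zeros(B, self.fwd.hid_ch, H, W)
        hb = cb = feats.new_zeros(B, self.bwd.hid_ch, H, W)
        for t in range(T):                         # forward and backward passes over time
            hf, cf = self.fwd(feats[:, :, t], (hf, cf))
            hb, cb = self.bwd(feats[:, :, T - 1 - t], (hb, cb))
        fused = torch.cat([hf, hb], dim=1)         # fused 2D spatiotemporal feature maps
        return self.cnn2d(fused)                   # gesture class logits


# Usage: a batch of two 16-frame RGB clips at 112x112 resolution (sizes are arbitrary).
logits = GestureNet()(torch.randn(2, 3, 16, 112, 112))
print(logits.shape)  # torch.Size([2, 249])
```

The point the sketch illustrates is that the ConvLSTM carries its state as 2D feature maps, so spatial structure is never flattened before the final 2DCNN stage.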
Original language: English
Title of host publication: Learning Spatiotemporal Features using 3DCNN and Convolutional LSTM for Gesture Recognition
Publisher: IEEE, Institute of Electrical and Electronics Engineers
Pages: 3120-3128
ISBN (Print): 9781538610343
DOIs: https://doi.org/10.1109/ICCVW.2017.369
Publication status: Published - 1 Aug 2017
Event: 16th IEEE International Conference on Computer Vision, ICCV 2017 - Venice, Italy
Duration: 22 Oct 2017 - 29 Oct 2017

Conference

Conference: 16th IEEE International Conference on Computer Vision, ICCV 2017
Country: Italy
City: Venice
Period: 22/10/17 - 29/10/17

Cite this

Zhang, L., Zhu, G., Shen, P., Song, J., Shah, S., & Bennamoun, M. (2017). Learning Spatiotemporal Features using 3DCNN and Convolutional LSTM for Gesture Recognition. In Learning Spatiotemporal Features using 3DCNN and Convolutional LSTM for Gesture Recognition (pp. 3120-3128). IEEE, Institute of Electrical and Electronics Engineers. https://doi.org/10.1109/ICCVW.2017.369
Zhang, Liang ; Zhu, G ; Shen, P. ; Song, Juan ; Shah, Syed ; Bennamoun, Mohammed. / Learning Spatiotemporal Features using 3DCNN and Convolutional LSTM for Gesture Recognition. Learning Spatiotemporal Features using 3DCNN and Convolutional LSTM for Gesture Recognition. IEEE, Institute of Electrical and Electronics Engineers, 2017. pp. 3120-3128
@inproceedings{150e951e9010473685302eeb9e07440e,
title = "Learning Spatiotemporal Features using 3DCNN and Convolutional LSTM for Gesture Recognition",
abstract = "Gesture recognition aims at understanding the ongoing human gestures. In this paper, we present a deep architecture to learn spatiotemporal features for gesture recognition. The deep architecture first learns 2D spatiotemporal feature maps using 3D convolutional neural networks (3DCNN) and bidirectional convolutional long-short-term-memory networks (ConvLSTM). The learnt 2D feature maps can encode the global temporal information and local spatial information simultaneously. Then, 2DCNN is utilized further to learn the higher-level spatiotemporal features from the 2D feature maps for the final gesture recognition. The spatiotemporal correlation information is kept through the whole process of feature learning. This makes the deep architecture an effective spatiotemporal feature learner. Experiments on the ChaLearn LAP large-scale isolated gesture dataset (IsoGD) and the Sheffield Kinect Gesture (SKIG) dataset demonstrate the superiority of the proposed deep architecture.",
author = "Liang Zhang and G Zhu and P. Shen and Juan Song and Syed Shah and Mohammed Bennamoun",
year = "2017",
month = "8",
day = "1",
doi = "10.1109/ICCVW.2017.369",
language = "English",
isbn = "9781538610343",
pages = "3120--3128",
booktitle = "Learning Spatiotemporal Features using 3DCNN and Convolutional LSTM for Gesture Recognition",
publisher = "IEEE, Institute of Electrical and Electronics Engineers",
address = "United States",

}

Zhang, L, Zhu, G, Shen, P, Song, J, Shah, S & Bennamoun, M 2017, Learning Spatiotemporal Features using 3DCNN and Convolutional LSTM for Gesture Recognition. in Learning Spatiotemporal Features using 3DCNN and Convolutional LSTM for Gesture Recognition. IEEE, Institute of Electrical and Electronics Engineers, pp. 3120-3128, 16th IEEE International Conference on Computer Vision, ICCV 2017, Venice, Italy, 22/10/17. https://doi.org/10.1109/ICCVW.2017.369

Learning Spatiotemporal Features using 3DCNN and Convolutional LSTM for Gesture Recognition. / Zhang, Liang; Zhu, G; Shen, P.; Song, Juan; Shah, Syed; Bennamoun, Mohammed.

Learning Spatiotemporal Features using 3DCNN and Convolutional LSTM for Gesture Recognition. IEEE, Institute of Electrical and Electronics Engineers, 2017. pp. 3120-3128.

Research output: Chapter in Book/Conference paper › Conference paper

TY - GEN

T1 - Learning Spatiotemporal Features using 3DCNN and Convolutional LSTM for Gesture Recognition

AU - Zhang, Liang

AU - Zhu, G

AU - Shen, P.

AU - Song, Juan

AU - Shah, Syed

AU - Bennamoun, Mohammed

PY - 2017/8/1

Y1 - 2017/8/1

N2 - Gesture recognition aims at understanding the ongoing human gestures. In this paper, we present a deep architecture to learn spatiotemporal features for gesture recognition. The deep architecture first learns 2D spatiotemporal feature maps using 3D convolutional neural networks (3DCNN) and bidirectional convolutional long-short-term-memory networks (ConvLSTM). The learnt 2D feature maps can encode the global temporal information and local spatial information simultaneously. Then, 2DCNN is utilized further to learn the higher-level spatiotemporal features from the 2D feature maps for the final gesture recognition. The spatiotemporal correlation information is kept through the whole process of feature learning. This makes the deep architecture an effective spatiotemporal feature learner. Experiments on the ChaLearn LAP large-scale isolated gesture dataset (IsoGD) and the Sheffield Kinect Gesture (SKIG) dataset demonstrate the superiority of the proposed deep architecture.

AB - Gesture recognition aims at understanding the ongoing human gestures. In this paper, we present a deep architecture to learn spatiotemporal features for gesture recognition. The deep architecture first learns 2D spatiotemporal feature maps using 3D convolutional neural networks (3DCNN) and bidirectional convolutional long-short-term-memory networks (ConvLSTM). The learnt 2D feature maps can encode the global temporal information and local spatial information simultaneously. Then, 2DCNN is utilized further to learn the higher-level spatiotemporal features from the 2D feature maps for the final gesture recognition. The spatiotemporal correlation information is kept through the whole process of feature learning. This makes the deep architecture an effective spatiotemporal feature learner. Experiments on the ChaLearn LAP large-scale isolated gesture dataset (IsoGD) and the Sheffield Kinect Gesture (SKIG) dataset demonstrate the superiority of the proposed deep architecture.

U2 - 10.1109/ICCVW.2017.369

DO - 10.1109/ICCVW.2017.369

M3 - Conference paper

SN - 9781538610343

SP - 3120

EP - 3128

BT - Learning Spatiotemporal Features using 3DCNN and Convolutional LSTM for Gesture Recognition

PB - IEEE, Institute of Electrical and Electronics Engineers

ER -

Zhang L, Zhu G, Shen P, Song J, Shah S, Bennamoun M. Learning Spatiotemporal Features using 3DCNN and Convolutional LSTM for Gesture Recognition. In Learning Spatiotemporal Features using 3DCNN and Convolutional LSTM for Gesture Recognition. IEEE, Institute of Electrical and Electronics Engineers. 2017. p. 3120-3128 https://doi.org/10.1109/ICCVW.2017.369