Redundancy and Attention in Convolutional LSTM for Gesture Recognition

Guangming Zhu, Liang Zhang, Lu Yang, Lin Mei, Syed Afaq Ali Shah, Mohammed Bennamoun, Peiyi Shen

Research output: Contribution to journal › Article › peer-review

53 Citations (Web of Science)


Convolutional long short-term memory (ConvLSTM) networks have been widely used for action and gesture recognition, and various attention mechanisms have also been embedded into ConvLSTM networks. This paper explores the redundancy of the spatial convolutions and the effects of the attention mechanism in ConvLSTM, based on our previous gesture recognition architectures that combine a 3-D convolutional neural network (CNN) with ConvLSTM. Depthwise separable, group, and shuffle convolutions are used to replace the convolutional structures in ConvLSTM for the redundancy analysis. In addition, four ConvLSTM variants are derived for the attention analysis: 1) by removing the convolutional structures of the three gates in ConvLSTM; 2) by applying the attention mechanism to the ConvLSTM input; and 3) and 4) by reconstructing the input gate and the output gate, respectively, with a modified channelwise attention mechanism. The evaluation results demonstrate that the spatial convolutions in the three gates scarcely contribute to the spatiotemporal feature fusion, and that the attention mechanisms embedded into the input and output gates do not improve the feature fusion. In other words, when it takes spatial or spatiotemporal features as input, ConvLSTM mainly contributes temporal fusion along the recurrent steps to learn long-term spatiotemporal features. On this basis, a new LSTM variant is derived in which the convolutional structures are embedded only into the input-to-state transition of LSTM. The code of the LSTM variants is publicly available.
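To make the derived variant concrete, the following is a minimal PyTorch sketch of an LSTM cell in which the spatial convolution is confined to the input-to-state transition, while the three gates are computed channelwise from globally pooled features. The cell name, layer layout, and all parameters here are illustrative assumptions, not the authors' released implementation.

```python
import torch
import torch.nn as nn


class InputConvLSTMCell(nn.Module):
    """Hedged sketch of an LSTM cell with the spatial convolution
    confined to the input-to-state transition; the gates are
    channelwise, computed from pooled features. Illustrative only --
    details may differ from the paper's released code."""

    def __init__(self, in_channels: int, hidden_channels: int,
                 kernel_size: int = 3):
        super().__init__()
        # The only spatial convolution: the input-to-state transition.
        self.input_conv = nn.Conv2d(in_channels, hidden_channels,
                                    kernel_size, padding=kernel_size // 2)
        # Gates without spatial convolutions: fully connected layers
        # on globally pooled input and hidden-state features.
        self.gate_fc = nn.Linear(in_channels + hidden_channels,
                                 3 * hidden_channels)

    def forward(self, x, state):
        h, c = state  # each of shape (B, hidden_channels, H, W)
        # Candidate cell input from the convolutional transition.
        g = torch.tanh(self.input_conv(x))
        # Global average pooling, then input/forget/output gates.
        pooled = torch.cat([x.mean(dim=(2, 3)), h.mean(dim=(2, 3))], dim=1)
        i, f, o = torch.sigmoid(self.gate_fc(pooled)).chunk(3, dim=1)
        # Broadcast the channelwise gates over the spatial dimensions.
        i, f, o = (t[:, :, None, None] for t in (i, f, o))
        c = f * c + i * g
        h = o * torch.tanh(c)
        return h, c
```

Applied recurrently over a T-step feature sequence (with h and c initialized to zeros), such a cell performs the temporal fusion while leaving spatial feature extraction to the preceding 3-D CNN, which is the division of labor the abstract describes.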

Original language: English
Article number: 8750878
Pages (from-to): 1323-1335
Number of pages: 13
Journal: IEEE Transactions on Neural Networks and Learning Systems
Issue number: 4
Publication status: Published - Apr 2020
