TY - JOUR
T1 - QEAN: quaternion-enhanced attention network for visual dance generation
T2 - Visual Computer
AU - Zhou, Zhizhen
AU - Huo, Yejing
AU - Huang, Guoheng
AU - Zeng, An
AU - Chen, Xuhang
AU - Huang, Lian
AU - Li, Zinuo
PY - 2025/1
Y1 - 2025/1
N2 - The study of music-generated dance is a novel and challenging generation task. It takes a piece of music and seed motions as input and generates natural dance movements for the subsequent music. Transformer-based methods face challenges in time-series prediction tasks involving human movements and music because they struggle to capture the nonlinear relationships and temporal aspects, which can lead to issues such as joint deformation, role deviation, floating, and inconsistencies in the dance movements generated in response to the music. In this paper, we propose a quaternion-enhanced attention network for visual dance synthesis from a quaternion perspective, which consists of a spin position embedding (SPE) module and a quaternion rotary attention (QRA) module. First, SPE embeds position information into self-attention in a rotational manner, leading to better learning of the features of movement sequences and audio sequences and an improved understanding of the connection between music and dance. Second, QRA represents and fuses 3D motion features and audio features as a series of quaternions, enabling the model to better learn the temporal coordination of music and dance under the complex temporal cycle conditions of dance generation. Finally, we conducted experiments on the AIST++ dataset, and the results show that our approach achieves better and more robust performance in generating accurate, high-quality dance movements. Our source code and dataset are available at https://github.com/MarasyZZ/QEAN and https://google.github.io/aistplusplus_dataset, respectively.
AB - The study of music-generated dance is a novel and challenging generation task. It takes a piece of music and seed motions as input and generates natural dance movements for the subsequent music. Transformer-based methods face challenges in time-series prediction tasks involving human movements and music because they struggle to capture the nonlinear relationships and temporal aspects, which can lead to issues such as joint deformation, role deviation, floating, and inconsistencies in the dance movements generated in response to the music. In this paper, we propose a quaternion-enhanced attention network for visual dance synthesis from a quaternion perspective, which consists of a spin position embedding (SPE) module and a quaternion rotary attention (QRA) module. First, SPE embeds position information into self-attention in a rotational manner, leading to better learning of the features of movement sequences and audio sequences and an improved understanding of the connection between music and dance. Second, QRA represents and fuses 3D motion features and audio features as a series of quaternions, enabling the model to better learn the temporal coordination of music and dance under the complex temporal cycle conditions of dance generation. Finally, we conducted experiments on the AIST++ dataset, and the results show that our approach achieves better and more robust performance in generating accurate, high-quality dance movements. Our source code and dataset are available at https://github.com/MarasyZZ/QEAN and https://google.github.io/aistplusplus_dataset, respectively.
KW - Animation generation task
KW - Dance generation
KW - Multi-modal task
KW - Quaternion network
KW - Time-series prediction task
UR - http://www.scopus.com/inward/record.url?scp=85190424263&partnerID=8YFLogxK
U2 - 10.1007/s00371-024-03376-5
DO - 10.1007/s00371-024-03376-5
M3 - Article
AN - SCOPUS:85190424263
SN - 0178-2789
VL - 41
SP - 961
EP - 973
JO - Visual Computer
JF - Visual Computer
IS - 2
ER -