A Location-Velocity-Temporal Attention LSTM Model for Pedestrian Trajectory Prediction

Research output: Contribution to journal › Article › peer-review

30 Citations (Scopus)

Abstract

Pedestrian trajectory prediction is fundamental to a wide range of scientific research and industrial applications. Most current advanced trajectory prediction methods incorporate context information, such as the pedestrian neighbourhood, labelled static obstacles, and the background scene, into the prediction process. In contrast to these methods, which require rich context, the method in our paper focuses on predicting a pedestrian's future trajectory using only the observed part of his/her trajectory. Our method, which we refer to as LVTA, is a Location-Velocity-Temporal Attention LSTM model in which two temporal attention mechanisms are applied to the hidden state vectors from the location and velocity LSTM layers. In addition, a location-velocity attention layer embedded inside a tweak module is used to improve the predicted location and velocity coordinates before they are passed to the next time step. Extensive experiments conducted on three large benchmark datasets, and comparison with eleven existing trajectory prediction methods, demonstrate that LVTA achieves competitive prediction performance. Specifically, LVTA attains 9.19 pixels Average Displacement Error (ADE) and 17.28 pixels Final Displacement Error (FDE) on the Central Station dataset, and 0.46 metres ADE and 0.92 metres FDE on the ETH&UCY datasets. Furthermore, evaluation of LVTA's ability to generate trajectories of different prediction lengths and to handle new scenes without retraining confirms that it has good generalizability.
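The abstract describes the LVTA architecture only at a high level (twin location and velocity LSTMs, temporal attention over their hidden states, and a tweak module with location-velocity attention). The sketch below shows one plausible way such a model could be wired up in PyTorch. All layer names, hidden sizes, the additive attention scoring, and the gated blending used as a stand-in for the tweak module are assumptions made for illustration; they are not the paper's actual equations. The ade_fde helper shows how the reported ADE and FDE metrics are conventionally computed.

# Minimal illustrative sketch of an LVTA-style predictor in PyTorch.
# Layer names, hidden sizes, the additive attention scoring, and the gated
# "tweak" blending are assumptions for illustration only; the paper's
# actual equations are not given in the abstract.
import torch
import torch.nn as nn
import torch.nn.functional as F


class TemporalAttention(nn.Module):
    """Weights past hidden states against the current one (assumed additive form)."""

    def __init__(self, hidden_dim):
        super().__init__()
        self.score = nn.Linear(2 * hidden_dim, 1)

    def forward(self, history, current):
        # history: (batch, T, hidden), current: (batch, hidden)
        expanded = current.unsqueeze(1).expand(-1, history.size(1), -1)
        weights = F.softmax(
            self.score(torch.cat([history, expanded], dim=-1)).squeeze(-1), dim=1
        )
        return torch.bmm(weights.unsqueeze(1), history).squeeze(1)  # context vector


class LVTASketch(nn.Module):
    def __init__(self, hidden_dim=64):
        super().__init__()
        self.hidden_dim = hidden_dim
        self.loc_lstm = nn.LSTMCell(2, hidden_dim)   # (x, y) locations
        self.vel_lstm = nn.LSTMCell(2, hidden_dim)   # (dx, dy) velocities
        self.loc_attn = TemporalAttention(hidden_dim)
        self.vel_attn = TemporalAttention(hidden_dim)
        self.loc_out = nn.Linear(2 * hidden_dim, 2)
        self.vel_out = nn.Linear(2 * hidden_dim, 2)
        # Stand-in for the tweak module: a gate that blends the predicted
        # location with the velocity-propagated one before the next step.
        self.tweak_gate = nn.Linear(4, 2)

    def forward(self, obs_loc, pred_len):
        # obs_loc: (batch, obs_len, 2) observed coordinates; no scene context is used.
        batch, obs_len, _ = obs_loc.shape
        obs_vel = torch.cat(
            [torch.zeros_like(obs_loc[:, :1]), obs_loc[:, 1:] - obs_loc[:, :-1]], dim=1
        )
        h_l = obs_loc.new_zeros(batch, self.hidden_dim)
        c_l, h_v, c_v = h_l.clone(), h_l.clone(), h_l.clone()
        loc_hist, vel_hist = [], []

        # Encode the observed part of the trajectory.
        for t in range(obs_len):
            h_l, c_l = self.loc_lstm(obs_loc[:, t], (h_l, c_l))
            h_v, c_v = self.vel_lstm(obs_vel[:, t], (h_v, c_v))
            loc_hist.append(h_l)
            vel_hist.append(h_v)

        loc, vel = obs_loc[:, -1], obs_vel[:, -1]
        preds = []
        # Decode future positions, attending over the hidden-state histories.
        for _ in range(pred_len):
            h_l, c_l = self.loc_lstm(loc, (h_l, c_l))
            h_v, c_v = self.vel_lstm(vel, (h_v, c_v))
            loc_hist.append(h_l)
            vel_hist.append(h_v)
            ctx_l = self.loc_attn(torch.stack(loc_hist, dim=1), h_l)
            ctx_v = self.vel_attn(torch.stack(vel_hist, dim=1), h_v)
            vel = self.vel_out(torch.cat([h_v, ctx_v], dim=-1))
            raw_loc = self.loc_out(torch.cat([h_l, ctx_l], dim=-1))
            gate = torch.sigmoid(self.tweak_gate(torch.cat([raw_loc, loc + vel], dim=-1)))
            loc = gate * raw_loc + (1 - gate) * (loc + vel)
            preds.append(loc)
        return torch.stack(preds, dim=1)  # (batch, pred_len, 2)


def ade_fde(pred, gt):
    """Average and Final Displacement Error, in the same units as the inputs."""
    dist = torch.linalg.norm(pred - gt, dim=-1)  # (batch, pred_len)
    return dist.mean().item(), dist[:, -1].mean().item()


if __name__ == "__main__":
    model = LVTASketch()
    obs = torch.randn(4, 8, 2)            # 8 observed steps for 4 pedestrians
    future = model(obs, pred_len=12)      # 12 predicted steps
    print(future.shape, ade_fde(future, torch.randn(4, 12, 2)))

Because the decoder in this sketch is simply unrolled for pred_len steps, the same trained weights can produce trajectories of different prediction lengths, which is consistent with the generalizability evaluation mentioned in the abstract.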
Original language: English
Article number: 9020049
Pages (from-to): 44576-44589
Number of pages: 14
Journal: IEEE Access
Volume: 8
DOIs
Publication status: Published - 2020
