Synthesising 2D videos from 3D data: enlarging sparse 2D video datasets for machine learning applications

Research output: Contribution to journal › Conference article › peer-review


This study outlines a technique to repurpose widely available high-resolution three-dimensional (3D) motion capture data for training a machine learning model to estimate ground reaction forces from two-dimensional (2D) pose estimation keypoints. Keypoints describe anatomically related landmarks in 2D image coordinates. These landmarks can be calculated from 3D motion capture data and projected onto different image planes, synthesising a near-infinite number of 2D camera views. This highly efficient method of synthesising 2D camera views can be used to enlarge sparse 2D video databases of sporting movements. We show the feasibility of this approach using a sidestepping dataset and evaluate the optimal camera number and location required to estimate 3D ground reaction forces. The method presented, and the additional insights gained from this approach, can be used to optimise corporeal data capture by sports practitioners.
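The projection step described above can be illustrated with a minimal pinhole-camera sketch. This is not the authors' implementation: the intrinsics (focal length `f`, principal point `cx`, `cy`), the camera placement, and the `project_keypoints` helper are all hypothetical, and world z is assumed to be "up".

```python
import math

def normalize(v):
    n = math.sqrt(sum(x * x for x in v))
    return tuple(x / n for x in v)

def cross(a, b):
    return (a[1] * b[2] - a[2] * b[1],
            a[2] * b[0] - a[0] * b[2],
            a[0] * b[1] - a[1] * b[0])

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def project_keypoints(points_3d, cam_pos, target=(0.0, 0.0, 0.0),
                      f=1000.0, cx=500.0, cy=500.0):
    """Project 3D landmarks to 2D pixel coordinates for a pinhole camera
    at cam_pos whose optical axis points at target.

    Hypothetical intrinsics (f, cx, cy); degenerate if the optical axis
    is parallel to the world up vector (e.g. a directly overhead camera).
    """
    forward = normalize(tuple(t - c for t, c in zip(target, cam_pos)))
    right = normalize(cross(forward, (0.0, 0.0, 1.0)))  # world z is "up"
    up = cross(right, forward)
    keypoints_2d = []
    for p in points_3d:
        d = tuple(pi - ci for pi, ci in zip(p, cam_pos))
        xc, yc, zc = dot(right, d), dot(up, d), dot(forward, d)
        keypoints_2d.append((f * xc / zc + cx, f * yc / zc + cy))
    return keypoints_2d

# Synthesise many views of one landmark: cameras on a circle around the
# subject, every 30 degrees, 3 m out and 1.5 m up (all values illustrative).
landmark = [(0.0, 0.0, 1.0)]
views = [project_keypoints(landmark,
                           (3.0 * math.cos(a), 3.0 * math.sin(a), 1.5))
         for a in (math.radians(t) for t in range(0, 360, 30))]
```

Sweeping the camera position this way is what turns a single 3D capture into an arbitrarily large set of 2D training views; intrinsics and placement would be varied to match the target video domain.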
Original language: English
Article number: 121
Pages (from-to): 503-506
Number of pages: 4
Journal: ISBS Proceedings Archive
Issue number: 1
Publication status: Published - 2022
Event: 40th International Conference on Biomechanics in Sport - Liverpool John Moores University, Liverpool, United Kingdom
Duration: 19 Jul 2022 - 23 Jul 2022


