Synthesising 2D videos from 3D data: enlarging sparse 2D video datasets for machine learning applications

Research output: Chapter in Book/Conference paper › Conference paper › peer-review

Abstract

This study outlines a technique to repurpose widely available high-resolution three-dimensional (3D) motion capture data for training a machine learning model to estimate ground reaction forces from two-dimensional (2D) pose estimation keypoints. Keypoints describe anatomically related landmarks in 2D image coordinates. These landmarks can be calculated from 3D motion capture data and projected onto different image planes, synthesising a near-infinite number of 2D camera views. This highly efficient method of synthesising 2D camera views can be used to enlarge sparse 2D video databases of sporting movements. We show the feasibility of this approach using a sidestepping dataset and evaluate the optimal number and placement of cameras required to estimate 3D ground reaction forces. The method presented, and the additional insights gained from this approach, can be used to optimise corporeal data capture by sports practitioners.
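The projection step described in the abstract can be illustrated with a minimal sketch (not the authors' implementation), assuming a simple pinhole camera model with an arbitrary focal length and a ring of virtual cameras around the capture volume; the helper functions, landmark array, and camera parameters below are illustrative only.

```python
import numpy as np

def look_at_extrinsics(cam_pos, target=np.zeros(3), up=np.array([0.0, 0.0, 1.0])):
    """Build a world-to-camera rotation and translation so a camera at
    `cam_pos` looks toward `target` (Z-up world assumed)."""
    forward = target - cam_pos
    forward /= np.linalg.norm(forward)
    right = np.cross(forward, up)
    right /= np.linalg.norm(right)
    down = np.cross(forward, right)
    R = np.stack([right, down, forward])       # rows are the camera axes
    t = -R @ cam_pos
    return R, t

def project_points(points_3d, R, t, focal_px=1000.0, image_size=(1920, 1080)):
    """Project Nx3 world-space landmarks to Nx2 pixel coordinates (pinhole model)."""
    cx, cy = image_size[0] / 2.0, image_size[1] / 2.0
    cam = points_3d @ R.T + t                   # world -> camera coordinates
    uv = focal_px * cam[:, :2] / cam[:, 2:3]    # perspective division
    return uv + np.array([cx, cy])              # shift to the image centre

# Illustrative example: synthesise 2D keypoints for virtual cameras placed
# every 10 degrees around the subject at a 5 m radius and 1.5 m height.
landmarks_3d = np.random.rand(17, 3)            # stand-in for one frame of mocap landmarks
for azimuth in np.deg2rad(np.arange(0, 360, 10)):
    cam_pos = np.array([5.0 * np.cos(azimuth), 5.0 * np.sin(azimuth), 1.5])
    R, t = look_at_extrinsics(cam_pos)
    keypoints_2d = project_points(landmarks_3d, R, t)   # one synthetic camera view
```

Sweeping the camera position (and, if desired, focal length and height) in this way yields arbitrarily many 2D views of the same motion trial, which is the mechanism the abstract relies on to enlarge a sparse 2D video dataset.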
Original language: English
Title of host publication: ISBS Proceedings Archive
Pages: 503-506
Number of pages: 4
Volume: 40
Publication status: Published - 2022
Event: 40th International Conference on Biomechanics in Sport, Liverpool John Moores University, Liverpool, United Kingdom
Duration: 19 Jul 2022 - 23 Jul 2022
https://isbs.org/isbs-conference

Conference

Conference: 40th International Conference on Biomechanics in Sport
Country/Territory: United Kingdom
City: Liverpool
Period: 19/07/22 - 23/07/22
Internet address: https://isbs.org/isbs-conference
