Computer Vision for Human-Machine Interaction

Qiuhong Ke, Jun Liu, Mohammed Bennamoun, Senjian An, Ferdous Sohel, Farid Boussaid

Research output: Chapter in Book/Conference paper › Chapter › peer-review

68 Citations (Scopus)

Abstract

Human–machine interaction (HMI) refers to the communication and interaction between a human and a machine via a user interface. Natural user interfaces, such as gestures, have gained increasing attention because they allow humans to control machines through natural and intuitive behaviors. In gesture-based HMI, a sensor such as the Microsoft Kinect is used to capture human postures and motions, which are then processed to control a machine. The key task of gesture-based HMI is to recognize meaningful expressions of human motion from the data provided by the Kinect, namely RGB (red, green, blue), depth, and skeleton information. In this chapter, we focus on the gesture recognition task for HMI and introduce current deep learning methods that have been used for human motion analysis and RGB-D-based gesture recognition. More specifically, we briefly introduce convolutional neural networks (CNNs), and then present several CNN-based deep learning frameworks that have been used for gesture recognition with RGB, depth, and skeleton sequences.
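
To make the CNN-based pipeline described above concrete, the sketch below shows a minimal gesture classifier in PyTorch. It is an illustration only, not the chapter's architecture: it assumes a short clip of Kinect depth frames is stacked into the channel dimension of a single tensor, a common simple baseline for RGB-D gesture recognition. The class name DepthGestureCNN and all sizes (8 frames, 64x64 resolution, 20 gesture classes) are hypothetical.

import torch
import torch.nn as nn

class DepthGestureCNN(nn.Module):
    """Toy CNN that classifies a short stack of depth frames into gestures."""

    def __init__(self, num_frames: int = 8, num_classes: int = 20):
        super().__init__()
        # Each depth frame is treated as one input channel: input is (N, T, H, W).
        self.features = nn.Sequential(
            nn.Conv2d(num_frames, 32, kernel_size=3, padding=1),
            nn.ReLU(inplace=True),
            nn.MaxPool2d(2),          # 64x64 -> 32x32
            nn.Conv2d(32, 64, kernel_size=3, padding=1),
            nn.ReLU(inplace=True),
            nn.MaxPool2d(2),          # 32x32 -> 16x16
            nn.AdaptiveAvgPool2d(1),  # global average pooling -> (N, 64, 1, 1)
        )
        self.classifier = nn.Linear(64, num_classes)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        h = self.features(x).flatten(1)  # (N, 64)
        return self.classifier(h)        # per-gesture logits

if __name__ == "__main__":
    # Fake batch: 4 clips, each 8 depth frames at 64x64 resolution.
    clips = torch.randn(4, 8, 64, 64)
    logits = DepthGestureCNN()(clips)
    print(logits.shape)  # torch.Size([4, 20])

Stacking frames as channels discards fine temporal ordering; the frameworks surveyed in the chapter handle the temporal dimension more carefully, for example by encoding skeleton sequences or motion over time before applying the CNN.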
Original language: English
Title of host publication: Computer Vision for Assistive Healthcare
Editors: Marco Leo, Giovanni Maria Farinella
Publisher: Academic Press
Chapter: 5
Pages: 127-145
Number of pages: 19
ISBN (Print): 9780128134450
Publication status: Published - 2018

