Computer Vision for Human-Machine Interaction

Qiuhong Ke, Jun Liu, Mohammed Bennamoun, Senjian An, Ferdous Sohel, Farid Boussaid

Research output: Chapter in Book/Conference paper › Chapter

3 Citations (Scopus)

Abstract

Human–machine interaction (HMI) refers to the communication and interaction between a human and a machine via a user interface. Natural user interfaces such as gestures have gained increasing attention, as they allow humans to control machines through natural and intuitive behaviors. In gesture-based HMI, a sensor such as the Microsoft Kinect is used to capture human postures and motions, which are processed to control a machine. The key task of gesture-based HMI is to recognize meaningful expressions of human motion using the data provided by the Kinect, including RGB (red, green, blue), depth, and skeleton information. In this chapter, we focus on the gesture recognition task for HMI and introduce current deep learning methods that have been applied to human motion analysis and RGB-D-based gesture recognition. More specifically, we briefly introduce convolutional neural networks (CNNs) and then present several CNN-based deep learning frameworks that have been used for gesture recognition from RGB, depth, and skeleton sequences.
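
To make this concrete, the sketch below shows one common way to feed a skeleton sequence to a CNN: the x, y, z coordinates of the joints over time are arranged as a 3-channel, joints-by-frames image-like tensor and classified with a small convolutional network. This is an illustrative example written for this summary, not the chapter's architecture; the layer sizes, joint count, frame count, and number of gesture classes are all assumptions, and PyTorch is used only as convenient notation.

import torch
import torch.nn as nn

class SkeletonCNN(nn.Module):
    # Classifies a skeleton sequence arranged as a 3-channel "image":
    # channels = (x, y, z), height = joints, width = frames.
    def __init__(self, num_classes=10, num_joints=25, num_frames=32):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 32, kernel_size=3, padding=1),
            nn.ReLU(),
            nn.MaxPool2d(2),  # halves the joint and frame dimensions
            nn.Conv2d(32, 64, kernel_size=3, padding=1),
            nn.ReLU(),
            nn.MaxPool2d(2),
        )
        # Two 2x poolings shrink each spatial dimension by a factor of 4.
        self.classifier = nn.Linear(
            64 * (num_joints // 4) * (num_frames // 4), num_classes)

    def forward(self, x):
        # x: (batch, 3, num_joints, num_frames)
        return self.classifier(self.features(x).flatten(1))

# Example: one Kinect-v2-style clip (25 joints tracked over 32 frames).
clip = torch.randn(1, 3, 25, 32)
print(SkeletonCNN()(clip).shape)  # torch.Size([1, 10])

RGB and depth sequences can be handled analogously by stacking frames (or frame differences) along the channel dimension; the chapter presents more elaborate CNN frameworks for all three modalities.
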
Original language: English
Title of host publication: Computer Vision for Assistive Healthcare
Editors: Marco Leo, Giovanni Maria Farinella
Publisher: Academic Press
Chapter: 5
Pages: 127-145
ISBN (Print): 9780128134450
Publication status: Published - 2018

Fingerprint

Gesture recognition; Computer vision; User interfaces; Neural networks; Communication; Sensors; Deep learning

Cite this

Ke, Q., Liu, J., Bennamoun, M., An, S., Sohel, F., & Boussaid, F. (2018). Computer Vision for Human-Machine Interaction. In M. Leo, & G. M. Farinella (Eds.), Computer Vision for Assistive Healthcare (pp. 127-145). Academic Press.