Classify, Detect and Tell: Real-Time American Sign Language

Zainy M. Malakan, Hezam A. Albaqami

Research output: Chapter in Book/Conference paper › Conference paper › peer-review

2 Citations (Scopus)

Abstract

Communication is an essential part of life: it is the main tool people use to express their ideas and feelings to others. It not only enables the sharing of information but also helps build relationships. Verbal expression is one of the most effective means of communication for hearing people, but for people with speech and hearing disabilities, sign language is the primary means of communication, and they face difficulty communicating with those who do not understand it. People who rely on sign language deserve to be engaged in the community and to be understood, so it is vital to develop a system that enables real-time communication between the two groups. This paper addresses the recognition of American Sign Language hand gestures using deep learning techniques. Recognition rates were measured on sample data prepared in one of three ways: normalizing the image, converting it to binary black and white, and keeping the color image with the background removed. Of the three methods, colored images with the background removed achieved the best results.
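
The three input-preparation variants compared in the abstract can be sketched concretely. The following is a minimal, illustrative Python/OpenCV sketch, not the authors' published pipeline: the abstract does not name the algorithms used, so the choices of min-max scaling for normalization, Otsu thresholding for the binary variant, and GrabCut for background removal are assumptions made here for illustration.

```python
import cv2
import numpy as np

def normalize_image(img):
    """Variant 1 (assumed): scale pixel intensities to [0, 1]."""
    return img.astype(np.float32) / 255.0

def to_binary(img):
    """Variant 2 (assumed): grayscale, then Otsu thresholding to black and white."""
    gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
    _, bw = cv2.threshold(gray, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)
    return bw

def color_no_background(img):
    """Variant 3 (assumed): keep the colored hand, zero out the background.

    GrabCut is used purely for illustration; the central rectangle assumes
    the signing hand is roughly centered in the frame.
    """
    h, w = img.shape[:2]
    mask = np.zeros((h, w), np.uint8)
    bgd_model = np.zeros((1, 65), np.float64)
    fgd_model = np.zeros((1, 65), np.float64)
    rect = (w // 8, h // 8, 3 * w // 4, 3 * h // 4)
    cv2.grabCut(img, mask, rect, bgd_model, fgd_model, 5, cv2.GC_INIT_WITH_RECT)
    fg = np.where((mask == cv2.GC_FGD) | (mask == cv2.GC_PR_FGD), 1, 0).astype(np.uint8)
    return img * fg[:, :, np.newaxis]
```

Under this framing, each function maps a captured BGR frame to one of the three input formats before classification, and the paper's reported best result would correspond to feeding the output of color_no_background to the recognizer.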

Original language: English
Title of host publication: 2021 National Computing Colleges Conference (NCCC)
Place of publication: USA
Publisher: IEEE, Institute of Electrical and Electronics Engineers
Pages: 1097-1102
Number of pages: 6
ISBN (Electronic): 978-1-7281-6719-0
DOIs
Publication status: Published - 27 Mar 2021
Event: IEEE 4th National Computing Colleges Conference (NCCC)
Duration: 27 Mar 2021 - 28 Mar 2021

Conference

Conference: IEEE 4th National Computing Colleges Conference (NCCC)
Period: 27/03/21 - 28/03/21
