Machine Vs. Deep Learning Comparison for Developing an International Sign Language Translator

Date

2022

Journal Title

Journal ISSN

Volume Title

Publisher

Taylor & Francis Ltd

Open Access Color

Green Open Access

No

OpenAIRE Downloads

OpenAIRE Views

Publicly Funded

No
Impulse

Average

Influence

Average

Popularity

Average

Research Projects

Journal Issue

Abstract

This study aims to enable deaf and hard-of-hearing people to communicate both with individuals who know sign language and with those who do not. A mobile application for video classification was developed in the study using the MediaPipe library. Considering the problems that deaf and hard-of-hearing individuals face in Turkey and abroad, the modelling and training stages were carried out with the English language option. A real-time translation feature added to the study provides individuals with instant communication; in this way, the communication problems experienced by hearing-impaired individuals can be greatly reduced. Both machine learning and deep learning approaches were investigated. Model creation and training were first carried out using the VGG16, OpenCV, Pandas, Keras, and os libraries. Due to the low success rate of the model created with VGG16, the MediaPipe library was used instead in the formation and training stages of the model. The reason is that the solutions available in the MediaPipe library can normalise coordinates in 3D by marking the regions to be detected on the human body. Being able to extract the coordinates independently of the background and body type in the dataset videos increases the success rate of the model during formation and training. As a result of the experiments, the deep learning model reaches an accuracy rate of 85%, and the application can easily be integrated with different languages. It is concluded that the deep learning model is more accurate than the machine learning one, and that the communication problems faced by hearing-impaired individuals in many countries can be reduced.
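The abstract's central technical point is that re-expressing MediaPipe-style landmark coordinates relative to a reference point and scale makes the features independent of the signer's position in the frame and of body size. A minimal sketch of that idea is below; the helper function is hypothetical (not the authors' code), and it assumes hand landmarks arrive as 21 (x, y, z) triples with landmark 0 being the wrist, as in MediaPipe's hand model.

```python
# Hypothetical sketch of the landmark normalisation described in the
# abstract: MediaPipe-style hand landmarks (21 points of x, y, z) are
# re-expressed relative to the wrist and rescaled, so the resulting
# feature vector no longer depends on where the signer stands in the
# frame or on hand size.

def normalise_landmarks(landmarks):
    """landmarks: list of 21 (x, y, z) tuples; landmark 0 is the wrist."""
    wx, wy, wz = landmarks[0]
    # Translate so the wrist sits at the origin (position invariance).
    centred = [(x - wx, y - wy, z - wz) for x, y, z in landmarks]
    # Scale by the largest coordinate magnitude (size invariance).
    scale = max(max(abs(c) for c in pt) for pt in centred) or 1.0
    return [(x / scale, y / scale, z / scale) for x, y, z in centred]

# Two synthetic hands at different positions and sizes should yield
# (near-)identical feature vectors after normalisation.
hand_a = [(0.2 + 0.01 * i, 0.3 + 0.02 * i, 0.0) for i in range(21)]
hand_b = [(0.6 + 0.02 * i, 0.1 + 0.04 * i, 0.0) for i in range(21)]
feats_a = normalise_landmarks(hand_a)
feats_b = normalise_landmarks(hand_b)
```

Features like these, rather than raw pixels, are what made the MediaPipe-based model outperform the VGG16 image classifier in the study: the classifier never sees the background at all.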

Description

ERYILMAZ, MELTEM/0000-0001-9483-6164; Ucan, Eylul/0000-0001-7138-7087

Keywords

Sign language, hearing loss, video classification, machine learning, deep learning

Turkish CoHE Thesis Center URL

Fields of Science

0202 electrical engineering, electronic engineering, information engineering, 02 engineering and technology

Citation

WoS Q

Q3

Scopus Q

Q2
OpenCitations Citation Count
1

Source

Journal of Experimental & Theoretical Artificial Intelligence

Volume

36

Issue

Start Page

975

End Page

984

Collections

PlumX Metrics
Citations

Scopus : 1

Captures

Mendeley Readers : 25

SCOPUS™ Citations

1

checked on Jan 23, 2026

Web of Science™ Citations

1

checked on Jan 23, 2026

Page Views

14

checked on Jan 23, 2026

OpenAlex FWCI
0.14769002

Sustainable Development Goals
