Machine Vs. Deep Learning Comparison for Developing an International Sign Language Translator

dc.authoridERYILMAZ, MELTEM/0000-0001-9483-6164
dc.authoridUcan, Eylul/0000-0001-7138-7087
dc.authorscopusid57213371849
dc.authorscopusid57884249400
dc.authorscopusid57884503900
dc.authorscopusid57884752600
dc.authorscopusid57883754600
dc.contributor.authorEryilmaz, Meltem
dc.contributor.authorBalkaya, Ecem
dc.contributor.authorUcan, Eylul
dc.contributor.authorTuran, Gizem
dc.contributor.authorOral, Seden Gulay
dc.contributor.otherComputer Engineering
dc.date.accessioned2024-07-05T15:24:22Z
dc.date.available2024-07-05T15:24:22Z
dc.date.issued2022
dc.departmentAtılım Universityen_US
dc.department-temp[Eryilmaz, Meltem; Turan, Gizem; Oral, Seden Gulay] Atilim Univ, Fac Engn, Dept Comp Engn, Ankara, Turkey; [Balkaya, Ecem; Ucan, Eylul] Atilim Univ, Fac Engn, Dept Informat Syst Engn, Ankara, Turkeyen_US
dc.descriptionERYILMAZ, MELTEM/0000-0001-9483-6164; Ucan, Eylul/0000-0001-7138-7087en_US
dc.description.abstractThis study aims to enable deaf and hard-of-hearing people to communicate with other individuals, whether or not those individuals know sign language. A mobile application for video classification was developed using the MediaPipe library. Considering the problems that deaf and hard-of-hearing individuals face in Turkey and abroad, the modelling and training stages were carried out with an English language option. A real-time translation feature added to the application provides users with instant communication, which greatly reduces the communication problems experienced by hearing-impaired individuals. Machine learning and deep learning concepts were investigated in the study. Model creation and training were first carried out using the VGG16, OpenCV, Pandas, Keras, and os libraries. Because of the low success rate of the model built with VGG16, the MediaPipe library was used instead for the model creation and training stages. The reason is that the solutions available in MediaPipe can normalise coordinates in 3D by marking the regions of the human body to be detected. Being able to extract these coordinates independently of the background and body type in the dataset videos increases the success rate of the model during creation and training. In the experiments, the accuracy rate of the deep learning model is 85%, and the application can easily be integrated with other languages. It is concluded that the deep learning model is more accurate than the machine learning one and that the communication problem faced by hearing-impaired individuals in many countries can be substantially reduced.en_US
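As a concrete illustration of the landmark-extraction step described in the abstract, the sketch below shows how MediaPipe Hands can produce normalised 3D hand coordinates from a video, ready to feed a sequence classifier. This is a minimal sketch based only on the abstract: the frame cap, the two-hand limit, and the zero-padding scheme are assumptions for illustration, not the authors' exact pipeline.

import cv2
import mediapipe as mp
import numpy as np

mp_hands = mp.solutions.hands

def extract_landmarks(video_path, max_frames=30):
    # Hypothetical helper: returns one (21 landmarks x 3 coords x 2 hands)
    # feature vector per frame, zero-filled when a hand is not detected.
    hands = mp_hands.Hands(static_image_mode=False, max_num_hands=2,
                           min_detection_confidence=0.5)
    cap = cv2.VideoCapture(video_path)
    frames = []
    while cap.isOpened() and len(frames) < max_frames:
        ok, frame = cap.read()
        if not ok:
            break
        # MediaPipe expects RGB input; OpenCV decodes frames as BGR.
        result = hands.process(cv2.cvtColor(frame, cv2.COLOR_BGR2RGB))
        coords = np.zeros(21 * 3 * 2)
        if result.multi_hand_landmarks:
            for h, hand in enumerate(result.multi_hand_landmarks[:2]):
                # Landmarks are already normalised to the image frame,
                # which is what makes them robust to background and body type.
                pts = np.array([[lm.x, lm.y, lm.z] for lm in hand.landmark])
                coords[h * 63:(h + 1) * 63] = pts.flatten()
        frames.append(coords)
    cap.release()
    hands.close()
    return np.array(frames)  # shape: (num_frames, 126)

A sequence of such per-frame vectors could then be padded to a fixed length and passed to a Keras recurrent or dense classifier, one output class per sign.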
dc.identifier.citation0
dc.identifier.doi10.1080/0952813X.2022.2115560
dc.identifier.issn0952-813X
dc.identifier.issn1362-3079
dc.identifier.scopus2-s2.0-85137717392
dc.identifier.scopusqualityQ2
dc.identifier.urihttps://doi.org/10.1080/0952813X.2022.2115560
dc.identifier.urihttps://hdl.handle.net/20.500.14411/2426
dc.identifier.wosWOS:000849711000001
dc.identifier.wosqualityQ3
dc.language.isoenen_US
dc.publisherTaylor & Francis Ltden_US
dc.relation.publicationcategoryArticle - International Peer-Reviewed Journal - Institutional Faculty Memberen_US
dc.rightsinfo:eu-repo/semantics/closedAccessen_US
dc.subjectSign languageen_US
dc.subjecthearing lossen_US
dc.subjectvideo classificationen_US
dc.subjectmachine learningen_US
dc.subjectdeep learningen_US
dc.titleMachine Vs. Deep Learning Comparison for Developing an International Sign Language Translatoren_US
dc.typeArticleen_US
dspace.entity.typePublication
relation.isAuthorOfPublicationec6c4c06-14dd-4654-b3a6-04e89c8d3baf
relation.isAuthorOfPublication.latestForDiscoveryec6c4c06-14dd-4654-b3a6-04e89c8d3baf
relation.isOrgUnitOfPublicatione0809e2c-77a7-4f04-9cb0-4bccec9395fa
relation.isOrgUnitOfPublication.latestForDiscoverye0809e2c-77a7-4f04-9cb0-4bccec9395fa
