Machine Vs. Deep Learning Comparison for Developing an International Sign Language Translator
dc.authorid | ERYILMAZ, MELTEM/0000-0001-9483-6164 | |
dc.authorid | Ucan, Eylul/0000-0001-7138-7087 | |
dc.authorscopusid | 57213371849 | |
dc.authorscopusid | 57884249400 | |
dc.authorscopusid | 57884503900 | |
dc.authorscopusid | 57884752600 | |
dc.authorscopusid | 57883754600 | |
dc.contributor.author | Eryilmaz, Meltem | |
dc.contributor.author | Balkaya, Ecem | |
dc.contributor.author | Ucan, Eylul | |
dc.contributor.author | Turan, Gizem | |
dc.contributor.author | Oral, Seden Gulay | |
dc.contributor.other | Computer Engineering | |
dc.date.accessioned | 2024-07-05T15:24:22Z | |
dc.date.available | 2024-07-05T15:24:22Z | |
dc.date.issued | 2022 | |
dc.department | Atılım University | en_US |
dc.department-temp | [Eryilmaz, Meltem; Turan, Gizem; Oral, Seden Gulay] Atilim Univ, Fac Engn, Dept Comp Engn, Ankara, Turkey; [Balkaya, Ecem; Ucan, Eylul] Atilim Univ, Fac Engn, Dept Informat Syst Engn, Ankara, Turkey | en_US |
dc.description | ERYILMAZ, MELTEM/0000-0001-9483-6164; Ucan, Eylul/0000-0001-7138-7087 | en_US |
dc.description.abstract | This study aims to enable deaf and hard-of-hearing people to communicate with other individuals, whether or not those individuals know sign language. A mobile application for video classification was developed using the MediaPipe library. Considering the problems that deaf and hard-of-hearing individuals face in Turkey and abroad, the modelling and training stages were carried out with the English language option. The real-time translation feature added to the study provides individuals with instant communication, so the communication problems experienced by hearing-impaired individuals will be greatly reduced. Machine learning and deep learning concepts were investigated in the study. Model creation and training were first carried out using the VGG16, OpenCV, Pandas, Keras, and Os libraries. Due to the low success rate of the model created with VGG16, the MediaPipe library was used instead for the model creation and training stages. The reason is that the solutions available in the MediaPipe library can normalise coordinates in 3D by marking the regions of the human body to be detected. Being able to extract coordinates independently of the background and body type in the dataset videos increases the success rate of the model during creation and training. As a result of the experiments, the accuracy rate of the deep learning model is 85%, and the application can easily be integrated with different languages. It is concluded that the deep learning model is more accurate than the machine learning one, and that the communication problems faced by hearing-impaired individuals in many countries can be reduced. | en_US |
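dc.description.note | The abstract describes extracting 3D-normalised body coordinates with MediaPipe from dataset videos, independently of background and body type, as the input for the deep learning model. The sketch below illustrates that general idea only; it is not the authors' code. The use of MediaPipe Holistic, the specific landmark sets (pose and both hands), the flattened per-frame feature layout, and the sample file name are assumptions for illustration.

# Minimal sketch: per-frame landmark extraction from a video with MediaPipe + OpenCV.
# Assumptions (not from the paper): Holistic landmarks, pose + both hands only,
# one flattened keypoint vector per frame, hypothetical clip "hello_sign.mp4".
import cv2
import mediapipe as mp
import numpy as np

mp_holistic = mp.solutions.holistic

def extract_keypoints(results):
    """Flatten pose and hand landmarks (normalised x, y, z) into one vector."""
    pose = (np.array([[lm.x, lm.y, lm.z] for lm in results.pose_landmarks.landmark]).flatten()
            if results.pose_landmarks else np.zeros(33 * 3))
    lh = (np.array([[lm.x, lm.y, lm.z] for lm in results.left_hand_landmarks.landmark]).flatten()
          if results.left_hand_landmarks else np.zeros(21 * 3))
    rh = (np.array([[lm.x, lm.y, lm.z] for lm in results.right_hand_landmarks.landmark]).flatten()
          if results.right_hand_landmarks else np.zeros(21 * 3))
    return np.concatenate([pose, lh, rh])

def video_to_features(video_path):
    """Read a video with OpenCV and return one keypoint vector per frame."""
    frames = []
    cap = cv2.VideoCapture(video_path)
    with mp_holistic.Holistic(min_detection_confidence=0.5,
                              min_tracking_confidence=0.5) as holistic:
        while cap.isOpened():
            ok, frame = cap.read()
            if not ok:
                break
            # MediaPipe expects RGB input; OpenCV reads frames as BGR.
            results = holistic.process(cv2.cvtColor(frame, cv2.COLOR_BGR2RGB))
            frames.append(extract_keypoints(results))
    cap.release()
    return np.array(frames)  # shape: (num_frames, 33*3 + 2*21*3)

if __name__ == "__main__":
    features = video_to_features("hello_sign.mp4")  # hypothetical sample clip
    print(features.shape)

A sequence of such per-frame vectors could then be fed to a sequence classifier (e.g. a Keras model), which is consistent with the video-classification setup the abstract describes. | en_US |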
dc.identifier.citation | 0 | |
dc.identifier.doi | 10.1080/0952813X.2022.2115560 | |
dc.identifier.issn | 0952-813X | |
dc.identifier.issn | 1362-3079 | |
dc.identifier.scopus | 2-s2.0-85137717392 | |
dc.identifier.scopusquality | Q2 | |
dc.identifier.uri | https://doi.org/10.1080/0952813X.2022.2115560 | |
dc.identifier.uri | https://hdl.handle.net/20.500.14411/2426 | |
dc.identifier.wos | WOS:000849711000001 | |
dc.identifier.wosquality | Q3 | |
dc.language.iso | en | en_US |
dc.publisher | Taylor & Francis Ltd | en_US |
dc.relation.publicationcategory | Article - International Refereed Journal - Institutional Academic Staff | en_US |
dc.rights | info:eu-repo/semantics/closedAccess | en_US |
dc.subject | Sign language | en_US |
dc.subject | hearing loss | en_US |
dc.subject | video classification | en_US |
dc.subject | machine learning | en_US |
dc.subject | deep learning | en_US |
dc.title | Machine Vs. Deep Learning Comparison for Developing an International Sign Language Translator | en_US |
dc.type | Article | en_US |
dspace.entity.type | Publication | |
relation.isAuthorOfPublication | ec6c4c06-14dd-4654-b3a6-04e89c8d3baf | |
relation.isAuthorOfPublication.latestForDiscovery | ec6c4c06-14dd-4654-b3a6-04e89c8d3baf | |
relation.isOrgUnitOfPublication | e0809e2c-77a7-4f04-9cb0-4bccec9395fa | |
relation.isOrgUnitOfPublication.latestForDiscovery | e0809e2c-77a7-4f04-9cb0-4bccec9395fa |