Search Results

Now showing 1 - 10 of 14
  • Article
    Citation - WoS: 1
    Citation - Scopus: 1
    Machine Vs. Deep Learning Comparison for Developing an International Sign Language Translator
    (Taylor & Francis Ltd, 2022) Eryilmaz, Meltem; Balkaya, Ecem; Ucan, Eylul; Turan, Gizem; Oral, Seden Gulay
    This study aims to enable deaf and hard-of-hearing people to communicate with other individuals, whether or not they know sign language. A mobile application for video classification was developed using the MediaPipe library. Considering the problems that deaf and hearing-impaired individuals face in Turkey and abroad, the modelling and training stages were carried out with the English language option. With the real-time translation feature added to the application, individuals were provided with instant communication; in this way, the communication problems experienced by hearing-impaired individuals can be greatly reduced. Machine learning and deep learning concepts were investigated in the study. Model creation and training were initially carried out using the VGG16, OpenCV, Pandas, Keras, and os libraries. Due to the low success rate of the model created with VGG16, the MediaPipe library was used instead for model creation and training. The reason is that the solutions available in the MediaPipe library can normalise coordinates in 3D by marking the regions of the human body to be detected. Being able to extract coordinates independently of the background and body type in the dataset videos increases the success rate of the model during creation and training. Experimentally, the accuracy of the deep learning model is 85%, and the application can be easily integrated with different languages. It is concluded that the deep learning model is more accurate than the machine learning one and that the communication problems faced by hearing-impaired individuals in many countries can be reduced.
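    As a rough illustration of the landmark-based approach described above, the sketch below extracts background-independent 3D coordinates from a video with the MediaPipe Holistic solution. The video path, padding scheme, and feature layout are assumptions for demonstration, not the authors' implementation.

```python
# Sketch: background-independent landmark features via MediaPipe Holistic.
# The video path and fixed-length padding are illustrative assumptions.
import cv2
import mediapipe as mp
import numpy as np

def extract_landmarks(video_path):
    """Return per-frame normalized (x, y, z) pose and hand coordinates."""
    frames = []
    cap = cv2.VideoCapture(video_path)
    with mp.solutions.holistic.Holistic(static_image_mode=False) as holistic:
        while cap.isOpened():
            ok, frame = cap.read()
            if not ok:
                break
            results = holistic.process(cv2.cvtColor(frame, cv2.COLOR_BGR2RGB))
            coords = []
            # Pose has 33 landmarks, each hand 21; pad missing detections
            # so every frame yields a vector of the same length.
            for lm_set, n in ((results.pose_landmarks, 33),
                              (results.left_hand_landmarks, 21),
                              (results.right_hand_landmarks, 21)):
                if lm_set:
                    coords.extend((lm.x, lm.y, lm.z) for lm in lm_set.landmark)
                else:
                    coords.extend([(0.0, 0.0, 0.0)] * n)
            frames.append(np.asarray(coords).ravel())
    cap.release()
    return np.asarray(frames)  # shape: (num_frames, 225)
```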
  • Article
    Citation - WoS: 29
    Citation - Scopus: 43
    Text Classification Using Improved Bidirectional Transformer
    (Wiley, 2022) Tezgider, Murat; Yıldız, Beytullah; Aydin, Galip
    Text data have an important place in our daily life. A huge amount of text data is generated every day. As a result, automation becomes necessary to handle such large volumes of text. Recently, we are witnessing important developments with the adaptation of new approaches in text processing. Attention mechanisms and transformers are emerging as methods with significant potential for text processing. In this study, we introduced a bidirectional transformer (BiTransformer) constructed using two transformer encoder blocks that utilize bidirectional position encoding to take into account the forward and backward position information of text data. We also created models to evaluate the contribution of attention mechanisms to the classification process. Four models, including long short term memory, attention, transformer, and BiTransformer, were used to conduct experiments on a large Turkish text dataset consisting of 30 categories. The effect of using pretrained embeddings on the models was also investigated. Experimental results show that the classification models using transformer and attention give promising results compared with classical deep learning models. We observed that the BiTransformer we proposed showed superior performance in text classification.
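    A conceptual Keras sketch of the BiTransformer idea follows: two encoder blocks whose position encodings run forward and backward over the sequence, with their outputs merged for classification. Layer sizes, the vocabulary, and the sinusoidal encoding are illustrative assumptions, not the authors' configuration.

```python
# Conceptual sketch of a bidirectional transformer (BiTransformer):
# two encoder blocks with forward and backward position encodings.
import numpy as np
import tensorflow as tf
from tensorflow.keras import layers

def positions(seq_len, dim, reverse=False):
    """Fixed sinusoidal position encoding, optionally reversed."""
    idx = np.arange(seq_len)[::-1] if reverse else np.arange(seq_len)
    angles = idx[:, None] / np.power(10000.0, 2 * (np.arange(dim)[None, :] // 2) / dim)
    enc = np.zeros((seq_len, dim), dtype="float32")
    enc[:, 0::2] = np.sin(angles[:, 0::2])
    enc[:, 1::2] = np.cos(angles[:, 1::2])
    return tf.constant(enc)

def encoder_block(x, heads=4, dim=128, ff=256):
    attn = layers.MultiHeadAttention(num_heads=heads, key_dim=dim // heads)(x, x)
    x = layers.LayerNormalization()(x + attn)
    h = layers.Dense(ff, activation="relu")(x)
    h = layers.Dense(dim)(h)
    return layers.LayerNormalization()(x + h)

SEQ, DIM, CLASSES = 128, 128, 30                 # 30 categories, as in the paper
tokens = layers.Input(shape=(SEQ,), dtype="int32")
emb = layers.Embedding(30000, DIM)(tokens)       # assumed vocabulary size
pe_fwd, pe_bwd = positions(SEQ, DIM), positions(SEQ, DIM, reverse=True)
fwd = encoder_block(layers.Lambda(lambda t: t + pe_fwd)(emb))
bwd = encoder_block(layers.Lambda(lambda t: t + pe_bwd)(emb))
merged = layers.GlobalAveragePooling1D()(layers.Concatenate()([fwd, bwd]))
out = layers.Dense(CLASSES, activation="softmax")(merged)
model = tf.keras.Model(tokens, out)
```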
  • Review
    Citation - WoS: 7
    Citation - Scopus: 9
    A Survey of COVID-19 Diagnosis Using Routine Blood Tests With the Aid of Artificial Intelligence Techniques
    (Mdpi, 2023) Habashi, Soheila Abbasi; Koyuncu, Murat; Alizadehsani, Roohallah
    Severe Acute Respiratory Syndrome Coronavirus 2 (SARS-CoV-2), causing a disease called COVID-19, is a class of acute respiratory syndrome that has considerably affected the global economy and healthcare system. This virus is diagnosed using a traditional technique known as the Reverse Transcription Polymerase Chain Reaction (RT-PCR) test. However, RT-PCR customarily outputs many false-negative and incorrect results. Current works indicate that COVID-19 can also be diagnosed using imaging modalities such as CT scans and X-rays, as well as blood tests. Nevertheless, X-rays and CT scans cannot always be used for patient screening because of high costs, radiation doses, and an insufficient number of devices. Therefore, there is a requirement for a less expensive and faster diagnostic model to recognize the positive and negative cases of COVID-19. Blood tests are easily performed and cost less than RT-PCR and imaging tests. Since biochemical parameters in routine blood tests vary during COVID-19 infection, they may supply physicians with valuable information about the diagnosis of COVID-19. This study reviewed newly emerging artificial intelligence (AI)-based methods to diagnose COVID-19 using routine blood tests. We gathered information about research resources and inspected 92 articles that were carefully chosen from a variety of publishers, such as IEEE, Springer, Elsevier, and MDPI. These 92 studies were then classified into two tables covering articles that use machine learning and deep learning models to diagnose COVID-19 from routine blood test datasets. In these studies, Random Forest and logistic regression are the most widely used machine learning methods, and the most widely used performance metrics are accuracy, sensitivity, specificity, and AUC. Finally, we conclude by discussing and analyzing the studies that use machine learning and deep learning models with routine blood test datasets for COVID-19 detection. This survey can serve as a starting point for novice researchers working on COVID-19 classification.
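    The survey's most common recipe, a Random Forest over routine blood parameters, reduces to a short scikit-learn pipeline like the sketch below. The CSV file, the column name, and the hyperparameters are hypothetical placeholders.

```python
# Illustrative pipeline of the kind the surveyed studies use:
# a Random Forest over routine blood test parameters.
import pandas as pd
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score, roc_auc_score

df = pd.read_csv("blood_tests.csv")            # hypothetical dataset file
X = df.drop(columns=["covid_positive"])        # routine biochemical parameters
y = df["covid_positive"]                       # RT-PCR-confirmed label (0/1)

X_tr, X_te, y_tr, y_te = train_test_split(
    X, y, test_size=0.2, stratify=y, random_state=42)

clf = RandomForestClassifier(n_estimators=200, random_state=42)
clf.fit(X_tr, y_tr)

# Report the metrics most common in the surveyed studies.
print("accuracy:", accuracy_score(y_te, clf.predict(X_te)))
print("AUC:", roc_auc_score(y_te, clf.predict_proba(X_te)[:, 1]))
```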
  • Article
    Citation - WoS: 5
    Citation - Scopus: 8
    A Novel Hybrid Machine Learning-Based System Using Deep Learning Techniques and Meta-Heuristic Algorithms for Various Medical Datatypes Classification
    (Mdpi, 2024) Kadhim, Yezi Ali; Guzel, Mehmet Serdar; Mishra, Alok
    Medicine is one of the fields where the advancement of computer science is making significant progress. Some diseases require an immediate diagnosis in order to improve patient outcomes. The usage of computers in medicine improves precision and accelerates data processing and diagnosis. In this research, a hybrid machine learning system combining deep learning approaches with a meta-heuristic algorithm was utilized to categorize biological images. Two different medical datasets were introduced, one covering the magnetic resonance imaging (MRI) of brain tumors and the other dealing with chest X-rays (CXRs) of COVID-19. These datasets were fed to a combination network in which deep learning techniques based on a convolutional neural network (CNN) or an autoencoder extract features, and the particle swarm optimization (PSO) algorithm then selects the optimal subset of those features. This combination sought to reduce the dimensionality of the datasets while maintaining classification performance, which is considered an innovative method that ensures highly accurate results across various medical datasets. Several classifiers were employed to predict the diseases. On the COVID-19 dataset, the highest accuracy was 99.76%, achieved by the CNN-PSO-SVM combination; on the brain tumor dataset, the highest accuracy was 99.51%, obtained with the autoencoder-PSO-KNN combination.
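    A schematic of the hybrid pipeline the abstract describes is sketched below: deep features feed a binary particle swarm optimizer that selects a subset scored by an SVM. The hand-rolled PSO, the random placeholder features, and all parameter values are illustrative assumptions, not the authors' implementation.

```python
# Schematic: deep features -> binary PSO feature selection -> SVM scoring.
# Features/labels are random stand-ins for CNN- or autoencoder-extracted data.
import numpy as np
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
features = rng.normal(size=(200, 64))    # placeholder for CNN/autoencoder features
labels = rng.integers(0, 2, size=200)    # placeholder class labels

def fitness(mask):
    """Cross-validated SVM accuracy on the selected feature subset."""
    if mask.sum() == 0:
        return 0.0
    return cross_val_score(SVC(), features[:, mask.astype(bool)], labels, cv=3).mean()

n_particles, dims, iters = 10, features.shape[1], 20
pos = rng.integers(0, 2, size=(n_particles, dims)).astype(float)
vel = rng.normal(scale=0.1, size=(n_particles, dims))
pbest, pbest_fit = pos.copy(), np.array([fitness(p) for p in pos])
gbest = pbest[pbest_fit.argmax()].copy()

for _ in range(iters):
    r1, r2 = rng.random(pos.shape), rng.random(pos.shape)
    vel = 0.7 * vel + 1.5 * r1 * (pbest - pos) + 1.5 * r2 * (gbest - pos)
    prob = 1.0 / (1.0 + np.exp(-vel))              # sigmoid transfer function
    pos = (rng.random(pos.shape) < prob).astype(float)
    fit = np.array([fitness(p) for p in pos])
    improved = fit > pbest_fit
    pbest[improved], pbest_fit[improved] = pos[improved], fit[improved]
    gbest = pbest[pbest_fit.argmax()].copy()

print("selected features:", int(gbest.sum()), "best CV accuracy:", pbest_fit.max())
```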
  • Article
    Citation - WoS: 3
    Citation - Scopus: 5
    Convolutional Neural Network-Based Vehicle Classification in Low-Quality Imaging Conditions for Internet of Things Devices
    (Multidisciplinary Digital Publishing Institute (MDPI), 2023) Maiga, B.; Dalveren, Y.; Kara, A.; Derawi, M.
    Vehicle classification has an important role in the efficient implementation of Internet of Things (IoT)-based intelligent transportation system (ITS) applications. Nowadays, because of their higher performance, convolutional neural networks (CNNs) are mostly used for vehicle classification. However, the computational complexity of CNNs and the high-resolution data provided by high-quality monitoring cameras can pose significant challenges due to limited IoT device resources. In order to address this issue, this study proposes a simple CNN-based model for vehicle classification in low-quality images collected by a standard security camera positioned far from a traffic scene under low lighting and different weather conditions. For this purpose, a new dataset containing 4800 low-quality vehicle images of 100 × 100 pixels at 96 dpi was first created. Then, the proposed model and several well-known CNN-based models were tested on the created dataset. The results demonstrate that the proposed model achieved 95.8% accuracy, outperforming Inception v3, Inception-ResNet v2, Xception, and VGG19. While DenseNet121 and ResNet50 achieved better accuracy, their complexity in terms of trainable parameters, layers, and training time might be a significant concern in practice. In this context, the results suggest that the proposed model could be a feasible option for IoT devices used in ITS applications due to its simple architecture.
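    A minimal Keras CNN in the spirit of the proposed model is sketched below: a few convolution blocks over 100 × 100 inputs, keeping the parameter count small for IoT deployment. The layer counts and the number of vehicle classes are assumptions, not the published architecture.

```python
# Sketch of a lightweight CNN for low-quality 100x100 vehicle images.
import tensorflow as tf
from tensorflow.keras import layers, models

def build_small_cnn(num_classes=6, input_shape=(100, 100, 3)):
    model = models.Sequential([
        layers.Input(shape=input_shape),
        layers.Conv2D(16, 3, activation="relu"),
        layers.MaxPooling2D(),
        layers.Conv2D(32, 3, activation="relu"),
        layers.MaxPooling2D(),
        layers.Conv2D(64, 3, activation="relu"),
        layers.GlobalAveragePooling2D(),
        layers.Dense(64, activation="relu"),
        layers.Dense(num_classes, activation="softmax"),
    ])
    model.compile(optimizer="adam", loss="sparse_categorical_crossentropy",
                  metrics=["accuracy"])
    return model

model = build_small_cnn()
model.summary()  # small parameter count compared with e.g. VGG19
```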
  • Article
    Citation - WoS: 19
    Citation - Scopus: 25
    Prediction of Composite Mechanical Properties: Integration of Deep Neural Network Methods and Finite Element Analysis
    (Mdpi, 2023) Gholami, Kimia; Ege, Faraz; Barzegar, Ramin
    Extracting the mechanical properties of a composite hydrogel, e.g., bioglass (BG)-collagen (COL), is often difficult due to the complexity of the experimental procedure. BGs could be embedded in the COL and thereby improve the mechanical properties of COL for bone tissue engineering applications. This paper proposed a deep-learning-based approach to extract the mechanical properties of a composite hydrogel directly from microstructural images. Four datasets of various BG shapes (9000 2D images) generated by finite element analysis showed that the deep neural network (DNN) model could efficiently predict the mechanical properties of the composite hydrogel, including the Young's modulus and Poisson's ratio. ResNet and AlexNet architectures were tuned to ensure the excellent performance and high accuracy of the proposed methods, with R-values greater than 0.99 and a mean absolute prediction error of less than 7%. The results for the full dataset revealed that AlexNet performed better than ResNet in predicting the elastic material properties of BG-COL, with R-values of 0.99 and 0.97, compared to 0.97 and 0.96, for the Young's modulus and Poisson's ratio, respectively. This work provided bridging methods to combine finite element analysis and a DNN for applications in diverse fields such as tissue engineering, materials science, and medical engineering.
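    As a rough sketch of the image-to-property regression task, the Keras model below maps a 2D microstructure image to the pair [Young's modulus, Poisson's ratio]. The input size and layer widths are assumptions; the paper itself tunes ResNet and AlexNet rather than this toy network.

```python
# Sketch: CNN regression from a microstructure image to [E, nu].
import tensorflow as tf
from tensorflow.keras import layers, models

model = models.Sequential([
    layers.Input(shape=(128, 128, 1)),          # grayscale microstructure image
    layers.Conv2D(32, 5, activation="relu"),
    layers.MaxPooling2D(),
    layers.Conv2D(64, 3, activation="relu"),
    layers.MaxPooling2D(),
    layers.Flatten(),
    layers.Dense(128, activation="relu"),
    layers.Dense(2),                            # [Young's modulus, Poisson's ratio]
])
model.compile(optimizer="adam", loss="mae")     # MAE matches the reported metric
```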
  • Article
    Citation - WoS: 18
    Citation - Scopus: 21
    A Data-Driven Model To Forecast Multi-Step Ahead Time Series of Turkish Daily Electricity Load
    (Mdpi, 2022) Ünlü, Kamil Demirberk
    It is critical to maintain a balance between the supply and the demand for electricity because of its non-storable feature. For power-producing facilities and traders, the electrical load is a piece of fundamental and vital information to have, particularly in terms of production planning, daily operations, and unit obligations, among other things. This study offers a deep learning methodology to model and forecast multistep daily Turkish electricity loads using data between 5 January 2015 and 26 December 2021. One major reason for the growing popularity of deep learning is the creation of new and creative deep neural network topologies and significant computational advancements. Long Short-Term Memory (LSTM), Gated Recurrent Network, and Convolutional Neural Network models are trained and compared to forecast the daily electricity load 1 to 7 days ahead. Three performance metrics, the coefficient of determination (R²), root mean squared error, and mean absolute error, were used to evaluate the performance of the proposed algorithms. The forecasting results on the test set showed that the best performance is achieved by LSTM. The algorithm has an R² of 0.94 for the 1-day-ahead forecast, and the metric decreases to 0.73 for the 7-day-ahead forecast.
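    The direct multi-step strategy the study follows can be sketched as below: a sliding window of past daily loads feeds an LSTM whose output layer emits all 7 days at once. The window length, layer size, and synthetic stand-in series are illustrative assumptions.

```python
# Sketch: direct 7-day-ahead load forecasting with an LSTM.
import numpy as np
import tensorflow as tf
from tensorflow.keras import layers, models

def make_windows(series, lookback=30, horizon=7):
    """Slice a series into (past window, next-horizon targets) pairs."""
    X, y = [], []
    for i in range(len(series) - lookback - horizon + 1):
        X.append(series[i:i + lookback])
        y.append(series[i + lookback:i + lookback + horizon])
    return np.array(X)[..., None], np.array(y)

# Synthetic stand-in for the daily Turkish load series.
load = np.sin(np.linspace(0, 60, 2500)) + np.random.normal(0, 0.1, 2500)
X, y = make_windows(load)

model = models.Sequential([
    layers.Input(shape=(30, 1)),
    layers.LSTM(64),
    layers.Dense(7),               # all 7 days forecast in one shot
])
model.compile(optimizer="adam", loss="mae")
model.fit(X, y, epochs=5, batch_size=32, verbose=0)
```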
  • Article
    Citation - WoS: 3
    Citation - Scopus: 6
    Investigating the Impact of Two Major Programming Environments on the Accuracy of Deep Learning-Based Glioma Detection From MRI Images
    (Mdpi, 2023) Yilmaz, Vadi Su; Akdag, Metehan; Dalveren, Yaser; Doruk, Resat Ozgur; Kara, Ali; Soylu, Ahmet
    Brain tumors have been the subject of research for many years. Brain tumors are typically classified into two main groups: benign and malignant tumors. The most common type among malignant brain tumors is known as glioma. In the diagnosis of glioma, different imaging technologies could be used. Among these techniques, MRI is the most preferred imaging technology due to its high-resolution image data. However, the detection of gliomas from a huge set of MRI data can be challenging for practitioners. To address this concern, many Deep Learning (DL) models based on Convolutional Neural Networks (CNNs) have been proposed for detecting glioma. However, which CNN architecture works efficiently under various conditions, including the development environment and programming aspects as well as performance analysis, has not been studied so far. The purpose of this research work, therefore, is to investigate the impact of two major programming environments (namely, MATLAB and Python) on the accuracy of CNN-based glioma detection from Magnetic Resonance Imaging (MRI) images. To this end, experiments on the Brain Tumor Segmentation (BraTS) dataset (2016 and 2017), consisting of multiparametric MRI images, are performed by implementing two popular CNN architectures, the three-dimensional (3D) U-Net and the V-Net, in both programming environments. From the results, it is concluded that the use of Python with Google Colaboratory (Colab) might be highly useful in the implementation of CNN-based models for glioma detection. Moreover, the 3D U-Net model is found to perform better, attaining high accuracy on the dataset. The authors believe that the results achieved from this study would provide useful information to the research community in their appropriate implementation of DL approaches for brain tumor detection.
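    For orientation, a minimal 3D U-Net with a single down/up level is sketched below; real BraTS models are deeper, and the patch size and channel count here are assumptions rather than the authors' settings.

```python
# Sketch: one-level 3D U-Net over multiparametric MRI patches.
import tensorflow as tf
from tensorflow.keras import layers, models

def conv3d_block(x, filters):
    x = layers.Conv3D(filters, 3, padding="same", activation="relu")(x)
    return layers.Conv3D(filters, 3, padding="same", activation="relu")(x)

inp = layers.Input(shape=(64, 64, 64, 4))            # 4 MRI modality channels
c1 = conv3d_block(inp, 16)                           # encoder
p1 = layers.MaxPooling3D()(c1)
c2 = conv3d_block(p1, 32)                            # bottleneck
u1 = layers.Conv3DTranspose(16, 2, strides=2, padding="same")(c2)
c3 = conv3d_block(layers.Concatenate()([u1, c1]), 16)  # decoder + skip connection
out = layers.Conv3D(1, 1, activation="sigmoid")(c3)  # voxel-wise tumor probability

model = models.Model(inp, out)
model.compile(optimizer="adam", loss="binary_crossentropy")
```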
  • Article
    Citation - WoS: 9
    Citation - Scopus: 13
    Improving Word Embedding Quality With Innovative Automated Approaches To Hyperparameters
    (Wiley, 2021) Yıldız, Beytullah; Tezgider, Murat
    Deep learning practices have a great impact in many areas. Big data and significant hardware developments are the main reasons behind deep learning's success. Recent advances in deep learning have led to significant improvements in text analysis and classification. Progress in the quality of word representation is an important factor among these improvements. In this study, we aimed to improve word2vec word representations, also called embeddings, by automatically optimizing hyperparameters. Minimum word count, vector size, window size, negative sample count, and iteration number were tuned to improve the word embeddings. We introduce two approaches for setting hyperparameters that are faster than grid search and random search. Word embeddings were created using documents of approximately 300 million words. We measured the quality of the word embeddings using a deep learning classification model on documents of 10 different classes. It was observed that optimizing the hyperparameter values alone increased classification success by 9%. In addition, we demonstrate the benefits of our approaches by comparing the semantic and syntactic relations between word embeddings created with default and optimized hyperparameters.
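    The hyperparameters the study tunes map directly onto gensim's Word2Vec arguments, as in the sketch below. The toy corpus and the values shown are placeholders, not the optimized settings.

```python
# Sketch: the tuned word2vec hyperparameters as gensim arguments.
from gensim.models import Word2Vec

corpus = [["deep", "learning", "improves", "text", "classification"],
          ["word", "embedding", "quality", "matters"]]   # toy corpus

model = Word2Vec(
    sentences=corpus,
    min_count=1,        # minimum word count
    vector_size=100,    # embedding dimension
    window=5,           # context window size
    negative=10,        # number of negative samples
    epochs=20,          # iteration number
)
print(model.wv.most_similar("text", topn=3))
```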
  • Article
    Citation - WoS: 5
    Citation - Scopus: 6
    Deployment and Implementation Aspects of Radio Frequency Fingerprinting in Cybersecurity of Smart Grids
    (Mdpi, 2023) Awan, Maaz Ali; Dalveren, Yaser; Catak, Ferhat Ozgur; Kara, Ali
    Smart grids incorporate diverse power equipment used for energy optimization in intelligent cities. This equipment may use Internet of Things (IoT) devices and services in the future. To ensure the stable operation of smart grids, the cybersecurity of IoT is paramount. To this end, the use of cryptographic security methods is prevalent in existing IoT. Non-cryptographic methods such as radio frequency fingerprinting (RFF) have been on the horizon for a few decades but have been limited to academic research or military interest. RFF is a physical layer security feature that leverages hardware impairments in the radios of IoT devices for classification and rogue device detection. This article discusses the potential of RFF in the wireless communication of IoT devices to augment the cybersecurity of smart grids. The characteristics of a deep learning (DL)-aided RFF system are presented. Subsequently, a deployment framework of RFF for smart grids is presented with implementation and regulatory aspects. The article culminates with a discussion of existing challenges and potential research directions for the maturation of RFF.
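    As a sketch of what a DL-aided RFF classifier might look like, the model below applies a 1D CNN to raw I/Q samples and assigns a burst to one of a set of known devices. The signal length, device count, and architecture are illustrative assumptions, not a framework from the article.

```python
# Sketch: 1D CNN device fingerprinting over raw I/Q samples.
import tensorflow as tf
from tensorflow.keras import layers, models

NUM_DEVICES = 10        # assumed number of known IoT transmitters
SIG_LEN = 1024          # assumed I/Q samples per captured burst

model = models.Sequential([
    layers.Input(shape=(SIG_LEN, 2)),            # two channels: I and Q
    layers.Conv1D(32, 7, activation="relu"),
    layers.MaxPooling1D(2),
    layers.Conv1D(64, 5, activation="relu"),
    layers.GlobalAveragePooling1D(),
    layers.Dense(64, activation="relu"),
    layers.Dense(NUM_DEVICES, activation="softmax"),
])
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
```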