Search Results

Now showing 1 - 2 of 2
  • Article
    Citation - WoS: 29
    Citation - Scopus: 43
    Text Classification Using Improved Bidirectional Transformer
    (Wiley, 2022) Tezgider, Murat; Yıldız, Beytullah; Aydin, Galip
    Text data have an important place in our daily lives, and a huge amount of text is generated every day. As a result, automation becomes necessary to handle such large volumes of text data. Recently, we have been witnessing important developments with the adoption of new approaches in text processing. Attention mechanisms and transformers are emerging as methods with significant potential for text processing. In this study, we introduced a bidirectional transformer (BiTransformer) constructed from two transformer encoder blocks that use bidirectional position encoding to take into account both the forward and backward position information of text data. We also created models to evaluate the contribution of attention mechanisms to the classification process. Four models, including long short-term memory, attention, transformer, and BiTransformer, were used to conduct experiments on a large Turkish text dataset consisting of 30 categories. The effect of using pretrained embeddings on the models was also investigated. Experimental results show that the classification models using transformers and attention give promising results compared with classical deep learning models. We observed that the proposed BiTransformer showed superior performance in text classification.
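    The bidirectional position encoding described above can be illustrated with a minimal NumPy sketch. This is an assumption-laden illustration, not the paper's implementation: it assumes standard sinusoidal encodings computed once from forward positions (0..n-1) and once from backward positions (n-1..0), then summed so each token carries its distance from both ends of the sequence.

    ```python
    import numpy as np

    def sinusoidal_encoding(positions, d_model):
        """Standard sinusoidal position encoding for the given position indices."""
        pe = np.zeros((len(positions), d_model))
        div = np.exp(np.arange(0, d_model, 2) * (-np.log(10000.0) / d_model))
        pe[:, 0::2] = np.sin(positions[:, None] * div)
        pe[:, 1::2] = np.cos(positions[:, None] * div)
        return pe

    def bidirectional_position_encoding(seq_len, d_model):
        """Sum of forward (0..n-1) and backward (n-1..0) encodings, so each
        token encodes its position relative to both ends of the sequence.
        (Illustrative assumption; the paper's exact scheme may differ.)"""
        forward = np.arange(seq_len)
        backward = forward[::-1].copy()
        return sinusoidal_encoding(forward, d_model) + sinusoidal_encoding(backward, d_model)

    pe = bidirectional_position_encoding(seq_len=8, d_model=16)
    print(pe.shape)  # (8, 16)
    ```

    Note that summing the two directions makes the encoding symmetric about the sequence midpoint; a concatenation of the two encodings would instead keep the directions distinguishable.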
  • Article
    Citation - WoS: 11
    Citation - Scopus: 20
    Reinforcement Learning Using Fully Connected, Attention, and Transformer Models in Knapsack Problem Solving
    (Wiley, 2022) Yildiz, Beytullah
    Knapsack is a combinatorial optimization problem that involves a variety of resource allocation challenges. It is NP-hard (non-deterministic polynomial-time hard) and has a wide range of applications. The knapsack problem (KP) has been studied in applied mathematics and computer science for decades, and many algorithms that can be classified as exact or approximate solutions have been proposed. Exact solutions include algorithms such as branch-and-bound and dynamic programming, as well as approaches obtained by combining them. Because exact solutions require a long processing time, many approximate methods have been introduced for solving the knapsack problem. In this research, deep Q-learning with models containing fully connected layers, attention, and transformers as function estimators was used to solve the KP. We observed that the deep Q-networks, which continued their training by observing the reward signals provided by the knapsack environment we developed, optimized the total reward gained over time. The results showed that our approaches give near-optimum solutions and run about 40 times faster than an exact algorithm using dynamic programming.
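    The exact dynamic-programming baseline that the abstract compares against can be sketched as follows. This is a generic 0/1 knapsack DP, shown only to make the baseline concrete; the paper's actual implementation and environment are not reproduced here.

    ```python
    def knapsack_dp(values, weights, capacity):
        """Exact 0/1 knapsack via dynamic programming.
        dp[c] holds the best total value achievable with capacity c."""
        dp = [0] * (capacity + 1)
        for v, w in zip(values, weights):
            # Iterate capacities downward so each item is used at most once.
            for c in range(capacity, w - 1, -1):
                dp[c] = max(dp[c], dp[c - w] + v)
        return dp[capacity]

    best = knapsack_dp(values=[60, 100, 120], weights=[10, 20, 30], capacity=50)
    print(best)  # 220 (take the items worth 100 and 120)
    ```

    The DP runs in O(n * capacity) time, which grows quickly with capacity; this cost is what motivates approximate methods such as the deep Q-learning approach described in the abstract.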