Search Results

Now showing 1 - 3 of 3
  • Article
    Citation - WoS: 29
    Citation - Scopus: 43
    Text Classification Using Improved Bidirectional Transformer
    (Wiley, 2022) Tezgider, Murat; Yıldız, Beytullah; Aydin, Galip
    Text data have an important place in our daily life, and a huge amount of text data is generated every day. As a result, automation becomes necessary to handle such large volumes of text. Recently, we have witnessed important developments from the adoption of new approaches in text processing: attention mechanisms and transformers are emerging as methods with significant potential. In this study, we introduced a bidirectional transformer (BiTransformer) constructed from two transformer encoder blocks that use bidirectional position encoding to take both the forward and backward position information of text data into account. We also created models to evaluate the contribution of attention mechanisms to the classification process. Four models, including long short-term memory, attention, transformer, and BiTransformer, were used to conduct experiments on a large Turkish text dataset consisting of 30 categories. The effect of using pretrained embeddings on the models was also investigated. Experimental results show that the classification models using transformers and attention give promising results compared with classical deep learning models. We observed that the proposed BiTransformer showed superior performance in text classification. (A minimal PyTorch sketch of this architecture appears after the results list.)
  • Article
    Citation - WoS: 6
    Citation - Scopus: 10
    Beyond ROUGE: A Comprehensive Evaluation Metric for Abstractive Summarization Leveraging Similarity, Entailment, and Acceptability
    (World Scientific Publ Co Pte Ltd, 2024) Briman, Mohammed Khalid Hilmi; Yıldız, Beytullah
    A vast amount of textual information on the internet has amplified the importance of text summarization models. Abstractive summarization generates original words and sentences that may not exist in the source document being summarized. Such abstractive models may suffer from shortcomings such as poor linguistic acceptability and hallucination. Recall-Oriented Understudy for Gisting Evaluation (ROUGE) is a metric commonly used to evaluate abstractive summarization models; however, due to its n-gram-based approach, it ignores several critical linguistic aspects. In this work, we propose the Similarity, Entailment, and Acceptability Score (SEAScore), an automatic metric for evaluating abstractive text summarization models using the power of state-of-the-art pre-trained language models. SEAScore comprises three language models (LMs) that extract meaningful linguistic features from candidate and reference summaries, and a weighted-sum aggregator that computes an evaluation score. Experimental results show that our LM-based SEAScore metric correlates better with human judgment than standard evaluation metrics such as ROUGE-N and BERTScore. (A hypothetical sketch of such a three-model score appears after the results list.)
  • Article
    Citation - WoS: 9
    Citation - Scopus: 13
    Improving Word Embedding Quality with Innovative Automated Approaches to Hyperparameters
    (Wiley, 2021) Yıldız, Beytullah; Tezgider, Murat
    Deep learning practices have a great impact in many areas. Big data and significant hardware developments are the main reasons behind deep learning's success. Recent advances in deep learning have led to significant improvements in text analysis and classification, and progress in the quality of word representations is an important factor among these improvements. In this study, we aimed to improve word2vec word representations, also called embeddings, by automatically optimizing their hyperparameters. Minimum word count, vector size, window size, negative sample count, and iteration number were tuned to improve the word embeddings. We introduce two approaches for setting hyperparameters that are faster than grid search and random search. Word embeddings were created from documents of approximately 300 million words, and we measured their quality using a deep learning classification model on documents from 10 different classes. We observed that optimizing the hyperparameter values alone increased classification success by 9%. In addition, we demonstrate the benefits of our approaches by comparing the semantic and syntactic relations captured by word embeddings trained with default and optimized hyperparameters. (An illustrative tuning loop appears after the results list.)
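
Sketch for the first result (BiTransformer). This is a minimal, hypothetical PyTorch reconstruction based only on the abstract: two transformer encoder blocks, one given forward positional encodings and one given backward (reversed) positional encodings, with the pooled branch outputs concatenated for classification. The sinusoidal encoding, all layer sizes, the mean-pool fusion, and the class count are assumptions, not the paper's configuration.

    import math

    import torch
    import torch.nn as nn


    def positional_encoding(seq_len: int, d_model: int) -> torch.Tensor:
        """Standard sinusoidal positional encoding, shape (seq_len, d_model)."""
        pos = torch.arange(seq_len).unsqueeze(1).float()
        div = torch.exp(torch.arange(0, d_model, 2).float()
                        * (-math.log(10000.0) / d_model))
        pe = torch.zeros(seq_len, d_model)
        pe[:, 0::2] = torch.sin(pos * div)
        pe[:, 1::2] = torch.cos(pos * div)
        return pe


    class BiTransformerClassifier(nn.Module):
        """Two encoder blocks fed forward vs. backward position information."""

        def __init__(self, vocab_size=30000, d_model=128, n_heads=4,
                     num_classes=30, max_len=256):
            super().__init__()
            self.embed = nn.Embedding(vocab_size, d_model)
            pe = positional_encoding(max_len, d_model)
            self.register_buffer("pe_fwd", pe)          # positions 0..L-1
            self.register_buffer("pe_bwd", pe.flip(0))  # positions L-1..0
            layer = nn.TransformerEncoderLayer(d_model, n_heads,
                                               batch_first=True)
            # nn.TransformerEncoder deep-copies `layer`, so the two
            # branches have independent weights.
            self.enc_fwd = nn.TransformerEncoder(layer, num_layers=1)
            self.enc_bwd = nn.TransformerEncoder(layer, num_layers=1)
            self.classifier = nn.Linear(2 * d_model, num_classes)

        def forward(self, tokens):  # tokens: (batch, seq_len) of token ids
            x = self.embed(tokens)
            seq_len = tokens.size(1)
            h_fwd = self.enc_fwd(x + self.pe_fwd[:seq_len])
            h_bwd = self.enc_bwd(x + self.pe_bwd[-seq_len:])
            # Mean-pool each branch and concatenate (fusion is an assumption).
            h = torch.cat([h_fwd.mean(dim=1), h_bwd.mean(dim=1)], dim=-1)
            return self.classifier(h)


    logits = BiTransformerClassifier()(torch.randint(0, 30000, (2, 64)))
    print(logits.shape)  # torch.Size([2, 30])

The key point the sketch tries to capture is that the two branches see the same token embeddings but opposite position indices, so one block attends with forward and the other with backward positional context.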
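
Sketch for the second result (SEAScore). A hypothetical rendering of the three-model idea in the abstract: one model scores semantic similarity, one scores entailment, one scores linguistic acceptability, and a weighted sum aggregates them. The specific checkpoints (all-MiniLM-L6-v2, roberta-large-mnli, textattack/roberta-base-CoLA), the input pairing, and the weights are illustrative assumptions, not the paper's configuration.

    from sentence_transformers import SentenceTransformer, util
    from transformers import pipeline

    # Illustrative checkpoint choices (assumptions, not the paper's models).
    sim_model = SentenceTransformer("all-MiniLM-L6-v2")
    nli = pipeline("text-classification", model="roberta-large-mnli")
    cola = pipeline("text-classification",
                    model="textattack/roberta-base-CoLA")


    def seascore(candidate: str, reference: str,
                 weights=(0.4, 0.4, 0.2)) -> float:
        # Similarity: cosine similarity of sentence embeddings.
        emb = sim_model.encode([candidate, reference])
        similarity = float(util.cos_sim(emb[0], emb[1]))

        # Entailment: probability that the reference entails the candidate.
        nli_scores = nli({"text": reference, "text_pair": candidate},
                         top_k=None)
        entailment = next(s["score"] for s in nli_scores
                          if s["label"] == "ENTAILMENT")

        # Acceptability: probability that the candidate is grammatical
        # (LABEL_1 assumed to be the "acceptable" class for this checkpoint).
        cola_scores = cola(candidate, top_k=None)
        acceptability = next(s["score"] for s in cola_scores
                             if s["label"] == "LABEL_1")

        # Weighted-sum aggregation; in practice the weights would be tuned
        # against human judgments rather than fixed by hand.
        w_sim, w_ent, w_acc = weights
        return w_sim * similarity + w_ent * entailment + w_acc * acceptability


    print(seascore("The cat sat on the mat.",
                   "A cat was sitting on the mat."))

Unlike ROUGE's n-gram overlap, each component here comes from a pre-trained language model, which is what lets a metric of this shape react to paraphrase, contradiction, and ungrammatical output.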
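
Sketch for the third result (word2vec hyperparameter tuning). The abstract does not describe the paper's two search procedures, so this greedy, one-hyperparameter-at-a-time loop over gensim's Word2Vec is only an illustrative stand-in that is cheaper than a full grid search. The candidate value grids and the user-supplied evaluate callback (e.g. the accuracy of a classifier trained on the embeddings, as in the study) are assumptions.

    from gensim.models import Word2Vec

    # Candidate values for the five hyperparameters named in the abstract
    # (illustrative grids, not the paper's).
    SEARCH_SPACE = {
        "min_count": [2, 5, 10],         # minimum word count
        "vector_size": [100, 200, 300],  # embedding dimension
        "window": [3, 5, 8],             # context window size
        "negative": [5, 10, 15],         # negative samples per positive
        "epochs": [5, 10, 20],           # training iterations
    }

    DEFAULTS = {"min_count": 5, "vector_size": 100, "window": 5,
                "negative": 5, "epochs": 5}


    def tune(sentences, evaluate):
        """Greedy coordinate search over SEARCH_SPACE.

        `evaluate(model) -> float` is user-supplied, e.g. the accuracy of
        a downstream text classifier trained on the resulting embeddings.
        """
        best = dict(DEFAULTS)
        best_score = evaluate(Word2Vec(sentences, **best))
        for name, values in SEARCH_SPACE.items():
            for value in values:
                if value == best[name]:
                    continue  # this setting was already measured
                score = evaluate(Word2Vec(sentences,
                                          **{**best, name: value}))
                if score > best_score:
                    best, best_score = {**best, name: value}, score
        return best, best_score

With these grids, the greedy loop trains at most 11 models (one baseline plus two alternatives per hyperparameter), whereas an exhaustive grid over the same values would train 3^5 = 243, which is the kind of saving over grid and random search that the abstract claims for its two approaches.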