Improving Text Classification with Transformer
Date
2021
Authors
Journal Title
Journal ISSN
Volume Title
Publisher
Institute of Electrical and Electronics Engineers Inc.
Abstract
Huge amounts of text data are produced every day. Processing text data that accumulates and grows exponentially requires appropriate automation tools. Text classification, a Natural Language Processing task, has the potential to provide automatic text data processing. Many new models have been proposed to achieve better results in text classification. The Transformer model was introduced recently to provide superior performance in terms of accuracy and processing speed in deep learning. In this article, we propose an improved Transformer model for text classification. A dataset containing information about books was collected from an online resource and used to train the models. Our proposed Transformer model showed superior performance compared to previous state-of-the-art models such as LSTM and CNN. © 2021 IEEE
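The core of the Transformer approach the abstract describes is self-attention over token embeddings, followed by a classification head. As a minimal sketch (not the authors' implementation; all embeddings, weights, and dimensions below are illustrative), a single-head scaled dot-product attention block with mean pooling and a softmax classifier can be written as:

```python
import math

def softmax(xs):
    # Numerically stable softmax: subtract the max before exponentiating.
    m = max(xs)
    exps = [math.exp(x - m) for x in xs]
    s = sum(exps)
    return [e / s for e in exps]

def attention(Q, K, V):
    """Scaled dot-product attention: softmax(Q K^T / sqrt(d)) V."""
    d = len(Q[0])
    out = []
    for q in Q:
        scores = [sum(qi * ki for qi, ki in zip(q, k)) / math.sqrt(d) for k in K]
        weights = softmax(scores)
        out.append([sum(w * v[j] for w, v in zip(weights, V))
                    for j in range(len(V[0]))])
    return out

def classify(embeddings, W, b):
    """Self-attention over token embeddings (feed-forward and residual
    sublayers omitted for brevity), mean pooling, then a linear head."""
    H = attention(embeddings, embeddings, embeddings)
    pooled = [sum(h[j] for h in H) / len(H) for j in range(len(H[0]))]
    logits = [sum(w * p for w, p in zip(row, pooled)) + bi
              for row, bi in zip(W, b)]
    return softmax(logits)

# Toy input: 3 tokens with 4-dim embeddings, 2 classes (hypothetical values).
emb = [[0.1, 0.2, 0.0, 0.5], [0.4, 0.1, 0.3, 0.2], [0.0, 0.5, 0.2, 0.1]]
W = [[0.2, -0.1, 0.3, 0.0], [-0.2, 0.4, 0.1, 0.2]]
b = [0.0, 0.1]
probs = classify(emb, W, b)  # class probabilities summing to 1
```

A real model would use learned multi-head attention, positional encodings, and stacked encoder layers; this sketch only shows why attention lets every token attend to every other token in one step, which is the source of the speed advantage over recurrent models such as LSTM.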
Description
Keywords
Attention, Deep learning, Natural language processing, Text classification, Transformer, Word embedding
Turkish CoHE Thesis Center URL
Fields of Science
Citation
9
WoS Q
Scopus Q
Source
Proceedings of the 6th International Conference on Computer Science and Engineering (UBMK 2021), 15-17 September 2021, Ankara
Volume
Issue
Start Page
707
End Page
712