Title: Improving word embedding quality with innovative automated approaches to hyperparameters
Authors: Yıldız, Beytullah; Tezgider, Murat
Author ORCID: Yıldız, Beytullah / 0000-0001-7664-5145
Department: Software Engineering
Type: Article
Journal: Concurrency and Computation: Practice and Experience
ISSN: 1532-0626 (print); 1532-0634 (online)
Publication year: 2021
Volume: 33; Issue: 18
Date accessioned: 2024-07-05
Date available: 2024-07-05
DOI: 10.1002/cpe.6091
Scopus ID: 2-s2.0-85100035562
WoS ID: WOS:000609293400001
URI: https://doi.org/10.1002/cpe.6091
URI: https://hdl.handle.net/20.500.14411/1873
Language: English
Access rights: info:eu-repo/semantics/closedAccess
Quartiles: Q3 (WoS), Q2 (Scopus)
Keywords: deep learning; machine learning; text analysis; text classification; word embedding; word2vec

Abstract: Deep learning has had a great impact in many areas, with big data and significant hardware advances being the main drivers of its success. Recent progress in deep learning has led to substantial improvements in text analysis and classification, and better word representations are an important factor in these gains. In this study, we aimed to improve word2vec word representations, also called embeddings, by automatically optimizing their hyperparameters: minimum word count, vector size, window size, number of negative samples, and number of iterations. We introduce two approaches for setting these hyperparameters that are faster than grid search and random search. Word embeddings were trained on a corpus of approximately 300 million words, and their quality was measured with a deep learning classification model on documents from 10 different classes. Optimizing the hyperparameter values alone increased classification success by 9%. In addition, we demonstrate the benefits of our approaches by comparing the semantic and syntactic relations captured by embeddings trained with default and with optimized hyperparameters.
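
The five hyperparameters named in the abstract correspond directly to parameters of the widely used gensim implementation of word2vec. The following is a minimal sketch of that mapping, assuming gensim 4.x and a pre-tokenized corpus; the corpus and the parameter values are illustrative assumptions, not the paper's optimized settings.

    # Minimal sketch (not the paper's code): training word2vec with the five
    # hyperparameters named in the abstract, using the gensim 4.x API.
    # The tiny corpus and the parameter values are illustrative assumptions.
    from gensim.models import Word2Vec

    corpus = [
        ["deep", "learning", "improves", "text", "classification"],
        ["word", "embeddings", "capture", "semantic", "relations"],
    ]  # in the paper, the corpus is approximately 300 million words

    model = Word2Vec(
        sentences=corpus,
        min_count=1,      # minimum word count
        vector_size=300,  # vector (embedding) size
        window=5,         # context window size
        negative=5,       # number of negative samples
        epochs=10,        # iteration number
    )

    # Trained vectors can then be queried for semantic neighbors.
    print(model.wv.most_similar("text", topn=3))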
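The abstract's evaluation loop, training embeddings under candidate hyperparameters and scoring them with a downstream classifier, can be sketched generically as below. This is a plain illustration of that evaluation idea only; the record does not describe the paper's two accelerated search approaches, and the corpus, candidate grid, and mean-vector document representation here are assumptions.

    # Generic, self-contained sketch: train word2vec under candidate
    # hyperparameters, embed each document as the mean of its word vectors,
    # and score a classifier on top. Not the paper's search method.
    import numpy as np
    from gensim.models import Word2Vec
    from sklearn.linear_model import LogisticRegression

    docs = [
        (["cheap", "flights", "and", "hotel", "deals"], "travel"),
        (["book", "a", "hotel", "near", "the", "airport"], "travel"),
        (["stock", "prices", "fell", "on", "weak", "earnings"], "finance"),
        (["investors", "watch", "interest", "rates", "and", "earnings"], "finance"),
    ]

    def doc_vector(model, tokens):
        # Mean of the in-vocabulary word vectors; zeros if none are known.
        vecs = [model.wv[t] for t in tokens if t in model.wv]
        return np.mean(vecs, axis=0) if vecs else np.zeros(model.vector_size)

    def score(params):
        model = Word2Vec([toks for toks, _ in docs], min_count=1, **params)
        X = np.stack([doc_vector(model, toks) for toks, _ in docs])
        y = [label for _, label in docs]
        clf = LogisticRegression().fit(X, y)
        return clf.score(X, y)  # training accuracy; use a held-out set in practice

    candidates = [
        {"vector_size": 50, "window": 2, "negative": 5, "epochs": 20},
        {"vector_size": 100, "window": 5, "negative": 10, "epochs": 40},
    ]
    best = max(candidates, key=score)
    print("best hyperparameters:", best)

Exhaustively scoring every candidate in this way is exactly the grid-search cost that the paper's two approaches are reported to reduce.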