Search Results
Now showing 1 - 2 of 2
1. Toxicity Detection Using State of the Art Natural Language Methodologies
   Conference Object | IEEE, 2023 | Citations (Scopus): 1
   Authors: Keskin, Enes Faruk; Acikgoz, Erkut; Dogan, Gulustan
   Abstract: This paper describes studies carried out to detect objectionable expressions in text. Experiments were performed with sentence transformers, supervised machine learning algorithms, and the BERT transformer architecture trained on English data, and the results were observed. The paper first explains the natural language processing and machine learning methodologies used to prepare a dataset of toxic and non-toxic content from labeled text data obtained from the Kaggle platform, and then summarizes the methods and performance of the models trained on this dataset.
   (A minimal sketch of this kind of pipeline follows the results list.)

2. Beyond ROUGE: A Comprehensive Evaluation Metric for Abstractive Summarization Leveraging Similarity, Entailment, and Acceptability
   Article | World Scientific Publ Co Pte Ltd, 2024 | Citations (WoS): 6 | Citations (Scopus): 10
   Authors: Briman, Mohammed Khalid Hilmi; Yıldız, Beytullah
   Abstract: A vast amount of textual information on the internet has amplified the importance of text summarization models. Abstractive summarization generates original words and sentences that may not exist in the source document being summarized. Such abstractive models may suffer from shortcomings such as poor linguistic acceptability and hallucinations. Recall-Oriented Understudy for Gisting Evaluation (ROUGE) is a metric commonly used to evaluate abstractive summarization models; however, due to its n-gram-based approach, it ignores several critical linguistic aspects. In this work, we propose the Similarity, Entailment, and Acceptability Score (SEAScore), an automatic evaluation metric for abstractive text summarization models that draws on the power of state-of-the-art pre-trained language models. SEAScore comprises three language models (LMs) that extract meaningful linguistic features from candidate and reference summaries and a weighted-sum aggregator that computes an evaluation score. Experimental results show that the LM-based SEAScore metric correlates better with human judgment than standard evaluation metrics such as ROUGE-N and BERTScore.
   (A sketch of this weighted aggregation follows below.)
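For the first result, the abstract outlines a pipeline of encoding comments with sentence transformers and training supervised classifiers on labeled toxic/non-toxic text. The following is a minimal, hypothetical Python sketch of such a pipeline; the embedding model ("all-MiniLM-L6-v2"), the logistic-regression classifier, and the toy DataFrame are assumptions standing in for the paper's actual models and the Kaggle dataset.

    # Hypothetical sketch, not the authors' actual setup: embed labeled comments
    # with a sentence transformer, then train a supervised classifier on top.
    import pandas as pd
    from sentence_transformers import SentenceTransformer
    from sklearn.linear_model import LogisticRegression
    from sklearn.metrics import classification_report
    from sklearn.model_selection import train_test_split

    # Placeholder for the labeled Kaggle data: 1 = toxic, 0 = non-toxic.
    df = pd.DataFrame({
        "text": ["have a nice day", "you are an idiot",
                 "thanks for the help", "shut up, loser"],
        "toxic": [0, 1, 0, 1],
    })

    # Encode each comment into a fixed-size sentence embedding.
    encoder = SentenceTransformer("all-MiniLM-L6-v2")  # assumed model choice
    X = encoder.encode(df["text"].tolist())
    y = df["toxic"].values

    # Train a simple supervised classifier on the embeddings and evaluate it.
    X_train, X_test, y_train, y_test = train_test_split(
        X, y, test_size=0.5, random_state=0, stratify=y)
    clf = LogisticRegression(max_iter=1000).fit(X_train, y_train)
    print(classification_report(y_test, clf.predict(X_test)))

A BERT-based variant, as the abstract also mentions, would instead fine-tune the transformer end to end on the same labels rather than training a separate classifier on frozen embeddings.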

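For the second result, the abstract specifies SEAScore's structure: three LM-derived scores (similarity, entailment, acceptability) combined by a weighted sum. The sketch below illustrates that aggregation under stated assumptions; the component models ("all-MiniLM-L6-v2", "roberta-large-mnli", "textattack/roberta-base-CoLA"), the entailment direction, and the weights are placeholders, not the choices reported in the paper.

    # Hypothetical SEAScore-style aggregation: three LM scores, one weighted sum.
    from sentence_transformers import SentenceTransformer, util
    from transformers import pipeline

    similarity_model = SentenceTransformer("all-MiniLM-L6-v2")  # assumed
    entailment_model = pipeline("text-classification",
                                model="roberta-large-mnli")  # assumed
    acceptability_model = pipeline("text-classification",
                                   model="textattack/roberta-base-CoLA")  # assumed

    def seascore(candidate, reference, weights=(0.4, 0.3, 0.3)):
        """Weighted sum of similarity, entailment, and acceptability scores."""
        # 1. Semantic similarity: cosine similarity of sentence embeddings.
        emb = similarity_model.encode([candidate, reference])
        similarity = float(util.cos_sim(emb[0], emb[1]))

        # 2. Entailment: probability that the reference entails the candidate
        # (the direction is an assumption here).
        nli = entailment_model({"text": reference, "text_pair": candidate},
                               top_k=None)
        entailment = next(s["score"] for s in nli if s["label"] == "ENTAILMENT")

        # 3. Linguistic acceptability of the candidate on its own
        # (LABEL_1 = acceptable in this CoLA-tuned model's convention).
        cola = acceptability_model(candidate, top_k=None)
        acceptability = next(s["score"] for s in cola if s["label"] == "LABEL_1")

        w_sim, w_ent, w_acc = weights
        return w_sim * similarity + w_ent * entailment + w_acc * acceptability

    print(seascore("A cat was sitting on the mat.", "The cat sat on the mat."))

The fixed weights here are purely illustrative; the paper's weighted-sum aggregator would presumably be tuned against human judgments, which the abstract does not detail.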
