A Comprehensive Evaluation Metric for Abstractive Summarization Using Similarity, Entailment, and Acceptability


Date

2023

Organizational Unit
Software Engineering
(2005)
The Department of Software Engineering was founded in 2005 as the first Software Engineering department in Ankara. Recent developments in technologies such as Artificial Intelligence, Machine Learning, Big Data, and Blockchain have placed Software Engineering among the top professions of today and the future. The academic and research activities of the department are pursued with qualified faculty at the Undergraduate, Graduate, and Doctorate levels. Our university is one of only two universities offering a Doctorate-level program in this field. In addition to covering the basic phases of software development (analysis, design, development, testing) and the relevant methodologies in detail, our department offers education in various areas of expertise, such as Object-Oriented Analysis and Design, Human-Computer Interaction, Software Quality Assurance, Software Requirements Engineering, Software Design and Architecture, Software Project Management, Software Testing, and Model-Driven Software Development. The curriculum of our department is designed to graduate individuals prepared to take part in any phase of large-scale software development, in line with the requirements of the software sector. The Department of Software Engineering is accredited by MÜDEK (Association for Evaluation and Accreditation of Engineering Programs) until September 30th, 2021, and has been granted the EUR-ACE label, which is valid throughout Europe. This label gives our graduates a vital head start in admission to graduate-level programs and working environments in European Union countries. The Big Data and Cloud Computing Laboratory, MobiLab (where mobile applications are developed), SimLAB (the simulation laboratory for Medical Computing), and the department's software education laboratories are equipped with various software tools and hardware that enable our students to use state-of-the-art software technologies.
Our graduates are employed in software and R&D companies (Technoparks); national and international institutions that develop or use software technologies (such as banks, healthcare institutions, the Information Technology departments of private and public institutions, telecommunication companies, TÜİK, SPK, BDDK, EPDK, RK, and universities); and research institutions such as TÜBİTAK.

Abstract

Producing meaningful automatic summaries from long textual documents is essential in various fields. The emergence of novel neural network architectures, such as the Transformer model, has led to the development of large pre-trained language models that can produce quality summaries. However, evaluating the quality of model-generated summaries remains an open problem: standard automatic evaluation metrics, such as ROUGE, fall short of providing a comprehensive evaluation of summarization models. In this study, we introduce SEAScore, a new model-based automatic evaluation metric that evaluates model-generated summaries against their counterpart reference summaries by utilizing multiple Natural Language Processing tasks: Semantic Similarity, Natural Language Inference, and Linguistic Acceptability. SEAScore takes features extracted by pre-trained language models and produces an evaluation score that measures the quality of a summarization model. In this thesis, we develop our new evaluation metric SEAScore and train three summarization models to assess it. Experimental results show that SEAScore correlates better with human judgment than some standard metrics.
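The combination of the three signals described in the abstract can be sketched as follows. This is a minimal illustration under stated assumptions, not the thesis's implementation: the three feature functions here are crude hypothetical stand-ins (in SEAScore the features are extracted by pre-trained language models, e.g. embedding-based similarity, an NLI entailment model, and a CoLA-style acceptability classifier), and the equal weighting is an assumption.

```python
import math
from collections import Counter

def semantic_similarity(candidate: str, reference: str) -> float:
    """Stand-in for embedding similarity: bag-of-words cosine."""
    a = Counter(candidate.lower().split())
    b = Counter(reference.lower().split())
    dot = sum(a[t] * b[t] for t in a)
    norm = (math.sqrt(sum(v * v for v in a.values()))
            * math.sqrt(sum(v * v for v in b.values())))
    return dot / norm if norm else 0.0

def entailment(candidate: str, reference: str) -> float:
    """Stand-in for an NLI entailment probability: the fraction of
    candidate tokens that appear in the reference."""
    cand = candidate.lower().split()
    ref = set(reference.lower().split())
    return sum(t in ref for t in cand) / len(cand) if cand else 0.0

def acceptability(candidate: str) -> float:
    """Stand-in for a linguistic-acceptability score; a real system
    would use a classifier trained on acceptability judgments."""
    n = len(candidate.split())
    return 1.0 if 3 <= n <= 100 else 0.5

def seascore(candidate: str, reference: str,
             weights=(1 / 3, 1 / 3, 1 / 3)) -> float:
    """Weighted combination of the three feature scores, in [0, 1].
    Equal weights are an assumption for illustration only."""
    feats = (semantic_similarity(candidate, reference),
             entailment(candidate, reference),
             acceptability(candidate))
    return sum(w * f for w, f in zip(weights, feats))
```

A summary close to its reference then scores higher than an unrelated one, which is the behavior the metric's correlation with human judgment depends on.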

Keywords

Computer Engineering and Computer Science and Control, Deep learning

Start Page

0

End Page

123