Developing and Evaluating a Model-Based Metric for Legal Question Answering Systems
Date
2023
Authors
Journal Title
Journal ISSN
Volume Title
Publisher
Institute of Electrical and Electronics Engineers Inc.
Abstract
In the complex domain of law, Question Answering (QA) systems are only useful if they produce answers that are correct, context-aware, and logically sound. Traditional evaluation methods, which rely on surface-level similarity measures, cannot capture the nuanced accuracy and reasoning required in legal answers, so evaluation practice needs to change fundamentally. To address the shortcomings of current methods, this study presents a new model-based evaluation metric designed specifically for legal QA systems. We examine the principles such a metric must satisfy, the challenges of deploying it in practice, the selection of suitable technological frameworks, and the design of sound evaluation procedures. We describe a theoretical framework grounded in legal standards and computational linguistics, and we explain how the metric was developed and how it can be applied in practice. Our results, obtained from thorough experiments, show that the proposed metric outperforms existing ones: it is more reliable, more accurate, and more useful for evaluating legal question answering systems. © 2023 IEEE.
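For illustration only, and not taken from the paper: a minimal sketch of what a model-based evaluation metric can look like, scoring a candidate answer against a reference answer with transformer embeddings rather than surface word overlap. The encoder choice and the example sentences are assumptions, not details from this work.

```python
# Illustrative sketch only: a simple embedding-based answer-scoring metric,
# not the metric proposed in the paper. Assumes the sentence-transformers package.
from sentence_transformers import SentenceTransformer, util

model = SentenceTransformer("all-MiniLM-L6-v2")  # hypothetical choice of encoder

def model_based_score(candidate: str, reference: str) -> float:
    """Cosine similarity between answer embeddings, in [-1, 1]."""
    emb = model.encode([candidate, reference], convert_to_tensor=True)
    return util.cos_sim(emb[0], emb[1]).item()

# Semantically equivalent legal answers can score high even with little word overlap,
# which surface-level metrics such as exact match or n-gram overlap would miss.
print(model_based_score(
    "The contract is void because one party lacked capacity.",
    "Because a party did not have legal capacity, the agreement is unenforceable.",
))
```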
Description
Ankura; IEEE Dataport
Keywords
Large Language Model, Model-Based Evaluation Metric, Natural Language Processing, Question Answering Systems, Transformer Models
Turkish CoHE Thesis Center URL
Fields of Science
Citation
0
WoS Q
Scopus Q
Source
Proceedings - 2023 IEEE International Conference on Big Data (BigData 2023), 15-18 December 2023, Sorrento
Volume
Issue
Start Page
2745
End Page
2754