Developing and Evaluating a Model-Based Metric for Legal Question Answering Systems

dc.authorscopusid 57844028200
dc.authorscopusid 14632851900
dc.authorscopusid 8410237700
dc.contributor.author Bakir, D.
dc.contributor.author Yildiz, B.
dc.contributor.author Aktas, M. S.
dc.contributor.other Software Engineering
dc.date.accessioned 2024-07-05T15:50:27Z
dc.date.available 2024-07-05T15:50:27Z
dc.date.issued 2023
dc.department Atılım University en_US
dc.department-temp Bakir D., Yildiz Technical University, Computer Engineering Department, Istanbul, Turkey; Yildiz B., Atilim University, Software Engineering Department, Ankara, Turkey; Aktas M.S., Yildiz Technical University, Computer Engineering Department, Istanbul, Turkey en_US
dc.description Ankura; IEEE Dataport en_US
dc.description.abstract In the complex legal domain, Question Answering (QA) systems are useful only if they provide correct, context-aware, and logically sound answers. Traditional evaluation methods, which rely on surface-level similarity measures, cannot capture the nuanced accuracy and reasoning required of legal answers, so evaluation approaches must change fundamentally. To address these shortcomings, this study presents a new model-based evaluation metric designed for legal QA systems. We examine the foundational principles such a metric requires, the challenges of applying it in practice, the selection of suitable technological frameworks, and the design of sound evaluation methods. We describe a theoretical framework grounded in legal standards and computational linguistics, and we explain how the metric was developed and how it can be used in practice. Our results, obtained from thorough experiments, show that the proposed metric outperforms existing ones: it is more reliable, accurate, and useful for evaluating legal QA systems. © 2023 IEEE. en_US
dc.identifier.citationcount 0
dc.identifier.doi 10.1109/BigData59044.2023.10386689
dc.identifier.endpage 2754 en_US
dc.identifier.isbn 979-835032445-7
dc.identifier.scopus 2-s2.0-85184987087
dc.identifier.startpage 2745 en_US
dc.identifier.uri https://doi.org/10.1109/BigData59044.2023.10386689
dc.identifier.uri https://hdl.handle.net/20.500.14411/4147
dc.institutionauthor Yıldız, Beytullah
dc.language.iso en en_US
dc.publisher Institute of Electrical and Electronics Engineers Inc. en_US
dc.relation.ispartof Proceedings - 2023 IEEE International Conference on Big Data, BigData 2023 -- 15 December 2023 through 18 December 2023 -- Sorrento -- 196820 en_US
dc.relation.publicationcategory Conference Object - International - Institutional Faculty Member en_US
dc.rights info:eu-repo/semantics/closedAccess en_US
dc.scopus.citedbyCount 1
dc.subject Large Language Model en_US
dc.subject Model-Based Evaluation Metric en_US
dc.subject Natural Language Processing en_US
dc.subject Question Answering Systems en_US
dc.subject Transformer Models en_US
dc.title Developing and Evaluating a Model-Based Metric for Legal Question Answering Systems en_US
dc.type Conference Object en_US
dspace.entity.type Publication
relation.isAuthorOfPublication 8eb144cb-95ff-4557-a99c-cd0ffa90749d
relation.isAuthorOfPublication.latestForDiscovery 8eb144cb-95ff-4557-a99c-cd0ffa90749d
relation.isOrgUnitOfPublication d86bbe4b-0f69-4303-a6de-c7ec0c515da5
relation.isOrgUnitOfPublication.latestForDiscovery d86bbe4b-0f69-4303-a6de-c7ec0c515da5
