Developing and Evaluating a Model-Based Metric for Legal Question Answering Systems

dc.authorscopusid57844028200
dc.authorscopusid14632851900
dc.authorscopusid8410237700
dc.contributor.authorYıldız, Beytullah
dc.contributor.authorYildiz, B.
dc.contributor.authorAktas, M.S.
dc.contributor.otherSoftware Engineering
dc.date.accessioned2024-07-05T15:50:27Z
dc.date.available2024-07-05T15:50:27Z
dc.date.issued2023
dc.departmentAtılım Universityen_US
dc.department-tempBakir D., Yildiz Technical University, Computer Engineering Department, Istanbul, Turkey; Yildiz B., Atilim University, Software Engineering Department, Ankara, Turkey; Aktas M.S., Yildiz Technical University, Computer Engineering Department, Istanbul, Turkeyen_US
dc.descriptionAnkura; IEEE Dataporten_US
dc.description.abstractIn the complex domain of law, Question Answering (QA) systems are useful only if they provide correct, context-aware, and logically sound answers. Traditional evaluation methods, which rely on superficial similarity measures, cannot capture the nuanced accuracy and reasoning required of legal answers, so evaluation practice must change fundamentally. To address the shortcomings of current methods, this study presents a new model-based evaluation metric designed for legal QA systems. We examine the principles such a metric must satisfy, the challenges of putting it into practice, the selection of suitable technological frameworks, and the design of sound evaluation procedures. We present a theoretical framework grounded in legal standards and computational linguistics, and we describe how the metric was developed and how it can be applied in practice. Our results, drawn from thorough experiments, show that the proposed metric outperforms existing ones: it is more reliable, more accurate, and more useful for evaluating legal QA systems. © 2023 IEEE.en_US
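For illustration only: the abstract above contrasts model-based evaluation with surface-similarity measures but does not specify the metric's implementation. The following minimal sketch shows the general idea of scoring a candidate legal answer against a reference answer in embedding space rather than by lexical overlap. It assumes the sentence-transformers library; the checkpoint name and the function model_based_score are hypothetical and are not taken from the paper.

    # Illustrative sketch only; not the paper's actual metric.
    # Scores a candidate answer against a reference answer with a pretrained
    # sentence-embedding model instead of a surface-overlap measure.
    from sentence_transformers import SentenceTransformer, util

    # Hypothetical model choice; the paper does not specify a checkpoint.
    model = SentenceTransformer("all-MiniLM-L6-v2")

    def model_based_score(candidate_answer: str, reference_answer: str) -> float:
        """Return the cosine similarity between embeddings of the two answers."""
        embeddings = model.encode([candidate_answer, reference_answer])
        return float(util.cos_sim(embeddings[0], embeddings[1]))

    if __name__ == "__main__":
        reference = "The limitation period for contractual claims is ten years."
        candidate = "Claims arising from a contract become time-barred after ten years."
        print(f"model-based score: {model_based_score(candidate, reference):.3f}")

A semantically equivalent paraphrase like the one above would score near 1.0 under an embedding-based measure while an exact-match or n-gram metric would penalize it, which is the gap such model-based metrics aim to close.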
dc.identifier.citation0
dc.identifier.doi10.1109/BigData59044.2023.10386689
dc.identifier.endpage2754en_US
dc.identifier.isbn979-835032445-7
dc.identifier.scopus2-s2.0-85184987087
dc.identifier.startpage2745en_US
dc.identifier.urihttps://doi.org/10.1109/BigData59044.2023.10386689
dc.identifier.urihttps://hdl.handle.net/20.500.14411/4147
dc.language.isoenen_US
dc.publisherInstitute of Electrical and Electronics Engineers Inc.en_US
dc.relation.ispartofProceedings - 2023 IEEE International Conference on Big Data, BigData 2023 -- 2023 IEEE International Conference on Big Data, BigData 2023 -- 15 December 2023 through 18 December 2023 -- Sorrento -- 196820en_US
dc.relation.publicationcategoryConference Item - International - Institutional Faculty Memberen_US
dc.rightsinfo:eu-repo/semantics/closedAccessen_US
dc.subjectLarge Language Modelen_US
dc.subjectModel-Based Evaluation Metricen_US
dc.subjectNatural Language Processingen_US
dc.subjectQuestion Answering Systemsen_US
dc.subjectTransformer Modelsen_US
dc.titleDeveloping and Evaluating a Model-Based Metric for Legal Question Answering Systemsen_US
dc.typeConference Objecten_US
dspace.entity.typePublication
relation.isAuthorOfPublication8eb144cb-95ff-4557-a99c-cd0ffa90749d
relation.isAuthorOfPublication.latestForDiscovery8eb144cb-95ff-4557-a99c-cd0ffa90749d
relation.isOrgUnitOfPublicationd86bbe4b-0f69-4303-a6de-c7ec0c515da5
relation.isOrgUnitOfPublication.latestForDiscoveryd86bbe4b-0f69-4303-a6de-c7ec0c515da5