Search Results
Now showing 1 - 2 of 2
Conference Object
Hybrid AI-Driven Decision Model for Test Automation in Agile Software Development (Institute of Electrical and Electronics Engineers Inc., 2025) Bon, Mohammad; Yazici, Ali

Test automation plays an essential role in Agile Software Development (ASD), but its implementation remains complex. This study conducts a Systematic Literature Review (SLR) to identify key factors in test automation and recent developments in Artificial Intelligence (AI). Based on the 21 factors proposed by Butt et al., we construct a three-phase decision-support model addressing software, tool, test, human, and economic dimensions. To improve this model, modern AI techniques and tools are used, including natural language processing (NLP), machine learning (ML), Mabl (a self-healing, AI-based test automation tool), and Parasoft Selenic. These technologies automate test case generation, prioritization, and maintenance, aligning with Agile's fast-paced demands. Our proposed hybrid model applies NLP to identify influencing factors, ML for impact scoring, and reinforcement learning (RL) for guiding automation strategies. The goal is to reduce manual effort, improve decision accuracy, and adapt to evolving requirements. However, challenges such as data quality and the need for AI expertise remain. Future work should focus on practical validation and explore applications in non-functional testing. This study offers a practical, AI-enhanced framework to support Agile teams in streamlining test automation. © 2025 IEEE.

Conference Object
Citation - Scopus: 2
Developing and Evaluating a Model-Based Metric for Legal Question Answering Systems (Institute of Electrical and Electronics Engineers Inc., 2023) Bakir, D.; Yildiz, B.; Aktas, M.S.

In the complex legal domain, Question Answering (QA) systems are useful only if they give correct, context-aware, and logically sound answers. Traditional evaluation methods, which rely on surface-level similarity measures, cannot capture the accuracy and reasoning required of legal answers, so evaluation itself needs to be rethought. To address these shortcomings, this study presents a new model-based evaluation metric designed for legal QA systems. We examine the principles such a metric must satisfy, the challenges of putting it into practice, the selection of suitable technological frameworks, and the design of sound evaluation methods. We present a theoretical framework grounded in legal standards and computational linguistics, and we describe how the metric was created and how it can be applied in practice. Our results, drawn from thorough experiments, show that the proposed metric is more reliable, accurate, and useful than existing ones for evaluating legal QA systems. © 2023 IEEE.
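The first entry's pipeline (NLP to identify influencing factors, ML for impact scoring, RL to guide the automation strategy) can be illustrated with a minimal sketch. Everything below is hypothetical: the factor lexicon, the fixed weights standing in for a trained scoring model, and the toy reward signal are assumptions for illustration only, not the model or data from the paper.

```python
# Minimal sketch of a three-stage NLP -> ML -> RL decision pipeline.
# All names, weights, and rewards are hypothetical stand-ins.
import random

# Stage 1 (NLP stand-in): keyword lookup that flags influencing factors
# in a free-text requirement; a real system would use an NLP model.
FACTOR_LEXICON = {
    "regression": "test_reusability",
    "ui": "tool_support",
    "deadline": "time_pressure",
    "budget": "economic_constraint",
}

def extract_factors(requirement: str) -> list[str]:
    words = requirement.lower().split()
    return [f for kw, f in FACTOR_LEXICON.items() if kw in words]

# Stage 2 (ML stand-in): fixed per-factor weights play the role of a
# trained impact-scoring model.
FACTOR_WEIGHTS = {
    "test_reusability": 0.9,
    "tool_support": 0.6,
    "time_pressure": 0.4,
    "economic_constraint": 0.3,
}

def impact_score(factors: list[str]) -> float:
    return sum(FACTOR_WEIGHTS.get(f, 0.0) for f in factors)

# Stage 3 (RL stand-in): an epsilon-greedy bandit over automation
# strategies, updated from a simulated reward signal.
STRATEGIES = ["automate_now", "defer", "manual_only"]

def choose_strategy(q: dict, eps: float = 0.1) -> str:
    if random.random() < eps:
        return random.choice(STRATEGIES)
    return max(STRATEGIES, key=lambda s: q[s])

def update(q, counts, strategy, reward):
    counts[strategy] += 1
    q[strategy] += (reward - q[strategy]) / counts[strategy]  # running mean

if __name__ == "__main__":
    q = {s: 0.0 for s in STRATEGIES}
    counts = {s: 0 for s in STRATEGIES}
    req = "UI regression suite must ship before the deadline"
    score = impact_score(extract_factors(req))
    for _ in range(200):  # simulated feedback loop
        s = choose_strategy(q)
        # Toy reward: automating pays off when the impact score is high.
        reward = score if s == "automate_now" else 1.0
        update(q, counts, s, reward)
    print(f"impact={score:.2f}, recommended={max(q, key=q.get)}")
```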
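For the second entry, the contrast between a surface-overlap measure and a model-based one can likewise be sketched. This is not the metric developed in the paper; it is a stand-in that assumes the open-source sentence-transformers library and its all-MiniLM-L6-v2 encoder, and it scores a candidate answer against a reference by embedding similarity.

```python
# Sketch of a model-based (semantic) answer score, as opposed to
# surface word overlap. Assumes: pip install sentence-transformers.
from sentence_transformers import SentenceTransformer, util

model = SentenceTransformer("all-MiniLM-L6-v2")  # small general-purpose encoder

def model_based_score(candidate: str, reference: str) -> float:
    """Cosine similarity of sentence embeddings, in [-1, 1]."""
    emb = model.encode([candidate, reference])
    return float(util.cos_sim(emb[0], emb[1]))

reference = "The tenant may terminate the lease with 30 days' written notice."
candidate = "A renter can end the agreement by giving one month's notice in writing."
print(f"semantic score: {model_based_score(candidate, reference):.3f}")
```

A bag-of-words overlap would rate this pair low because the two sentences share almost no tokens despite being equivalent in meaning; that gap is exactly what a model-based metric is intended to close.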

