Author: Khan, Muhammad Umer (Khan, M. U.)
Department: Mechatronics Engineering
Record dates: 2024-09-10; 2024-09-10
Publication year: 2019
ISSN: 2147-284X
DOI: 10.17694/bajece.532746
https://doi.org/10.17694/bajece.532746
https://search.trdizin.gov.tr/tr/yayin/detay/318134/mobile-robot-navigation-using-reinforcement-learning-in-unknown-environments
https://hdl.handle.net/20.500.14411/7421

Abstract: In mobile robotics, navigation is considered one of the primary tasks, and it becomes more challenging during local navigation when the environment is unknown. The robot therefore has to explore using its sensory information. Reinforcement learning (RL), a biologically inspired learning paradigm, has attracted considerable attention because of its capability to learn autonomously in an unknown environment. However, the randomized exploration behavior common in RL increases computation time and cost, making it less appealing for real-world scenarios. This paper proposes an informed-biased softmax regression (iBSR) learning process that introduces a heuristic-based cost function to ensure faster convergence. Here, action selection is not treated as a random process; rather, it is based on the maximum probability computed using softmax regression. The strength of the proposed approach is tested through simulated navigation scenarios, and, for comparison and analysis, the iBSR learning process is evaluated against two benchmark algorithms.

Access: info:eu-repo/semantics/openAccess
Subjects: Engineering, Biomedical; Engineering, Electrical & Electronic; Computer Science, Software Engineering; Green & Sustainable Science & Technology; Telecommunications; Computer Science, Cybernetics; Computer Science, Information Systems; Computer Science, Hardware & Architecture; Computer Science, Theory & Methods; Computer Science, Artificial Intelligence
Title: Mobile Robot Navigation Using Reinforcement Learning in Unknown Environments
Type: Article
Volume: 7
Issue: 3
Pages: 235-244
TR Dizin record ID: 318134
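
Illustration: the abstract describes action selection via softmax probabilities over scores combined with a heuristic-based cost term. The sketch below is not the paper's iBSR implementation; it is a minimal example of softmax-based action selection with an added heuristic bias, where the function names, the form of the heuristic term, and the temperature parameter are illustrative assumptions.

```python
import numpy as np

def softmax(x):
    """Numerically stable softmax over a 1-D array of scores."""
    z = x - np.max(x)
    e = np.exp(z)
    return e / e.sum()

def select_action(q_values, heuristic_bias, temperature=1.0):
    """Pick the action with the highest softmax probability.

    q_values       : learned action values for the current state
    heuristic_bias : per-action heuristic preference (assumed form; e.g. a
                     negative cost related to distance-to-goal after the move)
    temperature    : controls how sharply probabilities favor high scores
    """
    scores = (np.asarray(q_values) + np.asarray(heuristic_bias)) / temperature
    probs = softmax(scores)
    return int(np.argmax(probs)), probs

# Example: four candidate moves (e.g. up, down, left, right)
q = [0.2, 0.5, 0.1, 0.4]          # learned action values (made up for the demo)
h = [-0.3, -0.1, -0.6, -0.2]      # heuristic bias (made up for the demo)
action, probs = select_action(q, h)
print(action, probs)
```

Because the argmax of the softmax probabilities equals the argmax of the biased scores, this selection rule is deterministic given the current value estimates, which is consistent with the abstract's claim that action selection is not a random process.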