A Reinforcement Learning Algorithm for Data Collection in UAV-aided IoT Networks with Uncertain Time Windows


Date

2021

Journal Title

Journal ISSN

Volume Title

Publisher

IEEE

Research Projects

Organizational Units

Organizational Unit
Industrial Engineering
(1998)
Industrial Engineering is a field of engineering that develops and applies methods and techniques to design, implement, improve, and operate systems comprising humans, materials, machines, energy, and capital. Our department was founded in 1998 and has since graduated hundreds of individuals able to compete nationally and internationally in professional life. Accredited by MÜDEK in 2014, our student-centered education continues. In addition to acquiring the knowledge necessary for every industrial engineer, our students can gain professional experience in their desired fields of expertise through a wide array of elective courses, such as E-commerce and ERP, Reliability, Tabulation, or Industrial Engineering Applications in the Energy Sector. With dissertation projects built around solving real problems at real companies, our students gain sector experience and a wide network of contacts. Our education is supported by ERASMUS programs. With the scientific studies of our accomplished academic staff published in internationally renowned journals, our department ranks among the best at other universities. IESC, one of the most active student societies at our university, continues to organize extensive and productive events every year.

Journal Issue

Abstract

Unmanned aerial vehicles (UAVs) have been considered an efficient solution for collecting data from ground sensor nodes in Internet-of-Things (IoT) networks due to advantages such as flexibility, quick deployment, and maneuverability. Studies on this subject have mainly focused on problems where the limited UAV battery is introduced as a tight constraint that shortens the mission time in the models, which significantly undervalues the potential of UAVs. Moreover, the sensors in the network are typically assumed to have deterministic working times during which the data is uploaded. In this study, we revisit the UAV trajectory planning problem with a different approach and revise the battery constraint by allowing UAVs to swap their batteries at fixed stations and continue their data collection task, thereby extending the planning horizon. In particular, we develop a discrete-time Markov process (DTMP) in which the UAV trajectory and battery swapping times are jointly determined to minimize the total data loss in the network, where the sensors have uncertain time windows for uploading. Due to the so-called curse of dimensionality, we propose a reinforcement learning (RL) algorithm in which the UAV is trained as an agent to explore the network. The computational study shows that our proposed algorithm outperforms two benchmark approaches and achieves a significant reduction in data loss.
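The setting described in the abstract — a discrete-time process over the UAV's position, remaining battery, and time, with battery swaps at a fixed station and sensors whose upload windows are random — can be illustrated with a minimal tabular Q-learning sketch. Everything below (the grid size, sensor and station locations, battery capacity, window lengths, and the reward, which uses collected data as a proxy for negated data loss) is an illustrative assumption, not the authors' exact model or algorithm.

```python
import random
from collections import defaultdict

# Hypothetical miniature instance: a UAV on a small grid, one battery-swap
# station, and sensors that upload only during random time windows.
GRID = 4                              # 4x4 grid (assumed)
SENSORS = {(0, 3), (3, 0), (3, 3)}    # sensor locations (assumed)
STATION = (0, 0)                      # battery-swap station (assumed)
MAX_BATT = 8                          # moves per fresh battery (assumed)
HORIZON = 20                          # decision epochs per episode
ACTIONS = [(0, 1), (0, -1), (1, 0), (-1, 0), (0, 0)]  # 4 moves + hover

def draw_windows(rng):
    """Uncertain time windows: each sensor's upload interval is random."""
    out = {}
    for s in SENSORS:
        start = rng.randrange(0, HORIZON - 4)
        out[s] = (start, start + 4)
    return out

def step(pos, batt, t, action, windows):
    """One DTMP transition: move (a swap happens at the station), then
    collect if the new cell holds a sensor inside its active window."""
    nxt = (min(GRID - 1, max(0, pos[0] + action[0])),
           min(GRID - 1, max(0, pos[1] + action[1])))
    if batt == 0 and nxt != STATION:
        return pos, 0, -1.0           # stranded: this epoch's data is lost
    batt = MAX_BATT if nxt == STATION else batt - 1
    lo, hi = windows.get(nxt, (-1, -1))
    reward = 1.0 if lo <= t <= hi else 0.0
    return nxt, batt, reward

def train(episodes=3000, alpha=0.3, gamma=0.95, eps=0.2, seed=0):
    """Epsilon-greedy tabular Q-learning over states (pos, battery, t)."""
    rng = random.Random(seed)
    Q = defaultdict(float)
    for _ in range(episodes):
        windows = draw_windows(rng)   # windows are re-drawn each episode
        pos, batt = STATION, MAX_BATT
        for t in range(HORIZON):
            state = (pos, batt, t)
            if rng.random() < eps:
                a = rng.randrange(len(ACTIONS))
            else:
                a = max(range(len(ACTIONS)), key=lambda i: Q[(state, i)])
            nxt, nb, r = step(pos, batt, t, ACTIONS[a], windows)
            ns = (nxt, nb, t + 1)
            best = max(Q[(ns, i)] for i in range(len(ACTIONS)))
            Q[(state, a)] += alpha * (r + gamma * best - Q[(state, a)])
            pos, batt = nxt, nb
    return Q
```

Treating time as part of the state keeps the process Markovian despite the windows, which is why the state explodes combinatorially and motivates learning a policy rather than solving the DTMP exactly.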

Description

Cicek, Cihan Tugrul/0000-0002-3532-2638

Keywords

UAV, internet-of-things, reinforcement learning, battery swapping, time windows, uncertainty

Turkish CoHE Thesis Center URL

Citation

0

WoS Q

Scopus Q

Source

IEEE International Conference on Communications (ICC) -- JUN 14-23, 2021 -- ELECTR NETWORK

Volume

Issue

Start Page

End Page

Collections