A Reinforcement Learning Algorithm for Data Collection in UAV-aided IoT Networks with Uncertain Time Windows

dc.authoridCicek, Cihan Tugrul/0000-0002-3532-2638
dc.authorscopusid57208147005
dc.authorwosidCicek, Cihan Tugrul/AAF-7787-2019
dc.contributor.authorÇiçek, Cihan Tuğrul
dc.contributor.otherIndustrial Engineering
dc.date.accessioned2024-07-05T15:19:11Z
dc.date.available2024-07-05T15:19:11Z
dc.date.issued2021
dc.departmentAtılım Universityen_US
dc.department-temp[Cicek, Cihan Tugrul] Atilim Univ, Dept Ind Engn, Ankara, Turkeyen_US
dc.descriptionCicek, Cihan Tugrul/0000-0002-3532-2638en_US
dc.description.abstractUnmanned aerial vehicles (UAVs) have been considered an efficient solution for collecting data from ground sensor nodes in Internet-of-Things (IoT) networks due to several advantages, such as flexibility, quick deployment and maneuverability. Studies on this subject have mainly focused on problems where the limited UAV battery is introduced as a tight constraint that shortens the mission time in the models, which significantly undervalues the UAV's potential. Moreover, the sensors in the network are typically assumed to have deterministic working times during which the data is uploaded. In this study, we revisit the UAV trajectory planning problem with a different approach and revise the battery constraint by allowing UAVs to swap their batteries at fixed stations and continue their data collection task, thereby extending the planning horizon. In particular, we develop a discrete-time Markov process (DTMP) in which the UAV trajectory and battery swapping times are jointly determined to minimize the total data loss in the network, where the sensors have uncertain time windows for uploading. Due to the so-called curse of dimensionality, we propose a reinforcement learning (RL) algorithm in which the UAV is trained as an agent to explore the network. The computational study shows that our proposed algorithm outperforms two benchmark approaches and achieves a significant reduction in data loss.en_US
dc.identifier.citation0
dc.identifier.doi10.1109/ICCWorkshops50388.2021.9473768
dc.identifier.isbn9781728194417
dc.identifier.issn2164-7038
dc.identifier.scopus2-s2.0-85112795751
dc.identifier.urihttps://doi.org/10.1109/ICCWorkshops50388.2021.9473768
dc.identifier.urihttps://hdl.handle.net/20.500.14411/1947
dc.identifier.wosWOS:000848412200244
dc.institutionauthorCicek, Cihan Tugrul
dc.language.isoenen_US
dc.publisherIEEEen_US
dc.relation.ispartofIEEE International Conference on Communications (ICC) -- JUN 14-23, 2021 -- ELECTR NETWORKen_US
dc.relation.ispartofseriesIEEE International Conference on Communications Workshops
dc.relation.publicationcategoryConference Item - International - Institutional Academic Staffen_US
dc.rightsinfo:eu-repo/semantics/closedAccessen_US
dc.subjectUAVen_US
dc.subjectinternet-of-thingsen_US
dc.subjectreinforcement learningen_US
dc.subjectbattery swappingen_US
dc.subjecttime windowsen_US
dc.subjectuncertaintyen_US
dc.titleA Reinforcement Learning Algorithm for Data Collection in UAV-aided IoT Networks with Uncertain Time Windowsen_US
dc.typeConference Objecten_US
dspace.entity.typePublication
relation.isAuthorOfPublication82ea98fd-36fb-4469-8e29-b73dc71cabb9
relation.isAuthorOfPublication.latestForDiscovery82ea98fd-36fb-4469-8e29-b73dc71cabb9
relation.isOrgUnitOfPublication12c9377e-b7fe-4600-8326-f3613a05653d
relation.isOrgUnitOfPublication.latestForDiscovery12c9377e-b7fe-4600-8326-f3613a05653d