Attitude Control of Quad-Copter Using Deterministic Policy Gradient Algorithms (DPGA)

dc.authorid Khan, Muhammad/0000-0002-9195-3477
dc.authorwosid Khan, Muhammad/N-5478-2016
dc.authorwosid Khan, Haroon ur Rashid u/B-6188-2016
dc.contributor.author Ghouri, Usama Hamayun
dc.contributor.author Zafar, Muhammad Usama
dc.contributor.author Bari, Salman
dc.contributor.author Khan, Haroon
dc.contributor.author Khan, Muhammad Umer
dc.contributor.other Mechatronics Engineering
dc.date.accessioned 2024-07-05T15:28:36Z
dc.date.available 2024-07-05T15:28:36Z
dc.date.issued 2019
dc.department Atılım University en_US
dc.department-temp [Ghouri, Usama Hamayun; Zafar, Muhammad Usama; Bari, Salman; Khan, Haroon] Air Univ, Dept Mechatron Engn, Islamabad, Pakistan; [Khan, Muhammad Umer] Atilim Univ, Dept Mechatron Engn, Ankara, Turkey en_US
dc.description Khan, Muhammad/0000-0002-9195-3477; en_US
dc.description.abstract In aerial robotics, intelligent control has attracted significant attention in recent years. Extensive research effort has gone into producing control algorithms for stable flight operation of aerial robots using machine learning. Supervised learning is one candidate, but training an agent with supervised learning can be a tedious task; moreover, data gathering can be expensive and is prone to inaccuracies caused by parametric variations and system dynamics. An alternative approach is to ensure the stability of aerial robots with the help of Deep Reinforcement Learning (DRL). This paper deals with the intelligent control of a quad-copter using deterministic policy gradient algorithms. In this research, the state-of-the-art Deep Deterministic Policy Gradient (DDPG) and Distributed Distributional Deep Deterministic Policy Gradient (D4PG) algorithms are employed for attitude control of the quad-copter. The open-source simulation environment GymFC is used to train the quad-copter. Results of a comparative analysis of the DDPG and D4PG algorithms are also presented, highlighting their attitude-control performance. en_US
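For illustration only, the sketch below shows the core DDPG update that the abstract refers to, written in PyTorch. The network sizes, state/action dimensions, hyper-parameters, and the use of random placeholder transitions are assumptions made for this sketch and are not taken from the paper; in the authors' work the transitions would come from GymFC roll-outs.

# Minimal DDPG update sketch (assumptions: PyTorch; hypothetical dimensions
# and hyper-parameters; placeholder data instead of GymFC roll-outs).
import torch
import torch.nn as nn
import torch.nn.functional as F

STATE_DIM, ACTION_DIM = 7, 4   # hypothetical: attitude errors + rates -> 4 motor commands
GAMMA, TAU = 0.99, 0.005       # hypothetical discount factor and soft-update rate

class Actor(nn.Module):
    """Deterministic policy mu(s) -> action in [-1, 1]."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(STATE_DIM, 64), nn.ReLU(),
            nn.Linear(64, ACTION_DIM), nn.Tanh())
    def forward(self, s):
        return self.net(s)

class Critic(nn.Module):
    """Action-value function Q(s, a)."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(STATE_DIM + ACTION_DIM, 64), nn.ReLU(),
            nn.Linear(64, 1))
    def forward(self, s, a):
        return self.net(torch.cat([s, a], dim=-1))

actor, critic = Actor(), Critic()
actor_t, critic_t = Actor(), Critic()          # target networks
actor_t.load_state_dict(actor.state_dict())
critic_t.load_state_dict(critic.state_dict())
actor_opt = torch.optim.Adam(actor.parameters(), lr=1e-4)
critic_opt = torch.optim.Adam(critic.parameters(), lr=1e-3)

def ddpg_update(s, a, r, s_next, done):
    """One DDPG step on a batch of transitions supplied by the caller."""
    with torch.no_grad():                      # bootstrapped TD target
        q_target = r + GAMMA * (1 - done) * critic_t(s_next, actor_t(s_next))
    critic_loss = F.mse_loss(critic(s, a), q_target)
    critic_opt.zero_grad(); critic_loss.backward(); critic_opt.step()

    actor_loss = -critic(s, actor(s)).mean()   # deterministic policy gradient
    actor_opt.zero_grad(); actor_loss.backward(); actor_opt.step()

    for net, net_t in ((actor, actor_t), (critic, critic_t)):
        for p, p_t in zip(net.parameters(), net_t.parameters()):
            p_t.data.mul_(1 - TAU).add_(TAU * p.data)   # soft target update

# Smoke test with random transitions standing in for simulator data.
B = 32
ddpg_update(torch.randn(B, STATE_DIM), torch.rand(B, ACTION_DIM) * 2 - 1,
            torch.randn(B, 1), torch.randn(B, STATE_DIM), torch.zeros(B, 1))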
dc.identifier.citationcount 5
dc.identifier.doi 10.1109/c-code.2019.8681003
dc.identifier.endpage 153 en_US
dc.identifier.isbn 9781538696095
dc.identifier.startpage 149 en_US
dc.identifier.uri https://doi.org/10.1109/c-code.2019.8681003
dc.identifier.uri https://hdl.handle.net/20.500.14411/2823
dc.identifier.wos WOS:000469783600029
dc.institutionauthor Khan, Muhammad Umer
dc.language.iso en en_US
dc.publisher IEEE en_US
dc.relation.ispartof 2nd International Conference on Communication, Computing and Digital Systems (C-CODE) -- MAR 06-07, 2019 -- Bahria Univ, Islamabad, PAKISTAN en_US
dc.relation.publicationcategory Konferans Öğesi - Uluslararası - Kurum Öğretim Elemanı en_US
dc.rights info:eu-repo/semantics/closedAccess en_US
dc.subject Deep reinforcement learning en_US
dc.subject DDPG en_US
dc.subject D4PG en_US
dc.subject Quad-copter control en_US
dc.subject GymFC en_US
dc.title Attitude Control of Quad-Copter Using Deterministic Policy Gradient Algorithms (DPGA) en_US
dc.type Conference Object en_US
dc.wos.citedbyCount 6
dspace.entity.type Publication
relation.isAuthorOfPublication e2e22115-4c8f-46cc-bce9-27539d99955e
relation.isAuthorOfPublication.latestForDiscovery e2e22115-4c8f-46cc-bce9-27539d99955e
relation.isOrgUnitOfPublication cfebf934-de19-4347-b1c4-16bed15637f7
relation.isOrgUnitOfPublication.latestForDiscovery cfebf934-de19-4347-b1c4-16bed15637f7
