Reinforcement learning using fully connected, attention, and transformer models in knapsack problem solving
Date
2022
Publisher
Wiley
Abstract
The knapsack problem (KP) is a combinatorial optimization problem that underlies a wide variety of resource allocation challenges. It is non-deterministic polynomial time (NP) hard, has a broad range of applications, and has been studied in applied mathematics and computer science for decades. Many algorithms, classifiable as either exact or approximate, have been proposed. Exact approaches include branch-and-bound, dynamic programming, and hybrids that combine them; because exact solutions require long processing times, many approximate methods have also been introduced. In this research, deep Q-learning with fully connected, attention-based, and transformer models as function approximators was used to solve the KP. Deep Q-networks trained on the reward signals emitted by the knapsack environment we developed increased the total reward they earned over time. The results show that our approaches produce near-optimum solutions and run about 40 times faster than an exact algorithm based on dynamic programming.
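As a quick illustration of the two families of methods contrasted in the abstract, the sketch below pairs the exact dynamic programming baseline with a minimal deep Q-learning loop that uses a fully connected Q-network. The environment design, state encoding (current item value and weight plus remaining capacity), network sizes, and hyperparameters are illustrative assumptions and not the authors' implementation; the attention and transformer function approximators described in the paper are omitted here.

```python
import random
import torch
import torch.nn as nn

# Exact 0/1 knapsack baseline via dynamic programming
# (the comparison point mentioned in the abstract).
def knapsack_dp(values, weights, capacity):
    best = [0] * (capacity + 1)
    for v, w in zip(values, weights):
        for c in range(capacity, w - 1, -1):
            best[c] = max(best[c], best[c - w] + v)
    return best[capacity]

# Hypothetical episodic environment: at each step the agent decides
# whether to take the next item; the reward is the value gained.
class KnapsackEnv:
    def __init__(self, values, weights, capacity):
        self.values, self.weights, self.capacity = values, weights, capacity
        self.reset()

    def reset(self):
        self.idx, self.remaining = 0, self.capacity
        return self._state()

    def _state(self):
        v = self.values[self.idx] if self.idx < len(self.values) else 0
        w = self.weights[self.idx] if self.idx < len(self.weights) else 0
        return torch.tensor([v, w, self.remaining], dtype=torch.float32)

    def step(self, action):  # action 1 = take item, 0 = skip
        reward = 0.0
        if action == 1 and self.weights[self.idx] <= self.remaining:
            reward = float(self.values[self.idx])
            self.remaining -= self.weights[self.idx]
        self.idx += 1
        done = self.idx >= len(self.values)
        return self._state(), reward, done

# Fully connected Q-network: maps a state to Q-values for {skip, take}.
q_net = nn.Sequential(nn.Linear(3, 64), nn.ReLU(), nn.Linear(64, 2))
optimizer = torch.optim.Adam(q_net.parameters(), lr=1e-3)
gamma, epsilon = 0.99, 0.1

values = [60, 100, 120, 30, 70]
weights = [10, 20, 30, 5, 15]
env = KnapsackEnv(values, weights, capacity=50)

for episode in range(500):
    state, done, total = env.reset(), False, 0.0
    while not done:
        # Epsilon-greedy action selection.
        if random.random() < epsilon:
            action = random.randint(0, 1)
        else:
            action = int(q_net(state).argmax().item())
        next_state, reward, done = env.step(action)
        # One-step temporal-difference target.
        with torch.no_grad():
            target = reward + (0.0 if done else gamma * q_net(next_state).max().item())
        loss = (q_net(state)[action] - target) ** 2
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()
        state, total = next_state, total + reward

print("DQN total value:", total, "| DP optimum:", knapsack_dp(values, weights, 50))
```

Once trained, the network decides each item with a single forward pass, which is how a learned policy can solve instances much faster than the capacity-dependent dynamic program at inference time.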
ORCID
YILDIZ, Beytullah / 0000-0001-7664-5145
Keywords
attention, combinatorial optimization problem, deep Q-learning, knapsack, reinforcement learning, transformer
Citation
1
WoS Q
Q3
Scopus Q
Q2
Volume
34
Issue
9