Deep Q-Learning With Q-Matrix Transfer Learning for Novel Fire Evacuation Environment

Cited: 59
Authors
Sharma, Jivitesh [1 ]
Andersen, Per-Arne [1 ]
Granmo, Ole-Christoffer [1 ]
Goodwin, Morten [1 ]
Affiliations
[1] Univ Agder, Ctr Artificial Intelligence Res, Dept Informat & Commun Technol, N-4879 Grimstad, Norway
Keywords
Deep Q-networks; double DQN (DDQN); DQN; dueling DQN; emergency management; evacuation; fire evacuation environment; pretraining; reinforcement learning (RL); transfer learning; CONVERGENCE; GAME; GO
DOI
10.1109/TSMC.2020.2967936
CLC Classification Number
TP [Automation Technology, Computer Technology]
Discipline Code
0812
Abstract
Deep reinforcement learning (RL) has achieved significant success in applications such as control, robotics, games, resource management, and scheduling. However, the important problem of emergency evacuation, which could clearly benefit from RL, has been largely unaddressed. Emergency evacuation is a complex task that is difficult to solve with RL: an emergency situation is highly dynamic, with many changing variables and complex constraints. Moreover, no standard benchmark environment exists for training RL agents on evacuation, and a realistic environment is complex to design. In this article, we propose the first fire evacuation environment for training RL agents in evacuation planning. The environment is modeled as a graph capturing the building structure and includes realistic features such as fire spread, uncertainty, and bottlenecks. It is implemented in the OpenAI Gym format to facilitate future research. We also propose a new RL approach that pretrains the network weights of a DQN-based agent [DQN/double DQN (DDQN)/dueling DQN] to incorporate information on the shortest paths to the exit. We achieve this by using tabular Q-learning to learn the shortest paths on the building model's graph and transferring this information to the network by deliberately overfitting it on the resulting Q-matrix. The pretrained DQN model is then trained on the fire evacuation environment to generate the optimal evacuation path under conditions that vary over time due to fire spread, bottlenecks, and uncertainty. We compare the proposed approach with state-of-the-art RL algorithms, including DQN, DDQN, dueling DQN, PPO, VPG, state-action-reward-state-action (SARSA), the actor-critic method, and ACKTR. The results show that our method outperforms these models, including the original DQN-based ones, by a large margin. Finally, our model is tested on a large and complex real building consisting of 91 rooms, in which the agent may move from any room to any other room, giving 8281 actions. To reduce the action space, we propose a strategy based on one-step simulation: an action importance vector is added to the final output of the pretrained DQN and acts like an attention mechanism. This strategy reduces the action space by 90.1% and allows the model to handle large action spaces. As a result, our model achieves near-optimal performance on the real-world emergency environment.
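The Q-matrix transfer described in the abstract can be illustrated with a short sketch: tabular Q-learning first learns shortest-path Q-values on the building graph, and a small network is then deliberately overfit on that Q-matrix before DQN fine-tuning. The following is a minimal sketch, not the authors' implementation; the 5-room toy graph, network size, hyperparameters, and the use of PyTorch are all assumptions made for illustration.

```python
import numpy as np
import torch
import torch.nn as nn

N_ROOMS, EXIT, GAMMA = 5, 4, 0.9
# adj[i, j] = 1 if an agent can move from room i to room j (toy graph)
adj = np.array([[0, 1, 0, 0, 0],
                [1, 0, 1, 1, 0],
                [0, 1, 0, 0, 1],
                [0, 1, 0, 0, 1],
                [0, 0, 1, 1, 0]])

# Step 1: tabular Q-learning of the shortest path to the exit.
Q = np.zeros((N_ROOMS, N_ROOMS))                      # state x action Q-matrix
rng = np.random.default_rng(0)
for _ in range(2000):
    s = rng.integers(N_ROOMS)
    while s != EXIT:
        a = rng.choice(np.flatnonzero(adj[s]))        # random valid move
        reward = 1.0 if a == EXIT else 0.0            # reward only at the exit
        Q[s, a] += 0.1 * (reward + GAMMA * Q[a].max() - Q[s, a])
        s = a

# Step 2: transfer by deliberately overfitting a small DQN on the Q-matrix.
net = nn.Sequential(nn.Linear(N_ROOMS, 64), nn.ReLU(),
                    nn.Linear(64, N_ROOMS))           # one Q-value per action
opt = torch.optim.Adam(net.parameters(), lr=1e-3)
states = torch.eye(N_ROOMS)                           # one-hot room encoding
targets = torch.tensor(Q, dtype=torch.float32)
for _ in range(3000):                                 # drive the loss toward 0
    loss = nn.functional.mse_loss(net(states), targets)
    opt.zero_grad(); loss.backward(); opt.step()

print(net(states).argmax(dim=1))      # greedy shortest-path action per room
```

After such pretraining, the network would be fine-tuned with ordinary DQN updates on the fire evacuation environment, where fire spread and bottlenecks make static shortest paths insufficient. The action-space reduction at the end of the abstract can be sketched in the same spirit: the paper describes an attention-like action importance vector obtained by one-step simulation, which the hypothetical helper below approximates with a hard additive mask, reusing adj from the sketch above.

```python
def action_importance(q_values, adj, fire, state):
    """Add a large negative importance to actions that a one-step
    check shows to be invalid (no edge) or unsafe (room on fire)."""
    importance = np.where((adj[state] == 1) & ~fire, 0.0, -1e9)
    return q_values + importance

q = np.array([0.2, 0.9, 0.1, 0.5, 0.3])               # DQN output for room 1
fire = np.array([False, False, True, False, False])   # room 2 is burning
best = action_importance(q, adj, fire, state=1).argmax()
print(best)   # -> 3: the invalid self-move (room 1) and burning room 2 are masked
```

A hard mask is only an approximation here; the paper's importance vector comes from one-step simulation of the environment, so this stand-in captures the effect on action selection rather than the exact computation.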
Pages: 7363-7381
Number of pages: 19
Related Papers (50 in total)
  • [41] Unregistered Biological Words Recognition by Q-Learning with Transfer Learning
    Zhu, Fei
    Liu, Quan
    Wang, Hui
    Zhou, Xiaoke
    Fu, Yuchen
    SCIENTIFIC WORLD JOURNAL, 2014
  • [42] A Novel Behavioral Strategy for RoboCode Platform Based on Deep Q-Learning
    Kayakoku, Hakan
    Guzel, Mehmet Serdar
    Bostanci, Erkan
    Medeni, Ihsan Tolga
    Mishra, Deepti
    COMPLEXITY, 2021, 2021
  • [43] Stabilizing deep Q-learning with Q-graph-based bounds
    Hoppe, Sabrina
    Giftthaler, Markus
    Krug, Robert
    Toussaint, Marc
    INTERNATIONAL JOURNAL OF ROBOTICS RESEARCH, 2023, 42 (09): 633-654
  • [44] Contextual Q-Learning
    Pinto, Tiago
    Vale, Zita
    ECAI 2020: 24TH EUROPEAN CONFERENCE ON ARTIFICIAL INTELLIGENCE, 2020, 325: 2927-2928
  • [45] Fuzzy Q-learning
    Glorennec, PY
    Jouffe, L
    PROCEEDINGS OF THE SIXTH IEEE INTERNATIONAL CONFERENCE ON FUZZY SYSTEMS, VOLS I-III, 1997: 659-662
  • [46] CVaR Q-Learning
    Stanko, Silvestr
    Macek, Karel
    COMPUTATIONAL INTELLIGENCE: 11th International Joint Conference, IJCCI 2019, Vienna, Austria, September 17-19, 2019, Revised Selected Papers, 2021, 922: 333-358
  • [47] Bayesian Q-learning
    Dearden, R
    Friedman, N
    Russell, S
    FIFTEENTH NATIONAL CONFERENCE ON ARTIFICIAL INTELLIGENCE (AAAI-98) AND TENTH CONFERENCE ON INNOVATIVE APPLICATIONS OF ARTIFICIAL INTELLIGENCE (IAAI-98) - PROCEEDINGS, 1998: 761-768
  • [48] Zap Q-Learning
    Devraj, Adithya M.
    Meyn, Sean P.
    ADVANCES IN NEURAL INFORMATION PROCESSING SYSTEMS 30 (NIPS 2017), 2017, 30
  • [49] Convex Q-Learning
    Lu, Fan
    Mehta, Prashant G.
    Meyn, Sean P.
    Neu, Gergely
    2021 AMERICAN CONTROL CONFERENCE (ACC), 2021: 4749-4756
  • [50] Q-learning and robotics
    Touzet, CF
    Santos, JM
    SIMULATION IN INDUSTRY 2001, 2001: 685-689