Deep Q-Learning With Q-Matrix Transfer Learning for Novel Fire Evacuation Environment

Cited by: 59
Authors
Sharma, Jivitesh [1 ]
Andersen, Per-Arne [1 ]
Granmo, Ole-Christoffer [1 ]
Goodwin, Morten [1 ]
Affiliations
[1] Univ Agder, Ctr Artificial Intelligence Res, Dept Informat & Commun Technol, N-4879 Grimstad, Norway
Keywords
Deep Q-networks; double DQN (DDQN); DQN; dueling DQN; emergency management; evacuation; fire evacuation environment; pretraining; reinforcement learning (RL); transfer learning; CONVERGENCE; GAME; GO;
DOI
10.1109/TSMC.2020.2967936
CLC Classification Number
TP [Automation Technology, Computer Technology];
Discipline Classification Code
0812;
Abstract
Deep reinforcement learning (RL) is achieving significant success in various applications like control, robotics, games, resource management, and scheduling. However, the important problem of emergency evacuation, which clearly could benefit from RL, has been largely unaddressed. Indeed, emergency evacuation is a complex task that is difficult to solve with RL. An emergency situation is highly dynamic, with many changing variables and complex constraints that make it challenging to solve. Also, there is no standard benchmark environment available that can be used to train RL agents for evacuation, and a realistic environment can be complex to design. In this article, we propose the first fire evacuation environment to train RL agents for evacuation planning. The environment is modeled as a graph capturing the building structure and incorporates realistic features like fire spread, uncertainty, and bottlenecks. Our environment is implemented in the OpenAI Gym format to facilitate future research. We also propose a new RL approach that entails pretraining the network weights of a DQN-based agent [DQN/double DQN (DDQN)/dueling DQN] to incorporate information on the shortest path to the exit. We achieve this by using tabular Q-learning to learn the shortest path on the building model's graph; this information is transferred to the network by deliberately overfitting it on the Q-matrix. The pretrained DQN model is then trained on the fire evacuation environment to generate the optimal evacuation path under time-varying conditions due to fire spread, bottlenecks, and uncertainty. We compare the proposed approach with state-of-the-art RL algorithms like DQN, DDQN, dueling DQN, PPO, VPG, state-action-reward-state-action (SARSA), the actor-critic method, and ACKTR. The results show that our method outperforms the state-of-the-art models, including the original DQN-based models, by a large margin. Finally, our model is tested on a large and complex real building consisting of 91 rooms, with the possibility of moving from any room to any other, giving 8281 actions. To reduce the action space, we propose a strategy that involves a one-step simulation: an action importance vector is added to the final output of the pretrained DQN and acts like an attention mechanism. Using this strategy, the action space is reduced by 90.1%, allowing the model to deal with large action spaces. Hence, our model achieves near-optimal performance on the real-world emergency environment.
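The Q-matrix transfer step described in the abstract can be illustrated with a minimal sketch: run tabular Q-learning on a toy building graph to learn the shortest path to the exit, then deliberately overfit a small network on the resulting Q-matrix before any environment training. This is not the authors' code; the graph, rewards, network size, and hyperparameters below are illustrative assumptions only.

```python
# Illustrative sketch (not the paper's implementation) of Q-matrix transfer learning:
# tabular Q-learning on a toy building graph, then regressing a DQN-style network
# onto the learned Q-matrix so its weights encode shortest-path information.
import numpy as np
import torch
import torch.nn as nn

# Toy building graph as an adjacency list; room 3 is the exit (assumed layout).
adj = {0: [1], 1: [0, 2], 2: [1, 3], 3: [3]}
n_rooms, exit_room = 4, 3

# Step 1: tabular Q-learning for the shortest path to the exit.
Q = np.zeros((n_rooms, n_rooms))            # Q[state, action]; an action is the next room
alpha, gamma = 0.5, 0.9
for _ in range(2000):
    s = np.random.randint(n_rooms)
    while s != exit_room:
        a = np.random.choice(adj[s])        # random exploration over valid moves
        r = 1.0 if a == exit_room else -0.1
        Q[s, a] += alpha * (r + gamma * Q[a].max() - Q[s, a])
        s = a

# Step 2: transfer by deliberately overfitting a small network on the Q-matrix.
net = nn.Sequential(nn.Linear(n_rooms, 64), nn.ReLU(), nn.Linear(64, n_rooms))
opt = torch.optim.Adam(net.parameters(), lr=1e-3)
states = torch.eye(n_rooms)                 # one-hot room encoding
targets = torch.tensor(Q, dtype=torch.float32)
for _ in range(3000):                       # many epochs on purpose: overfitting is the goal here
    loss = nn.functional.mse_loss(net(states), targets)
    opt.zero_grad(); loss.backward(); opt.step()

# The pretrained weights would then initialise the DQN/DDQN/dueling-DQN agent
# that is subsequently trained on the full fire evacuation environment.
print(net(states).argmax(dim=1))            # greedy next-room choice per room
```

In the paper's setting, the fine-tuning stage (fire spread, bottlenecks, uncertainty) and the action-importance masking for the 91-room building would follow this pretraining; they are omitted from the sketch.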
Pages: 7363-7381
Page count: 19