Deep Q-Learning With Q-Matrix Transfer Learning for Novel Fire Evacuation Environment

Cited by: 59
Authors
Sharma, Jivitesh [1 ]
Andersen, Per-Arne [1 ]
Granmo, Ole-Christoffer [1 ]
Goodwin, Morten [1 ]
Affiliations
[1] Univ Agder, Ctr Artificial Intelligence Res, Dept Informat & Commun Technol, N-4879 Grimstad, Norway
Keywords
Deep Q-networks; double DQN (DDQN); DQN; dueling DQN; emergency management; evacuation; fire evacuation environment; pretraining; reinforcement learning (RL); transfer learning; CONVERGENCE; GAME; GO
DOI
10.1109/TSMC.2020.2967936
Chinese Library Classification
TP [Automation Technology, Computer Technology]
Subject Classification Code
0812
Abstract
Deep reinforcement learning (RL) is achieving significant success in applications such as control, robotics, games, resource management, and scheduling. However, the important problem of emergency evacuation, which could clearly benefit from RL, has been largely unaddressed. Emergency evacuation is a complex task that is difficult to solve with RL: an emergency situation is highly dynamic, with many changing variables and complex constraints. Moreover, no standard benchmark environment is available for training RL agents for evacuation, and a realistic environment is complex to design. In this article, we propose the first fire evacuation environment for training RL agents in evacuation planning. The environment is modeled as a graph capturing the building structure and includes realistic features such as fire spread, uncertainty, and bottlenecks. Our environment is implemented in the OpenAI Gym format to facilitate future research. We also propose a new RL approach that pretrains the network weights of a DQN-based agent [DQN/double DQN (DDQN)/dueling DQN] to incorporate information about the shortest path to the exit. We achieve this by using tabular Q-learning to learn the shortest paths on the building's graph and then transferring this information to the network by deliberately overfitting the network on the resulting Q-matrix. The pretrained DQN model is then trained on the fire evacuation environment to generate the optimal evacuation path under time-varying conditions caused by fire spread, bottlenecks, and uncertainty. We compare the proposed approach with state-of-the-art RL algorithms: DQN, DDQN, dueling DQN, PPO, VPG, state-action-reward-state-action (SARSA), the actor-critic method, and ACKTR. The results show that our method outperforms the state-of-the-art models, including the original DQN-based models, by a large margin. Finally, our model is tested on a large and complex real building consisting of 91 rooms, where the agent can move from any room to any other, giving 91 x 91 = 8281 actions. To reduce the action space, we propose a strategy based on one-step simulation: an action importance vector is added to the final output of the pretrained DQN, acting like an attention mechanism. Using this strategy, the action space is reduced by 90.1%, allowing the model to handle large action spaces. As a result, our model achieves near-optimal performance on the real-world emergency environment.
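The transfer step described above (tabular Q-learning on the building graph, then deliberately overfitting a DQN-style network on the resulting Q-matrix) can be illustrated with a minimal sketch. The 5-room toy graph, reward values, network size, and hyperparameters below are illustrative assumptions, not the paper's actual building model or settings:

```python
# Hedged sketch of Q-matrix transfer: (1) tabular Q-learning learns the
# shortest path to the exit on a toy building graph, (2) a small DQN-style
# network is deliberately overfitted on the resulting Q-matrix so it starts
# environment training with shortest-path knowledge.
import numpy as np
import torch
import torch.nn as nn

# Assumed toy building graph: rooms 0-3 are ordinary rooms, room 4 is the exit.
adj = {0: [1, 2], 1: [0, 3], 2: [0, 3], 3: [1, 2, 4], 4: [4]}
n = 5
EXIT = 4

# ---- Step 1: tabular Q-learning of shortest paths to the exit ----
Q = np.zeros((n, n))                 # Q[state, action = next room]
alpha, gamma, episodes = 0.1, 0.9, 2000
rng = np.random.default_rng(0)
for _ in range(episodes):
    s = rng.integers(0, n)
    while s != EXIT:
        a = rng.choice(adj[s])       # random exploration suffices offline
        r = 100.0 if a == EXIT else -1.0  # assumed reward shaping
        Q[s, a] += alpha * (r + gamma * Q[a].max() - Q[s, a])
        s = a

# ---- Step 2: deliberately overfit a small network on the Q-matrix ----
net = nn.Sequential(nn.Linear(n, 16), nn.ReLU(), nn.Linear(16, n))
opt = torch.optim.Adam(net.parameters(), lr=1e-2)
states = torch.eye(n)                        # one-hot state encoding
targets = torch.tensor(Q, dtype=torch.float32)
for _ in range(3000):                        # many epochs: deliberate overfit
    opt.zero_grad()
    loss = nn.functional.mse_loss(net(states), targets)
    loss.backward()
    opt.step()

# The pretrained net now encodes the shortest-path policy and would next be
# fine-tuned on the fire evacuation environment (fire spread, bottlenecks).
print(net(states).argmax(dim=1))  # greedy next room per state, toward the exit
```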
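The action-space reduction via one-step simulation can likewise be sketched as an additive mask over the network's Q-output. The scoring rule here (zero for rooms adjacent to the current one, a large negative penalty otherwise) and the helper name `action_importance` are assumed stand-ins for the paper's action importance vector, not its exact mechanism:

```python
# Hedged sketch of the action-importance idea: a one-step lookahead scores
# each action, and the score vector is added to the Q-output so that
# implausible actions (e.g., non-adjacent rooms) never win the argmax.
import numpy as np

def action_importance(state, adj, n_actions, penalty=-1e6):
    """One-step lookahead: zero score for rooms adjacent to `state`,
    a large negative penalty for everything else."""
    imp = np.full(n_actions, penalty)
    for a in adj[state]:
        imp[a] = 0.0
    return imp

# Toy usage with the 5-room graph from the previous sketch and stand-in
# Q-values for room 1; invalid moves are masked out of the argmax.
adj = {0: [1, 2], 1: [0, 3], 2: [0, 3], 3: [1, 2, 4], 4: [4]}
q_values = np.array([5.0, 70.0, 68.0, 89.0, 0.0])  # assumed network output
best = int(np.argmax(q_values + action_importance(1, adj, 5)))
print(best)  # -> 3: the neighbor of room 1 on the shortest path to the exit
```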
Pages: 7363-7381
Number of pages: 19