Proxemics-based deep reinforcement learning for robot navigation in continuous action space

Cited: 1
Authors
Cimurs R. [1 ]
Suh I.-H. [2 ]
Affiliations
[1] Department of Intelligent Robot Engineering, Hanyang University
[2] Department of Electronics and Computer Engineering, Hanyang University
Keywords
Deep reinforcement learning; Proxemics-based navigation; Socially aware navigation;
DOI
10.5302/J.ICROS.2020.19.0225
Abstract
This paper presents a deep reinforcement learning approach for learning robot navigation in a continuous action space with motion behavior based on human proxemics. We extended a deep deterministic policy gradient network with convolutional layers to handle motion over multiple timesteps. A proxemics-based cost function, which respects the personal and intimate space of a human, was developed and applied during learning so that the robot obtains the desired socially aware navigation behavior. Experiments performed in simulated and real environments exhibited the desired behavior. Furthermore, intrusions into the proxemic zones of a human were significantly reduced compared with similar learned robot navigation approaches. © ICROS 2020.
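The abstract describes a cost function that penalizes intrusion into a human's personal and intimate proxemic zones. The paper's exact formulation and parameters are not given here, so the following is only a minimal sketch: it assumes Hall's conventional zone radii (intimate ≈ 0.45 m, personal ≈ 1.2 m) and a linear penalty ramp, both hypothetical choices.

```python
# Hedged sketch of a proxemics-based cost term for a navigation reward.
# Zone radii follow Hall's proxemics conventions; the penalty weights
# and the linear ramp are illustrative assumptions, not the paper's values.
INTIMATE_RADIUS = 0.45   # meters
PERSONAL_RADIUS = 1.20   # meters


def proxemics_cost(distance_to_human: float,
                   personal_penalty: float = -2.0,
                   intimate_penalty: float = -10.0) -> float:
    """Negative reward for intruding into a human's proxemic zones.

    Outside the personal zone the cost is zero. Inside it, the penalty
    grows linearly from 0 at the personal boundary to `personal_penalty`
    at the intimate boundary; entering the intimate zone adds a large
    fixed penalty on top.
    """
    if distance_to_human >= PERSONAL_RADIUS:
        return 0.0
    # Linear ramp over the personal zone, capped at 1 inside the intimate zone.
    ramp = (PERSONAL_RADIUS - distance_to_human) / (PERSONAL_RADIUS - INTIMATE_RADIUS)
    cost = personal_penalty * min(ramp, 1.0)
    if distance_to_human < INTIMATE_RADIUS:
        cost += intimate_penalty
    return cost
```

In a DDPG-style setup, such a term would be added to the per-step reward (alongside goal-progress and collision terms) so the learned policy trades off path efficiency against social comfort.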
Pages: 168–176
Page count: 8
Related Papers
50 records
  • [31] Adversarial Attacks on Multiagent Deep Reinforcement Learning Models in Continuous Action Space
    Zhou, Ziyuan
    Liu, Guanjun
    Guo, Weiran
    Zhou, MengChu
    IEEE TRANSACTIONS ON SYSTEMS MAN CYBERNETICS-SYSTEMS, 2024
  • [32] Active Exploration Deep Reinforcement Learning for Continuous Action Space with Forward Prediction
    Zhao, Dongfang
    Huanshi, Xu
    Xun, Zhang
    INTERNATIONAL JOURNAL OF COMPUTATIONAL INTELLIGENCE SYSTEMS, 2024, 17 (01)
  • [34] Switching reinforcement learning for continuous action space
    Nagayoshi, Masato
    Murao, Hajime
    Tamaki, Hisashi
    ELECTRONICS AND COMMUNICATIONS IN JAPAN, 2012, 95 (03) : 37 - 44
  • [35] Robot navigation in crowds via deep reinforcement learning with modeling of obstacle uni-action
    Lu, Xiaojun
    Woo, Hanwool
    Faragasso, Angela
    Yamashita, Atsushi
    Asama, Hajime
    ADVANCED ROBOTICS, 2023, 37 (04) : 257 - 269
  • [36] Bayesian reinforcement learning in continuous POMDPs with application to robot navigation
    Ross, Stephane
    Chaib-draa, Brahim
    Pineau, Joelle
    2008 IEEE INTERNATIONAL CONFERENCE ON ROBOTICS AND AUTOMATION, VOLS 1-9, 2008, : 2845 - +
  • [37] Energy management of hybrid electric bus based on deep reinforcement learning in continuous state and action space
    Tan, Huachun
    Zhang, Hailong
    Peng, Jiankun
    Jiang, Zhuxi
    Wu, Yuankai
    ENERGY CONVERSION AND MANAGEMENT, 2019, 195 : 548 - 560
  • [38] A Deep Reinforcement Learning Based Mapless Navigation Algorithm Using Continuous Actions
    Duo Nanxun
    Wang Qinzhao
    Lv Qiang
    Wei Heng
    Zhang Pei
    2019 INTERNATIONAL CONFERENCE ON ROBOTS & INTELLIGENT SYSTEM (ICRIS 2019), 2019, : 63 - 68
  • [39] Action Space Shaping in Deep Reinforcement Learning
    Kanervisto, Anssi
    Scheller, Christian
    Hautamaki, Ville
    2020 IEEE CONFERENCE ON GAMES (IEEE COG 2020), 2020, : 479 - 486
  • [40] Robot Navigation in Crowded Environments Using Deep Reinforcement Learning
    Liu, Lucia
    Dugas, Daniel
    Cesari, Gianluca
    Siegwart, Roland
    Dube, Renaud
    2020 IEEE/RSJ INTERNATIONAL CONFERENCE ON INTELLIGENT ROBOTS AND SYSTEMS (IROS), 2020, : 5671 - 5677