Sensor Fusion for Robot Control through Deep Reinforcement Learning

Cited by: 0
Authors
Bohez, Steven [1 ]
Verbelen, Tim [1 ]
De Coninck, Elias [1 ]
Vankeirsbilck, Bert [1 ]
Simoens, Pieter [1 ]
Dhoedt, Bart [1 ]
Affiliation
[1] Univ Ghent, Imec, IDLab, Dept Informat Technol, Ghent, Belgium
Keywords
DOI
Not available
Chinese Library Classification (CLC)
TP18 [Artificial Intelligence Theory];
Discipline classification codes
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
Deep reinforcement learning is becoming increasingly popular for robot control, the aim being that a robot learns useful feature representations from unstructured sensory input and, from these, an optimal actuation policy. In addition to sensors mounted on the robot, sensors may also be deployed in the environment, although these might have to be accessed over an unreliable wireless connection. In this paper, we demonstrate deep neural network architectures that can fuse information generated by multiple sensors and are robust to sensor failures at runtime. We evaluate our method on a search-and-pick task for a robot, both in simulation and in the real world.
Pages: 2365-2370
Number of pages: 6
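
The abstract describes network architectures that fuse several sensor streams and remain usable when individual sensors fail at runtime. The sketch below illustrates one common way to realize that idea in PyTorch: a per-sensor encoder, concatenation-based fusion, and random masking of whole sensor encodings during training so the Q-network learns not to depend on any single stream. The class name SensorFusionQNet, the layer sizes, and the sensor-dropout mechanism are illustrative assumptions, not the architecture published in the paper.

# Minimal sketch of a multi-sensor fusion Q-network with simulated sensor
# failures during training (assumed mechanism, not the paper's exact design).
import torch
import torch.nn as nn


class SensorFusionQNet(nn.Module):
    def __init__(self, sensor_dims, n_actions, feat_dim=64, p_drop_sensor=0.2):
        super().__init__()
        # One small encoder per sensor modality (e.g. an on-board sensor and an
        # external sensor received over an unreliable wireless link).
        self.encoders = nn.ModuleList(
            nn.Sequential(nn.Linear(d, feat_dim), nn.ReLU()) for d in sensor_dims
        )
        self.p_drop_sensor = p_drop_sensor
        self.head = nn.Sequential(
            nn.Linear(feat_dim * len(sensor_dims), 128),
            nn.ReLU(),
            nn.Linear(128, n_actions),
        )

    def forward(self, observations, sensor_mask=None):
        # observations: list of tensors, one per sensor, each (batch, dim).
        # sensor_mask: optional (batch, n_sensors) 0/1 tensor marking which
        # sensors actually delivered data this step.
        feats = [enc(x) for enc, x in zip(self.encoders, observations)]
        feats = torch.stack(feats, dim=1)              # (batch, n_sensors, feat_dim)
        if sensor_mask is None and self.training:
            # Simulate random sensor failures during training ("sensor dropout").
            keep = (torch.rand(feats.shape[:2], device=feats.device)
                    > self.p_drop_sensor).float()
            sensor_mask = keep
        if sensor_mask is not None:
            feats = feats * sensor_mask.unsqueeze(-1)  # zero out failed sensors
        fused = feats.flatten(start_dim=1)             # concatenation fusion
        return self.head(fused)                        # Q-values per action


if __name__ == "__main__":
    net = SensorFusionQNet(sensor_dims=[16, 32], n_actions=5)
    obs = [torch.randn(4, 16), torch.randn(4, 32)]
    q_train = net(obs)                                 # training: random sensor dropout
    net.eval()
    q_degraded = net(obs, sensor_mask=torch.tensor([[1., 0.]] * 4))
    print(q_train.shape, q_degraded.shape)             # torch.Size([4, 5]) twice

At runtime, the same sensor_mask input can be driven by the actual connection status, so a dropped wireless link simply zeroes that sensor's contribution instead of corrupting the fused feature.
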
Related papers
50 records in total
  • [31] Human-level control through deep reinforcement learning
    Mnih, Volodymyr
    Kavukcuoglu, Koray
    Silver, David
    Rusu, Andrei A.
    Veness, Joel
    Bellemare, Marc G.
    Graves, Alex
    Riedmiller, Martin
    Fidjeland, Andreas K.
    Ostrovski, Georg
    Petersen, Stig
    Beattie, Charles
    Sadik, Amir
    Antonoglou, Ioannis
    King, Helen
    Kumaran, Dharshan
    Wierstra, Daan
    Legg, Shane
    Hassabis, Demis
    [J]. NATURE, 2015, 518 (7540) : 529 - 533
  • [33] Formation Control with Collision Avoidance through Deep Reinforcement Learning
    Sui, Zezhi
    Pu, Zhiqiang
    Yi, Jianqiang
    Xiong, Tianyi
    [J]. 2019 INTERNATIONAL JOINT CONFERENCE ON NEURAL NETWORKS (IJCNN), 2019,
  • [34] Confidence-Based Robot Navigation Under Sensor Occlusion with Deep Reinforcement Learning
    Ryu, Hyeongyeol
    Yoon, Minsung
    Park, Daehyung
    Yoon, Sung-eui
    [J]. 2022 IEEE INTERNATIONAL CONFERENCE ON ROBOTICS AND AUTOMATION, ICRA 2022, 2022, : 8231 - 8237
  • [35] Deep Reinforcement Learning for Humanoid Robot Dribbling
    Muzio, Alexandre F. V.
    Maximo, Marcos R. O. A.
    Yoneyama, Takashi
    [J]. 2020 XVIII LATIN AMERICAN ROBOTICS SYMPOSIUM, 2020 XII BRAZILIAN SYMPOSIUM ON ROBOTICS AND 2020 XI WORKSHOP OF ROBOTICS IN EDUCATION (LARS-SBR-WRE 2020), 2020, : 246 - 251
  • [36] Deep Reinforcement Learning for Snake Robot Locomotion
    Shi, Junyao
    Dear, Tony
    Kelly, Scott David
    [J]. IFAC PAPERSONLINE, 2020, 53 (02): 9688 - 9695
  • [37] Path Planning for Mobile Robot Based on Deep Reinforcement Learning and Fuzzy Control
    Liu, Chunling
    Xu, Jun
    Guo, Kaiwen
    [J]. 2022 INTERNATIONAL CONFERENCE ON IMAGE PROCESSING, COMPUTER VISION AND MACHINE LEARNING (ICICML), 2022, : 533 - 537
  • [38] Control of Nameplate Pasting Robot for Sand Mold Based on Deep Reinforcement Learning
    Tuo, Guiben
    Li, Te
    Qin, Haibo
    Huang, Bin
    Liu, Kuo
    Wang, Yongqing
    [J]. INTELLIGENT ROBOTICS AND APPLICATIONS, ICIRA 2019, PART VI, 2019, 11745 : 368 - 378
  • [39] A Disturbance Rejection Control Method Based on Deep Reinforcement Learning for a Biped Robot
    Liu, Chuzhao
    Gao, Junyao
    Tian, Dingkui
    Zhang, Xuefeng
    Liu, Huaxin
    Meng, Libo
    [J]. APPLIED SCIENCES-BASEL, 2021, 11 (04): 1 - 17
  • [40] Deep Reinforcement Learning for Mobile Robot Navigation
    Gromniak, Martin
    Stenzel, Jonas
    [J]. 2019 4TH ASIA-PACIFIC CONFERENCE ON INTELLIGENT ROBOT SYSTEMS (ACIRS 2019), 2019, : 68 - 73