Quantum Deep Reinforcement Learning for Robot Navigation Tasks

Cited by: 0
|
Authors
Hohenfeld, Hans [1 ]
Heimann, Dirk [1 ]
Wiebe, Felix [2 ,3 ]
Kirchner, Frank [1 ,2 ,3 ]
Affiliations
[1] Univ Bremen, Robot Res Grp, D-28359 Bremen, Germany
[2] Robot Innovat Ctr RIC, D-28359 Bremen, Germany
[3] German Res Ctr Artificial Intelligence DFKI, D-28359 Bremen, Germany
Source
IEEE ACCESS | 2024, Vol. 12
Keywords
Task analysis; Quantum mechanics; Quantum circuit; Deep reinforcement learning; Reinforcement learning; Encoding; autonomous agents; robotics; quantum machine learning; quantum computing
DOI
10.1109/ACCESS.2024.3417808
CLC Number
TP [Automation Technology, Computer Technology];
Discipline Code
0812 ;
Abstract
We utilize hybrid quantum deep reinforcement learning to learn navigation tasks for a simple wheeled robot in simulated environments of increasing complexity. To this end, we train parameterized quantum circuits (PQCs) with two different encoding strategies in a hybrid quantum-classical setup, as well as a classical neural network baseline, with the double deep Q-network (DDQN) reinforcement learning algorithm. Quantum deep reinforcement learning (QDRL) has previously been studied in several relatively simple benchmark environments, mainly from the OpenAI Gym suite. However, the scaling behavior and applicability of QDRL to more demanding tasks closer to real-world problems, e.g., from the robotics domain, have not been studied before. Here, we show that quantum circuits in hybrid quantum-classical reinforcement learning setups are capable of learning optimal policies in multiple robotic navigation scenarios with notably fewer trainable parameters than a classical baseline. Across a large number of experimental configurations, we find that the employed quantum circuits outperform the classical neural network baselines when matched for the number of trainable parameters. Nevertheless, the classical neural network consistently achieved better training times and stability, albeit with at least one order of magnitude more trainable parameters than the best-performing quantum circuits. Moreover, when validating the robustness of the learning methods in a large and dynamic environment, we find that the classical baseline produces more stable and better-performing policies overall. Regarding the two encoding schemes, we observed better results when encoding the classical state vector consecutively on each qubit than when encoding each component on a separate qubit.
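The two encoding strategies compared in the abstract can be illustrated with a minimal NumPy sketch. This is not the authors' implementation; it only contrasts the two ideas on single-qubit RY rotations: encoding each component of a (hypothetical) classical state vector on its own qubit versus applying all components consecutively as rotations on one qubit, where later rotations compose with earlier ones.

```python
import numpy as np

def ry(theta):
    # Single-qubit RY rotation matrix acting on a 2-dim state vector
    c, s = np.cos(theta / 2.0), np.sin(theta / 2.0)
    return np.array([[c, -s], [s, c]])

ket0 = np.array([1.0, 0.0])          # |0> basis state
state = np.array([0.3, -0.7, 1.2])   # hypothetical 3-dim classical state

# Scheme A: one state component per qubit (product state over 3 qubits)
per_qubit = [ry(x) @ ket0 for x in state]

# Scheme B: all components encoded consecutively on a single qubit;
# RY rotations about the same axis compose, so the angles accumulate
consecutive = ket0
for x in state:
    consecutive = ry(x) @ consecutive
```

Because all rotations share the same axis, scheme B on one qubit is equivalent to a single rotation by the summed angles; in a real PQC, trainable layers interleaved between the encoding rotations break this degeneracy.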
Our findings demonstrate that current hybrid quantum machine learning approaches can scale to simple robotic problems and yield adequate results, at least in an idealized simulated setting, but open questions remain regarding their application to considerably more demanding tasks. We anticipate that our work will contribute to introducing quantum machine learning in general, and quantum deep reinforcement learning in particular, to more demanding problem domains, and it emphasizes the importance of encoding techniques for classical data in hybrid quantum-classical settings.
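The DDQN algorithm named in the abstract differs from vanilla DQN only in how the bootstrap target is formed: the online network selects the next action, while the target network evaluates it. A minimal sketch of that target computation (function name and arguments are illustrative, not from the paper):

```python
import numpy as np

def ddqn_target(reward, q_online_next, q_target_next, gamma=0.99, done=False):
    """Double DQN bootstrap target for one transition.

    q_online_next / q_target_next: Q-value vectors over actions at the
    next state, from the online and target networks respectively.
    """
    # Online network picks the greedy next action...
    a_star = int(np.argmax(q_online_next))
    # ...but the target network supplies its value, reducing the
    # overestimation bias of plain DQN's max over one network.
    return reward + (0.0 if done else gamma * q_target_next[a_star])
```

In the hybrid setup described above, the same target would be used whether the Q-function is a classical neural network or a PQC with a classical readout.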
Pages: 87217-87236
Page count: 20
Related Papers
50 records in total
  • [31] Reinforcement learning in nonstationary environment navigation tasks
    Lane, Terran
    Ridens, Martin
    Stevens, Scott
    [J]. ADVANCES IN ARTIFICIAL INTELLIGENCE, 2007, 4509 : 429 - +
  • [32] Robot Navigation of Environments with Unknown Rough Terrain Using Deep Reinforcement Learning
    Zhang, Kaicheng
    Niroui, Farzad
    Ficocelli, Maurizio
    Nejat, Goldie
    [J]. 2018 IEEE INTERNATIONAL SYMPOSIUM ON SAFETY, SECURITY, AND RESCUE ROBOTICS (SSRR), 2018,
  • [33] All Aware Robot Navigation in Human Environments Using Deep Reinforcement Learning
    Lu, Xiaojun
    Faragasso, Angela
    Yamashita, Atsushi
    Asama, Hajime
    [J]. 2023 IEEE/RSJ INTERNATIONAL CONFERENCE ON INTELLIGENT ROBOTS AND SYSTEMS (IROS), 2023, : 5989 - 5996
  • [34] Connectivity Guaranteed Multi-robot Navigation via Deep Reinforcement Learning
    Lin, Juntong
    Yang, Xuyun
    Zheng, Peiwei
    Cheng, Hui
    [J]. CONFERENCE ON ROBOT LEARNING, VOL 100, 2019, 100
  • [35] Decentralized Structural-RNN for Robot Crowd Navigation with Deep Reinforcement Learning
    Liu, Shuijing
    Chang, Peixin
    Liang, Weihang
    Chakraborty, Neeloy
    Driggs-Campbell, Katherine
    [J]. 2021 IEEE INTERNATIONAL CONFERENCE ON ROBOTICS AND AUTOMATION (ICRA 2021), 2021, : 3517 - 3524
  • [36] Deep Reinforcement Learning for Autonomous Map-Less Navigation of a Flying Robot
    Doukhi, Oualid
    Lee, Deok Jin
    [J]. IEEE ACCESS, 2022, 10 : 82964 - 82976
  • [37] CBNAV: Costmap Based Approach to Deep Reinforcement Learning Mobile Robot Navigation
    Tomasi Junior, Darci Luiz
    Todt, Eduardo
    [J]. 2021 LATIN AMERICAN ROBOTICS SYMPOSIUM / 2021 BRAZILIAN SYMPOSIUM ON ROBOTICS / 2021 WORKSHOP OF ROBOTICS IN EDUCATION (LARS-SBR-WRE 2021), 2021, : 324 - 329
  • [38] Cooperative Multi-Robot Navigation in Dynamic Environment with Deep Reinforcement Learning
    Han, Ruihua
    Chen, Shengduo
    Hao, Qi
    [J]. 2020 IEEE INTERNATIONAL CONFERENCE ON ROBOTICS AND AUTOMATION (ICRA), 2020, : 448 - 454
  • [39] A Behavior-Based Mobile Robot Navigation Method with Deep Reinforcement Learning
    Li, Juncheng
    Ran, Maopeng
    Wang, Han
    Xie, Lihua
    [J]. UNMANNED SYSTEMS, 2021, 9 (03) : 201 - 209
  • [40] Autonomous Navigation by Mobile Robot with Sensor Fusion Based on Deep Reinforcement Learning
    Ou, Yang
    Cai, Yiyi
    Sun, Youming
    Qin, Tuanfa
    [J]. SENSORS, 2024, 24 (12)