Quantum Deep Reinforcement Learning for Robot Navigation Tasks

Cited by: 0
Authors
Hohenfeld, Hans [1]
Heimann, Dirk [1]
Wiebe, Felix [2,3]
Kirchner, Frank [1,2,3]
Affiliations
[1] Univ Bremen, Robot Res Grp, D-28359 Bremen, Germany
[2] Robot Innovat Ctr RIC, D-28359 Bremen, Germany
[3] German Res Ctr Artificial Intelligence DFKI, D-28359 Bremen, Germany
Source
IEEE ACCESS, 2024, Vol. 12
Keywords
Task analysis; quantum mechanics; quantum circuit; deep reinforcement learning; reinforcement learning; encoding; autonomous agents; robotics; quantum machine learning; quantum computing
DOI
10.1109/ACCESS.2024.3417808
Chinese Library Classification
TP [Automation technology; computer technology]
Subject classification code
0812
Abstract
We utilize hybrid quantum deep reinforcement learning to learn navigation tasks for a simple, wheeled robot in simulated environments of increasing complexity. For this, we train parameterized quantum circuits (PQCs) with two different encoding strategies in a hybrid quantum-classical setup, as well as a classical neural network baseline, with the double deep Q-network (DDQN) reinforcement learning algorithm. Quantum deep reinforcement learning (QDRL) has previously been studied in several relatively simple benchmark environments, mainly from the OpenAI Gym suite. However, the scaling behavior and applicability of QDRL to more demanding tasks closer to real-world problems, e.g., from the robotics domain, have not been studied before. Here, we show that quantum circuits in hybrid quantum-classical reinforcement learning setups are capable of learning optimal policies in multiple robotic navigation scenarios with notably fewer trainable parameters than a classical baseline. Across a large number of experimental configurations, we find that the employed quantum circuits outperform the classical neural network baselines when controlling for the number of trainable parameters. Nevertheless, the classical neural network consistently achieved better training times and stability, albeit with at least an order of magnitude more trainable parameters than the best-performing quantum circuits. When validating the robustness of the learning methods in a large and dynamic environment, we find that the classical baseline produces more stable and better-performing policies overall. For the two encoding schemes, we observed better results when encoding the classical state vector consecutively on each qubit than when encoding each component on a separate qubit.
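The DDQN update underlying both the PQC and the classical agents can be illustrated with a minimal sketch. This is not the paper's implementation: plain NumPy lookup tables stand in for the PQC and neural-network Q-functions, and all states, actions, and values are toy placeholders; only the Double DQN target computation itself is faithful.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy Q-functions as (states x actions) tables. In the paper these are
# a parameterized quantum circuit or a classical MLP; arrays stand in here.
n_states, n_actions, gamma = 4, 2, 0.99
q_online = rng.normal(size=(n_states, n_actions))
q_target = q_online.copy()  # periodically synced copy of the online net

def ddqn_target(reward, next_state, done):
    """Double DQN target: the online net selects the next action,
    the target net evaluates it, reducing Q-value overestimation."""
    a_star = int(np.argmax(q_online[next_state]))  # action selection
    bootstrap = q_target[next_state, a_star]       # action evaluation
    return reward + gamma * (0.0 if done else bootstrap)

y = ddqn_target(reward=1.0, next_state=2, done=False)
```

The decoupling of action selection (online net) from action evaluation (target net) is what distinguishes DDQN from vanilla DQN, where a single `max` over the target net does both.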
Our findings demonstrate that current hybrid quantum machine learning approaches can be scaled to simple robotic problems with satisfactory results, at least in an idealized simulated setting, but open questions remain regarding their application to considerably more demanding tasks. We anticipate that our work will contribute to introducing quantum machine learning in general, and quantum deep reinforcement learning in particular, to more demanding problem domains, and emphasize the importance of encoding techniques for classical data in hybrid quantum-classical settings.
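The two data-encoding schemes the abstract contrasts can be sketched at the statevector level. This is an assumption-laden illustration of the encoding step alone (NumPy only; the paper's variational layers, entangling gates, and exact rotation choices are omitted, and the RY gate here is one common choice for angle encoding, not necessarily the authors'):

```python
import numpy as np

def ry(theta):
    """Single-qubit RY rotation matrix."""
    c, s = np.cos(theta / 2), np.sin(theta / 2)
    return np.array([[c, -s], [s, c]])

x = np.array([0.3, 1.1, -0.4])  # toy classical state vector

# Scheme A: each component on a separate qubit -> len(x) qubits,
# a 2**len(x)-dimensional statevector built as a tensor product.
state_per_qubit = np.array([1.0])
for xi in x:
    state_per_qubit = np.kron(state_per_qubit, ry(xi) @ np.array([1.0, 0.0]))

# Scheme B: all components encoded consecutively on a single qubit.
state_consecutive = np.array([1.0, 0.0])
for xi in x:
    state_consecutive = ry(xi) @ state_consecutive
```

Note that bare consecutive RY rotations commute and collapse to a single rotation by `sum(x)`; in practical circuits, trainable variational layers are interleaved between the encoding gates (data re-uploading), which keeps the individual features distinguishable while using far fewer qubits than Scheme A.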
Pages: 87217-87236
Page count: 20