Control of Magnetic Surgical Robots With Model-Based Simulators and Reinforcement Learning

Cited: 0
Authors
Barnoy, Yotam [1 ]
Erin, Onder [2 ]
Raval, Suraj [3 ]
Pryor, Will [2 ]
Mair, Lamar O. [4 ]
Weinberg, Irving N. [4 ]
Diaz-Mercado, Yancy [3 ]
Krieger, Axel [2 ]
Hager, Gregory D. [1 ]
Affiliations
[1] Johns Hopkins Univ, Dept Comp Sci, Baltimore, MD 21287 USA
[2] Johns Hopkins Univ, Dept Mech Engn, Baltimore, MD 21287 USA
[3] Univ Maryland, Dept Mech Engn, College Pk, MD 20742 USA
[4] Weinberg Med Phys, Div Magnet Manipulat & Particle Res, North Bethesda, MD 20852 USA
Funding
US National Institutes of Health;
Keywords
Magnetic robots; surgical robotics; autonomous control; reinforcement learning;
DOI
10.1109/TMRB.2022.3214426
CLC Number
R318 [Biomedical Engineering];
Discipline Code
0831;
Abstract
Magnetically manipulated medical robots are a promising alternative to current robotic platforms, allowing for miniaturization and tetherless actuation. Controlling such systems autonomously may enable safe, accurate operation. However, classical control methods require rigorous models of magnetic fields, robot dynamics, and robot environments, which can be difficult to generate. Model-free reinforcement learning (RL) offers an alternative that can bypass these requirements. We apply RL to a robotic magnetic needle manipulation system. Reinforcement learning algorithms often require long runtimes, making them impractical for many surgical robotics applications, most of which require careful, constant monitoring. Our approach first constructs a model-based simulation (MBS) from guided real-world exploration, learning the dynamics of the environment. After intensive training in the MBS environment, we transfer the learned behavior from the MBS environment to the real world. Our MBS method applies RL roughly 200 times faster than doing so in the real world, and achieves a 6 mm root-mean-square (RMS) error for a square reference trajectory. In comparison, pure simulation-based approaches fail to transfer, producing a 31 mm RMS error. These results demonstrate that MBS environments are a good solution for domains where running model-free RL is impractical, especially if an accurate simulation is not available.
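The abstract's pipeline — guided real-world exploration, fitting a model-based simulator to the collected transitions, intensive policy training inside the simulator, then transfer back to the real system — can be sketched in miniature. Everything below is illustrative and not from the paper: toy 2-D linear dynamics stand in for the magnetic needle system, least squares stands in for the paper's learned dynamics model, and random search over linear feedback gains stands in for its RL algorithm.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical stand-in for the real system: unknown linear dynamics
# x' = A x + B u + noise (illustration only; not the paper's model).
A_true = np.array([[1.0, 0.1], [0.0, 0.9]])
B_true = np.array([[0.0], [0.1]])

def real_step(x, u):
    return A_true @ x + B_true @ u + 0.01 * rng.standard_normal(2)

# 1) Guided real-world exploration: collect (x, u, x') transitions.
X, U, Xn = [], [], []
x = np.zeros(2)
for _ in range(500):
    u = rng.uniform(-1, 1, size=1)
    xn = real_step(x, u)
    X.append(x); U.append(u); Xn.append(xn)
    x = xn

# 2) Fit the model-based simulator: least squares for [A B].
Phi = np.hstack([np.array(X), np.array(U)])        # (N, 3) regressors
Theta, *_ = np.linalg.lstsq(Phi, np.array(Xn), rcond=None)
A_hat, B_hat = Theta.T[:, :2], Theta.T[:, 2:]

def sim_step(x, u):
    return A_hat @ x + B_hat @ u                   # learned dynamics

# 3) "Train" in simulation: random search over feedback gains K
#    (a placeholder for a real RL algorithm, to keep the sketch short).
def rollout_cost(K, step, x0, target, T=50):
    x, cost = x0.copy(), 0.0
    for _ in range(T):
        u = -K @ (x - target)
        x = step(x, u)
        cost += float(np.sum((x - target) ** 2))
    return cost

target = np.array([1.0, 0.0])
best_K, best_cost = None, np.inf
for _ in range(200):
    K = rng.uniform(-2, 2, size=(1, 2))
    c = rollout_cost(K, sim_step, np.zeros(2), target)
    if c < best_cost:
        best_K, best_cost = K, c

# 4) Transfer: evaluate the simulator-trained policy on the "real" system.
real_cost = rollout_cost(best_K, real_step, np.zeros(2), target)
print(f"sim cost {best_cost:.2f}, real cost {real_cost:.2f}")
```

The key property the sketch exercises is the one the abstract claims: all expensive trial-and-error happens against `sim_step`, and the real system is touched only for the initial exploration data and the final evaluation.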
Pages: 945-956 (12 pages)
Related Papers
50 records
  • [1] Model-Based Reinforcement Learning for Trajectory Tracking of Musculoskeletal Robots
    Xu, Haoran
    Fan, Jianyin
    Wang, Qiang
    [J]. 2023 IEEE INTERNATIONAL INSTRUMENTATION AND MEASUREMENT TECHNOLOGY CONFERENCE, I2MTC, 2023,
  • [2] Model-Based Reinforcement Learning For Robot Control
    Li, Xiang
    Shang, Weiwei
    Cong, Shuang
    [J]. 2020 5TH INTERNATIONAL CONFERENCE ON ADVANCED ROBOTICS AND MECHATRONICS (ICARM 2020), 2020, : 300 - 305
  • [3] Control Approach Combining Reinforcement Learning and Model-Based Control
    Okawa, Yoshihiro
    Sasaki, Tomotake
    Iwane, Hidenao
    [J]. 2019 12TH ASIAN CONTROL CONFERENCE (ASCC), 2019, : 1419 - 1424
  • [4] Efficient reinforcement learning: Model-based acrobot control
    Boone, G
    [J]. 1997 IEEE INTERNATIONAL CONFERENCE ON ROBOTICS AND AUTOMATION - PROCEEDINGS, VOLS 1-4, 1997, : 229 - 234
  • [5] Multiple model-based reinforcement learning for nonlinear control
    Samejima, K
    Katagiri, K
    Doya, K
    Kawato, M
    [J]. ELECTRONICS AND COMMUNICATIONS IN JAPAN PART III-FUNDAMENTAL ELECTRONIC SCIENCE, 2006, 89 (09): : 54 - 69
  • [6] Offline Model-Based Reinforcement Learning for Tokamak Control
    Char, Ian
    Abbate, Joseph
    Bardoczi, Laszlo
    Boyer, Mark D.
    Chung, Youngseog
    Conlin, Rory
    Erickson, Keith
    Mehta, Viraj
    Richner, Nathan
    Kolemen, Egemen
    Schneider, Jeff
    [J]. LEARNING FOR DYNAMICS AND CONTROL CONFERENCE, VOL 211, 2023, 211
  • [7] Fault Tolerant Control combining Reinforcement Learning and Model-based Control
    Bhan, Luke
    Quinones-Grueiro, Marcos
    Biswas, Gautam
    [J]. 5TH CONFERENCE ON CONTROL AND FAULT-TOLERANT SYSTEMS (SYSTOL 2021), 2021, : 31 - 36
  • [8] Cognitive Control Predicts Use of Model-based Reinforcement Learning
    Otto, A. Ross
    Skatova, Anya
    Madlon-Kay, Seth
    Daw, Nathaniel D.
    [J]. JOURNAL OF COGNITIVE NEUROSCIENCE, 2015, 27 (02) : 319 - 333
  • [9] Model-based hierarchical reinforcement learning and human action control
    Botvinick, Matthew
    Weinstein, Ari
    [J]. PHILOSOPHICAL TRANSACTIONS OF THE ROYAL SOCIETY B-BIOLOGICAL SCIENCES, 2014, 369 (1655)
  • [10] Model-based Reinforcement Learning for Continuous Control with Posterior Sampling
    Fan, Ying
    Ming, Yifei
    [J]. INTERNATIONAL CONFERENCE ON MACHINE LEARNING, VOL 139, 2021, 139