Control of Magnetic Surgical Robots With Model-Based Simulators and Reinforcement Learning

Citations: 0
Authors
Barnoy, Yotam [1 ]
Erin, Onder [2 ]
Raval, Suraj [3 ]
Pryor, Will [2 ]
Mair, Lamar O. [4 ]
Weinberg, Irving N. [4 ]
Diaz-Mercado, Yancy [3 ]
Krieger, Axel [2 ]
Hager, Gregory D. [1 ]
Affiliations
[1] Johns Hopkins Univ, Dept Comp Sci, Baltimore, MD 21287 USA
[2] Johns Hopkins Univ, Dept Mech Engn, Baltimore, MD 21287 USA
[3] Univ Maryland, Dept Mech Engn, College Park, MD 20742 USA
[4] Weinberg Med Phys, Div Magnet Manipulat & Particle Res, North Bethesda, MD 20852 USA
Funding
US National Institutes of Health
Keywords
Magnetic robots; surgical robotics; autonomous control; reinforcement learning;
DOI
10.1109/TMRB.2022.3214426
Chinese Library Classification
R318 (Biomedical Engineering)
Discipline Code
0831
Abstract
Magnetically manipulated medical robots are a promising alternative to current robotic platforms, allowing for miniaturization and tetherless actuation. Controlling such systems autonomously may enable safe, accurate operation. However, classical control methods require rigorous models of magnetic fields, robot dynamics, and robot environments, which can be difficult to generate. Model-free reinforcement learning (RL) offers an alternative that can bypass these requirements. We apply RL to a robotic magnetic needle manipulation system. Reinforcement learning algorithms often require long runtimes, making them impractical for many surgical robotics applications, most of which require careful, constant monitoring. Our approach first constructs a model-based simulation (MBS) from guided real-world exploration, learning the dynamics of the environment. After intensive training in the MBS environment, we transfer the learned behavior from the MBS environment to the real world. Our MBS method applies RL roughly 200 times faster than doing so in the real world, and achieves a 6 mm root-mean-square (RMS) error for a square reference trajectory. In comparison, pure simulation-based approaches fail to transfer, producing a 31 mm RMS error. These results demonstrate that MBS environments are a good solution for domains where running model-free RL is impractical, especially when an accurate simulation is not available.
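The staged recipe the abstract describes (collect transitions through guided real-world exploration, fit a dynamics model, then do cheap policy search inside the learned simulator) can be sketched on a hypothetical toy system. Everything below is illustrative: the linear point-mass dynamics, the proportional-gain "policy", and all variable names stand in for the paper's actual magnetic needle system and RL algorithm, which are not specified here.

```python
import numpy as np

# Hypothetical stand-in for the real robot: a 1-DOF point with
# position/velocity state s and scalar action a, true dynamics unknown
# to the learner: s' = A_true @ s + B_true @ a + noise.
rng = np.random.default_rng(0)
A_true = np.array([[1.0, 0.05], [0.0, 0.9]])
B_true = np.array([[0.0], [0.1]])

def real_step(s, a):
    """One 'real-world' transition (expensive in practice)."""
    return A_true @ s + B_true @ a + rng.normal(0.0, 1e-3, size=2)

# 1) Guided real-world exploration: collect a modest transition dataset.
S, acts, S_next = [], [], []
s = np.zeros(2)
for _ in range(500):
    a = rng.uniform(-1.0, 1.0, size=1)
    s_next = real_step(s, a)
    S.append(s); acts.append(a); S_next.append(s_next)
    s = s_next

# 2) Fit the dynamics model by least squares: s' ~= [A B] [s; a].
X = np.hstack([np.array(S), np.array(acts)])   # shape (N, 3)
Y = np.array(S_next)                           # shape (N, 2)
Theta, *_ = np.linalg.lstsq(X, Y, rcond=None)  # shape (3, 2)
A_hat, B_hat = Theta[:2].T, Theta[2:].T

# 3) Intensive "training" purely inside the learned simulator: here a
#    trivial search over proportional gains tracking a setpoint, as a
#    stand-in for running an RL algorithm in the MBS environment.
target = np.array([1.0, 0.0])

def sim_cost(k):
    s = np.zeros(2)
    cost = 0.0
    for _ in range(100):
        a = np.array([k * (target[0] - s[0])])
        s = A_hat @ s + B_hat @ a       # simulated step, no real robot
        cost += np.sum((s - target) ** 2)
    return cost

best_k = min(np.linspace(0.1, 5.0, 50), key=sim_cost)
# 4) Transfer: deploy the controller found in simulation on real_step.
```

The speedup claimed in the abstract comes from step 3: each simulated rollout costs microseconds, while a real rollout costs wall-clock robot time, so the expensive physical system is only touched during the short exploration phase and the final transfer.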
Pages: 945-956
Page count: 12
Related Papers (50 total)
  • [21] Schenk, Christina; Vasudevan, Aditya; Haranczyk, Maciej; Romero, Ignacio. "Model-based reinforcement learning control of reaction-diffusion problems." Optimal Control Applications & Methods, 2024.
  • [22] Nousiainen, Jalo; Engler, Byron; Kasper, Markus; Rajani, Chang; Helin, Tapio; Heritier, Cédric T.; Quanz, Sascha P.; Glauser, Adrian M. "Laboratory experiments of model-based reinforcement learning for adaptive optics control." Journal of Astronomical Telescopes, Instruments, and Systems, 2024, 10(1).
  • [23] Khalid, Irtaza; Weidner, Carrie A.; Jonckheere, Edmond A.; Schirmer, Sophie G.; Langbein, Frank C. "Sample-efficient model-based reinforcement learning for quantum control." Physical Review Research, 2023, 5(4).
  • [24] Greene, Max L.; Abudia, Moad; Kamalapurkar, Rushikesh; Dixon, Warren E. "Model-Based Reinforcement Learning for Optimal Feedback Control of Switched Systems." 2020 59th IEEE Conference on Decision and Control (CDC), 2020: 162-167.
  • [25] Devailly, Francois-Xavier; Larocque, Denis; Charlin, Laurent. "Model-Based Graph Reinforcement Learning for Inductive Traffic Signal Control." IEEE Open Journal of Intelligent Transportation Systems, 2024, 5: 238-250.
  • [26] Benyahia, B.; Anandan, P. D.; Rielly, C. "Robust Model-Based Reinforcement Learning Control of a Batch Crystallization Process." 2021 9th International Conference on Systems and Control (ICSC'21), 2021: 89-94.
  • [27] Alcaraz, Juan J.; Losilla, Fernando; Gonzalez-Castano, Francisco-Javier. "Transmission Control in NB-IoT With Model-Based Reinforcement Learning." IEEE Access, 2023, 11: 57991-58005.
  • [28] Yao, Zhikai; Liang, Xianglong; Jiang, Guo-Ping; Yao, Jianyong. "Model-Based Reinforcement Learning Control of Electrohydraulic Position Servo Systems." IEEE/ASME Transactions on Mechatronics, 2023, 28(3): 1446-1455.
  • [29] Nikishin, Evgenii; Abachi, Romina; Agarwal, Rishabh; Bacon, Pierre-Luc. "Control-Oriented Model-Based Reinforcement Learning with Implicit Differentiation." Thirty-Sixth AAAI Conference on Artificial Intelligence (AAAI-22), 2022: 7886-7894.
  • [30] Kamalapurkar, Rushikesh; Rosenfeld, Joel A.; Dixon, Warren E. "Efficient model-based reinforcement learning for approximate online optimal control." Automatica, 2016, 74: 247-258.