Skill Learning in Robot-Assisted Micro-Manipulation Through Human Demonstrations with Attention Guidance

Cited by: 1
Authors:
An, Yujian [1 ]
Yang, Jianxin [1 ]
Li, Jinkai [1 ]
He, Bingze [1 ]
Guo, Yao [1 ]
Yang, Guang-Zhong [1 ]
Affiliations:
[1] Shanghai Jiao Tong Univ, Inst Med Robot, Shanghai, Peoples R China
Keywords:
DOI: 10.1109/ICRA57147.2024.10610945
Chinese Library Classification (CLC): TP [Automation Technology, Computer Technology]
Subject Classification Code: 0812
Abstract
For the development of robotic systems for micro-manipulation, it is challenging to design appropriate control strategies due to either the lack of sufficient feedback information or the difficulty of extracting subtle yet critical visual features. With the same system in teleoperated mode, however, human operators can complete the task more successfully by relying on an inherent motion and control strategy. Extracting the implicit human attention during the task and integrating it with robot control could therefore provide crucial guidance for the design of feature extraction and motion control algorithms. In this paper, a micro-assembly task involving miniature thin-membrane sensors is considered. For human demonstrations, we collected data from repeated trials performed by ten operators following three motion strategies. Human attention during the task is characterized from the eye-gaze coordinates, and a neural network with gaze-guided attention is then trained to segment the visual Region of Interest (ROI). After quantitative evaluation of the operator results in terms of success rate, efficiency, reset time, and the Index of Pupillary Activity (IPA), an optimized motion strategy based on the "palpation" framework is derived. Applying this strategy to the automated task yields better results than the human operators, with an average task completion time of 34.8 ± 5.9 s and a success rate of over 90%.
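The gaze-guided attention mentioned in the abstract is only summarized at a high level; the sketch below illustrates one plausible realization in PyTorch, in which a Gaussian attention map centred on the recorded (normalised) gaze coordinate reweights intermediate CNN features before an ROI segmentation head. The class GazeGuidedSegNet, the helper gaze_attention_map, and all hyperparameters are illustrative assumptions, not the architecture reported in the paper.

```python
# Minimal sketch (assumption, not the authors' released code) of gaze-guided
# attention for ROI segmentation: a Gaussian map centred on the eye-gaze
# coordinate modulates intermediate CNN features before a segmentation head.
import torch
import torch.nn as nn


def gaze_attention_map(gaze_xy, height, width, sigma=0.1):
    """Build a [B, 1, H, W] Gaussian attention map from normalised gaze (x, y) in [0, 1]."""
    ys = torch.linspace(0.0, 1.0, height).view(1, height, 1)
    xs = torch.linspace(0.0, 1.0, width).view(1, 1, width)
    gx = gaze_xy[:, 0].view(-1, 1, 1)
    gy = gaze_xy[:, 1].view(-1, 1, 1)
    dist2 = (xs - gx) ** 2 + (ys - gy) ** 2
    return torch.exp(-dist2 / (2.0 * sigma ** 2)).unsqueeze(1)


class GazeGuidedSegNet(nn.Module):
    """Toy encoder-decoder whose bottleneck features are reweighted by the gaze map."""

    def __init__(self, in_ch=3, feat_ch=32):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(in_ch, feat_ch, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(feat_ch, feat_ch, 3, stride=2, padding=1), nn.ReLU(),
        )
        self.decoder = nn.Sequential(
            nn.ConvTranspose2d(feat_ch, feat_ch, 4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose2d(feat_ch, 1, 4, stride=2, padding=1),  # 1-channel ROI logits
        )

    def forward(self, image, gaze_xy):
        feats = self.encoder(image)                                   # [B, C, H/4, W/4]
        attn = gaze_attention_map(gaze_xy, feats.shape[2], feats.shape[3]).to(feats.device)
        feats = feats * (1.0 + attn)                                  # soft gaze guidance
        return self.decoder(feats)                                    # ROI logits at input size


if __name__ == "__main__":
    net = GazeGuidedSegNet()
    img = torch.randn(2, 3, 128, 128)                # dummy microscope frames
    gaze = torch.tensor([[0.4, 0.6], [0.7, 0.3]])    # normalised gaze coordinates
    print(net(img, gaze).shape)                      # torch.Size([2, 1, 128, 128])
```

The additive reweighting (1 + attention) is one simple design choice: it boosts features near the fixation point without suppressing the rest of the frame, so the network can still learn to segment ROI pixels that fall outside the gaze neighbourhood.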
Pages: 15601 - 15607
Page count: 7