Bayesian Active Learning for Sim-to-Real Robotic Perception

Cited: 6
Authors
Feng, Jianxiang [1 ,2 ]
Lee, Jongseok [1 ]
Durner, Maximilian [1 ,2 ]
Triebel, Rudolph [1 ,2 ]
Affiliations
[1] German Aerosp Ctr DLR, Inst Robot & Mechatron, Wessling, Germany
[2] Tech Univ Munich, D-80333 Munich, Germany
DOI
10.1109/IROS47612.2022.9982175
Chinese Library Classification
TP [automation technology, computer technology];
Subject classification code
0812 ;
Abstract
While learning from synthetic training data has recently gained increasing attention, real-world robotic applications still suffer performance deficiencies due to the so-called Sim-to-Real gap. In practice, this gap is hard to close with synthetic data alone. We therefore focus on the efficient acquisition of real data within a Sim-to-Real learning pipeline. Concretely, we employ deep Bayesian active learning to minimize manual annotation effort and devise an autonomous learning paradigm that selects the data considered most useful for a human expert to annotate. To this end, a Bayesian Neural Network (BNN) object detector providing reliable uncertainty estimates is adapted to infer the informativeness of the unlabeled data. Furthermore, to cope with misalignments of the label distribution in uncertainty-based sampling, we develop an effective randomized sampling strategy that performs favorably compared to more complex alternatives. In experiments on object classification and detection, we show the benefits of our approach and provide evidence that labeling effort can be reduced significantly. Finally, we demonstrate the practical effectiveness of this idea in a grasping task on an assistive robot.
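The acquisition scheme the abstract describes — scoring unlabeled samples by the uncertainty of a BNN (e.g. via multiple stochastic forward passes), then sampling randomly among the most uncertain candidates rather than greedily taking the top-k — can be sketched as below. This is a minimal illustration, not the authors' implementation: the BALD-style mutual-information score, the function names, and the `pool_factor` parameter are assumptions for the sake of the example.

```python
import math
import random

def bald_score(mc_probs):
    """Informativeness of one sample as the mutual information between the
    prediction and the model posterior (BALD), given per-pass class
    probabilities mc_probs: a list of T probability vectors of length C."""
    T = len(mc_probs)
    C = len(mc_probs[0])
    mean_p = [sum(p[c] for p in mc_probs) / T for c in range(C)]
    # entropy of the mean prediction (total predictive uncertainty)
    pred_entropy = -sum(p * math.log(p + 1e-12) for p in mean_p)
    # mean entropy of the individual passes (aleatoric part)
    exp_entropy = -sum(
        sum(p[c] * math.log(p[c] + 1e-12) for c in range(C)) for p in mc_probs
    ) / T
    return pred_entropy - exp_entropy

def randomized_acquisition(scores, k, pool_factor=3, seed=0):
    """Randomized sampling: draw k indices uniformly from the pool_factor*k
    highest-scoring samples, instead of greedily taking the top-k, to avoid
    skewing the label distribution of the acquired batch."""
    ranked = sorted(range(len(scores)), key=lambda i: scores[i], reverse=True)
    pool = ranked[: pool_factor * k]
    return random.Random(seed).sample(pool, k)

# Toy example: agreeing MC passes carry little information,
# confidently disagreeing passes carry a lot.
agree = [[0.9, 0.05, 0.05]] * 5
disagree = [[0.98, 0.01, 0.01], [0.01, 0.98, 0.01], [0.01, 0.01, 0.98]]
print(bald_score(agree), bald_score(disagree))  # near 0 vs. clearly positive

# Select 2 samples at random from the 6 most uncertain of a toy pool of 8.
scores = [0.1, 0.9, 0.3, 0.8, 0.05, 0.7, 0.2, 0.6]
picked = randomized_acquisition(scores, k=2)
```

In an actual pipeline, `mc_probs` would come from repeated stochastic forward passes of the BNN detector over the unlabeled real images, and the `picked` indices would be sent to the human annotator.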
Pages: 10820 - 10827
Page count: 8
Related Papers
50 records
  • [21] Real is Better than Perfect: Sim-to-Real Robotic System in Secondary School Education
    Gao, Jiasi
    Guo, Haole
    Cao, Zhanxiang
    Huang, Pengfei
    Zhou, Guyue
    2023 IEEE/RSJ INTERNATIONAL CONFERENCE ON INTELLIGENT ROBOTS AND SYSTEMS, IROS, 2023, : 4480 - 4487
  • [22] Seg-CURL: Segmented Contrastive Unsupervised Reinforcement Learning for Sim-to-Real in Visual Robotic Manipulation
    Xu, Binzhao
    Hassan, Taimur
    Hussain, Irfan
    IEEE ACCESS, 2023, 11 : 50195 - 50204
  • [23] Dual Action Policy for Robust Sim-to-Real Reinforcement Learning
    Terence, Ng Wen Zheng
    Chen Jianda
    ARTIFICIAL NEURAL NETWORKS AND MACHINE LEARNING-ICANN 2024, PT IV, 2024, 15019 : 369 - 380
  • [24] Sim-to-Real Transfer in Deep Reinforcement Learning for Robotics: a Survey
    Zhao, Wenshuai
    Queralta, Jorge Pena
    Westerlund, Tomi
    2020 IEEE SYMPOSIUM SERIES ON COMPUTATIONAL INTELLIGENCE (SSCI), 2020, : 737 - 744
  • [25] A Sim-to-Real Object Recognition and Localization Framework for Industrial Robotic Bin Picking
    Li, Xianzhi
    Cao, Rui
    Feng, Yidan
    Chen, Kai
    Yang, Biqi
    Fu, Chi-Wing
    Li, Yichuan
    Dou, Qi
    Liu, Yun-Hui
    Heng, Pheng-Ann
    IEEE ROBOTICS AND AUTOMATION LETTERS, 2022, 7 (02) : 3961 - 3968
  • [26] Learning Locomotion Skills for Cassie: Iterative Design and Sim-to-Real
    Xie, Zhaoming
    Clary, Patrick
    Dao, Jeremy
    Morais, Pedro
    Hurst, Jonathan
    van de Panne, Michiel
    CONFERENCE ON ROBOT LEARNING, VOL 100, 2019, 100
  • [27] Sim-to-Real Control of Trifinger Robot by Deep Reinforcement Learning
    Wan, Qiang
    Wu, Tianyang
    Ye, Jiawei
    Wan, Lipeng
    Lan, Xuguang
    INTELLIGENT ROBOTICS AND APPLICATIONS, ICIRA 2024, PT VI, 2025, 15206 : 300 - 314
  • [28] Dynamic Bipedal Turning through Sim-to-Real Reinforcement Learning
    Yu, Fangzhou
    Batke, Ryan
    Dao, Jeremy
    Hurst, Jonathan
    Green, Kevin
    Fern, Alan
    2022 IEEE-RAS 21ST INTERNATIONAL CONFERENCE ON HUMANOID ROBOTS (HUMANOIDS), 2022, : 903 - 910
  • [29] Solving a Simple Geduldspiele Cube with a Robotic Gripper via Sim-to-Real Transfer
    Yoo, Ji-Hyeon
    Jung, Ho-Jin
    Kim, Jang-Hyeon
    Sim, Dae-Han
    Yoon, Han-Ul
    APPLIED SCIENCES-BASEL, 2022, 12 (19):
  • [30] Survey on Sim-to-real Transfer Reinforcement Learning in Robot Systems
    Lin Q.
    Yu C.
    Wu X.-W.
    Dong Y.-Z.
    Xu X.
    Zhang Q.
    Guo X.
    Ruan Jian Xue Bao/Journal of Software, 2024, 35 (02): : 711 - 738