Self-supervised Attention Learning for Robot Control

Cited by: 0
Authors
Cong, Lin [1 ]
Shi, Yunlei [1 ]
Zhang, Jianwei [1 ]
Affiliations
[1] Univ Hamburg, Dept Informat, TAMS Grp, Hamburg, Germany
Funding
National Science Foundation (USA);
Keywords
DOI
10.1109/ROBIO54168.2021.9739240
CLC Number
TP [Automation Technology, Computer Technology];
Subject Classification Code
0812;
Abstract
Inspired by how humans solve a task, attending only to the visually useful parts of a scene while ignoring irrelevant information, we propose an attention mechanism that learns to focus on the moving parts of an image (such as the robot arm and the manipulated object) while neglecting the noisy background. The model takes RGB images as input and first uses a CNN to extract intermediate spatial features; then, by transporting the learned features between two different images and minimizing a reconstruction loss, the attention module is forced to learn where to attend. The output is a soft attention map that highlights the useful parts of the image while suppressing background clutter, and it can serve as combination weights for downstream control. The method is trained in a fully self-supervised manner: no manually labeled data is used during training, which makes it easy to apply in robot tasks.
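The abstract's core mechanism, a CNN encoder, a soft attention map, feature transport between two frames, and a reconstruction loss, can be illustrated with a minimal PyTorch sketch. All module names, shapes, and hyperparameters below are illustrative assumptions, not the authors' implementation.

```python
# Minimal sketch of self-supervised attention learning via feature transport.
# Assumed architecture and sizes; not the paper's actual code.
import torch
import torch.nn as nn


class AttentionTransport(nn.Module):
    def __init__(self, channels=32):
        super().__init__()
        # Shared CNN encoder that extracts intermediate spatial features.
        self.encoder = nn.Sequential(
            nn.Conv2d(3, channels, 3, padding=1), nn.ReLU(),
            nn.Conv2d(channels, channels, 3, padding=1), nn.ReLU(),
        )
        # Attention head producing a single-channel soft attention map in [0, 1].
        self.attention = nn.Sequential(
            nn.Conv2d(channels, 1, 1), nn.Sigmoid(),
        )
        # Decoder that reconstructs an RGB image from transported features.
        self.decoder = nn.Sequential(
            nn.Conv2d(channels, channels, 3, padding=1), nn.ReLU(),
            nn.Conv2d(channels, 3, 3, padding=1),
        )

    def forward(self, source, target):
        feat_s, feat_t = self.encoder(source), self.encoder(target)
        att_s, att_t = self.attention(feat_s), self.attention(feat_t)
        # Transport: keep source features where neither frame attends, and
        # paste in the target's features at the target's attended regions.
        transported = (1 - att_s) * (1 - att_t) * feat_s + att_t * feat_t
        return self.decoder(transported), att_t


# Training-step sketch: reconstructing the target frame from transported
# features pushes the attention maps to cover the parts that actually moved
# (arm and manipulated object), while the static background is ignored.
model = AttentionTransport()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
source = torch.rand(4, 3, 64, 64)   # frame t   (placeholder data)
target = torch.rand(4, 3, 64, 64)   # frame t+k (placeholder data)

optimizer.zero_grad()
reconstruction, attention_map = model(source, target)
loss = nn.functional.mse_loss(reconstruction, target)
loss.backward()
optimizer.step()
```

The resulting `attention_map` could then be used, as the abstract suggests, to weight visual features fed to a downstream controller.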
Pages: 1153-1158
Number of pages: 6