kPAM 2.0: Feedback Control for Category-Level Robotic Manipulation

Cited by: 33
Authors
Gao, Wei [1 ]
Tedrake, Russ [1 ]
Affiliations
[1] MIT, CSAIL, 77 Massachusetts Ave, Cambridge, MA 02139 USA
Funding
National Science Foundation (NSF);
Keywords
Robots; Task analysis; Robot kinematics; Shape; Three-dimensional displays; Service robots; Grasping; Dexterous manipulation; generalizable robotic manipulation; perception for grasping and manipulation;
DOI
10.1109/LRA.2021.3062315
Chinese Library Classification (CLC)
TP24 [Robotics];
Discipline Classification Code
080202; 1405;
Abstract
In this letter, we explore generalizable, perception-to-action robotic manipulation for precise, contact-rich tasks. In particular, we contribute a framework for closed-loop robotic manipulation that automatically handles a category of objects, despite potentially unseen object instances and significant intra-category variations in shape, size and appearance. Previous approaches typically build a feedback loop on top of a real-time 6-DOF pose estimator. However, representing an object with a parameterized transformation from a fixed geometric template does not capture large intra-category shape variation. Hence we adopt the keypoint-based object representation proposed in [13] for category-level pick-and-place, and extend it to closed-loop manipulation policies for contact-rich tasks. We first augment keypoints with local orientation information. Using the oriented keypoints, we propose a novel object-centric action representation in terms of regulating the linear/angular velocity or force/torque of these oriented keypoints. This formulation is surprisingly versatile: we demonstrate that it can accomplish contact-rich manipulation tasks that require precision and dexterity for a category of objects with different shapes, sizes and appearances, such as peg-hole insertion for pegs and holes with significant shape variation and tight clearance. With the proposed object and action representation, our framework is also agnostic to the robot grasp pose and initial object configuration, making it flexible for integration and deployment. A video demonstration, source code and supplemental materials are available at https://sites.google.com/view/kpam2/home.
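The abstract's central technical idea is an object-centric action representation over oriented keypoints: each keypoint carries a position plus a local orientation, and the policy regulates the keypoints' linear/angular velocity (or force/torque) toward targets. The snippet below is a minimal illustrative sketch only, not the authors' kPAM 2.0 implementation; the OrientedKeypoint structure, the keypoint_velocity_command function, and the proportional gains are hypothetical names introduced here, and the mapping from the commanded keypoint twist to robot joint commands is omitted.

```python
"""Illustrative sketch (assumptions, not the kPAM 2.0 code): an oriented
keypoint is a position plus a local rotation, and a simple proportional law
maps keypoint pose error to a desired linear/angular velocity."""
from dataclasses import dataclass

import numpy as np


@dataclass
class OrientedKeypoint:
    position: np.ndarray   # (3,) keypoint position in the world frame
    rotation: np.ndarray   # (3, 3) local orientation as a rotation matrix


def keypoint_velocity_command(current: OrientedKeypoint,
                              target: OrientedKeypoint,
                              kp_lin: float = 1.0,
                              kp_ang: float = 1.0):
    """Proportional regulation of one oriented keypoint.

    Returns a desired (linear_velocity, angular_velocity) pair that drives the
    keypoint toward the target; a downstream controller would convert this
    into robot commands (omitted in this sketch).
    """
    # Linear velocity: proportional to the position error.
    v = kp_lin * (target.position - current.position)

    # Angular velocity: axis-angle (log map) of the relative rotation,
    # expressed in the world frame. Near-identity is handled explicitly;
    # rotations near 180 degrees would need a more careful extraction.
    r_err = target.rotation @ current.rotation.T
    cos_angle = np.clip((np.trace(r_err) - 1.0) / 2.0, -1.0, 1.0)
    angle = np.arccos(cos_angle)
    if angle < 1e-8:
        w = np.zeros(3)
    else:
        axis = np.array([r_err[2, 1] - r_err[1, 2],
                         r_err[0, 2] - r_err[2, 0],
                         r_err[1, 0] - r_err[0, 1]]) / (2.0 * np.sin(angle))
        w = kp_ang * angle * axis
    return v, w


if __name__ == "__main__":
    # Example: drive a keypoint 10 cm forward and 5 cm up, no rotation change.
    current = OrientedKeypoint(np.zeros(3), np.eye(3))
    target = OrientedKeypoint(np.array([0.10, 0.0, 0.05]), np.eye(3))
    v, w = keypoint_velocity_command(current, target)
    print("linear velocity:", v, "angular velocity:", w)
```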
Pages: 2962-2969
Number of pages: 8
Related Papers (50 in total)
  • [1] KPAM: KeyPoint Affordances for Category-Level Robotic Manipulation
    Manuelli, Lucas
    Gao, Wei
    Florence, Peter
    Tedrake, Russ
    ROBOTICS RESEARCH: THE 19TH INTERNATIONAL SYMPOSIUM ISRR, 2022, 20 : 132 - 157
  • [2] SKP: Semantic 3D Keypoint Detection for Category-Level Robotic Manipulation
    Luo, Zhongzhen
    Xue, Wenjie
    Chae, Julia
    Fu, Guoyi
    IEEE ROBOTICS AND AUTOMATION LETTERS, 2022, 7 (02): : 5437 - 5444
  • [3] CAMS: CAnonicalized Manipulation Spaces for Category-Level Functional Hand-Object Manipulation Synthesis
    Zheng, Juntian
    Zheng, Qingyuan
    Fang, Lixing
    Liu, Yun
    Yi, Li
    2023 IEEE/CVF CONFERENCE ON COMPUTER VISION AND PATTERN RECOGNITION, CVPR, 2023, : 585 - 594
  • [4] You Only Demonstrate Once: Category-Level Manipulation from Single Visual Demonstration
    Wen, Bowen
    Lian, Wenzhao
    Bekris, Kostas
    Schaal, Stefan
    ROBOTICS: SCIENCE AND SYSTEMS XVIII, 2022
  • [5] UniGarmentManip: A Unified Framework for Category-Level Garment Manipulation via Dense Visual Correspondence
    Wu, Ruihai
    Lu, Haoran
    Wang, Yiyan
    Wang, Yubo
    Dong, Hao
    2024 IEEE/CVF CONFERENCE ON COMPUTER VISION AND PATTERN RECOGNITION (CVPR), 2024, : 16340 - 16350
  • [6] Learning Category-Level Manipulation Tasks from Point Clouds with Dynamic Graph CNNs
    Liang, Junchi
    Boularias, Abdeslam
    2023 IEEE INTERNATIONAL CONFERENCE ON ROBOTICS AND AUTOMATION, ICRA, 2023, : 1807 - 1813
  • [7] Category-level contributions to the alphanumeric category effect in visual search
    J. Paul Hamilton
    Michelle Mirkin
    Thad A. Polk
    Psychonomic Bulletin & Review, 2006, 13 : 1074 - 1077
  • [8] Category-level contributions to the alphanumeric category effect in visual search
    Hamilton, J. Paul
    Mirkin, Michelle
    Polk, Thad A.
    PSYCHONOMIC BULLETIN & REVIEW, 2006, 13 (06) : 1074 - 1077
  • [9] Category-Level Articulated Object Pose Estimation
    Li, Xiaolong
    Wang, He
    Yi, Li
    Guibas, Leonidas
    Abbott, A. Lynn
    Song, Shuran
    2020 IEEE/CVF CONFERENCE ON COMPUTER VISION AND PATTERN RECOGNITION (CVPR), 2020, : 3703 - 3712
  • [10] GarmentTracking: Category-Level Garment Pose Tracking
    Xue, Han
    Xu, Wenqiang
    Zhang, Jieyi
    Tang, Tutian
    Li, Yutong
    Du, Wenxin
    Ye, Ruolin
    Lu, Cewu
    2023 IEEE/CVF CONFERENCE ON COMPUTER VISION AND PATTERN RECOGNITION (CVPR), 2023, : 21233 - 21242