Deep Visual Constraints: Neural Implicit Models for Manipulation Planning From Visual Input

Cited by: 10
Authors
Ha, Jung-Su [1]
Driess, Danny [1,2]
Toussaint, Marc [1]
Affiliations
[1] TU Berlin, Learning & Intelligent Systems Lab, Berlin, Germany
[2] TU Berlin, Science of Intelligence Excellence Cluster, Berlin, Germany
Keywords
Integrated planning and learning; manipulation planning; representation learning
DOI
10.1109/LRA.2022.3194955
CLC Number (Chinese Library Classification)
TP24 [Robotics]
Subject Classification Codes
080202; 1405
Abstract
Manipulation planning is the problem of finding a sequence of robot configurations that involves interactions with objects in the scene, e.g., grasping and placing an object, or more general tool use. To achieve such interactions, traditional approaches require hand-engineering of object representations and interaction constraints, which quickly becomes tedious when complex objects and interactions are considered. Inspired by recent advances in 3D modeling, e.g., NeRF, we propose a method to represent objects as continuous functions upon which constraint features are defined and jointly trained. In particular, the proposed pixel-aligned representation is directly inferred from images with known camera geometry and naturally acts as a perception component in the whole manipulation pipeline, thereby enabling long-horizon planning from visual input alone.
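To make the abstract's core idea concrete, the following is a minimal numpy sketch (not the authors' implementation) of a pixel-aligned implicit query: a 3D point is projected into the image with known camera intrinsics, per-pixel features are bilinearly sampled, and a small network maps the sampled feature plus the point's depth to a scalar field value (e.g., a signed distance). All names here (`feature_map`, `query_sdf`, the toy MLP weights) are hypothetical illustrations.

```python
import numpy as np

def project(point_cam, K):
    """Project a 3D point in camera coordinates to pixel coordinates (u, v)."""
    u = K @ point_cam
    return u[:2] / u[2]

def bilinear_sample(feature_map, uv):
    """Bilinearly interpolate an (H, W, C) feature map at pixel uv = (u, v)."""
    H, W, _ = feature_map.shape
    u = np.clip(uv[0], 0.0, W - 1 - 1e-6)
    v = np.clip(uv[1], 0.0, H - 1 - 1e-6)
    u0, v0 = int(u), int(v)
    du, dv = u - u0, v - v0
    return (feature_map[v0, u0] * (1 - du) * (1 - dv)
            + feature_map[v0, u0 + 1] * du * (1 - dv)
            + feature_map[v0 + 1, u0] * (1 - du) * dv
            + feature_map[v0 + 1, u0 + 1] * du * dv)

def query_sdf(point_cam, feature_map, K, W1, b1, w2, b2):
    """Toy MLP head: pixel-aligned feature + depth -> scalar field value."""
    feat = bilinear_sample(feature_map, project(point_cam, K))
    x = np.concatenate([feat, [point_cam[2]]])  # append depth of the query point
    h = np.maximum(W1 @ x + b1, 0.0)            # ReLU hidden layer
    return float(w2 @ h + b2)

# Usage with random stand-ins for a learned feature map and MLP weights.
rng = np.random.default_rng(0)
feature_map = rng.standard_normal((64, 64, 8))
K = np.array([[50.0, 0.0, 32.0],
              [0.0, 50.0, 32.0],
              [0.0, 0.0, 1.0]])
W1, b1 = rng.standard_normal((16, 9)), np.zeros(16)
w2, b2 = rng.standard_normal(16), 0.0
val = query_sdf(np.array([0.1, -0.05, 1.2]), feature_map, K, W1, b1, w2, b2)
```

In the paper's pipeline, the feature map would come from an image encoder and the MLP head would be trained jointly with constraint features; here both are random placeholders to keep the sketch self-contained.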
Pages: 10857-10864
Page count: 8
Related Papers
50 records in total; 10 listed below
  • [1] Handling implicit and explicit constraints in manipulation planning
    Mirabel, Joseph
    Lamiraux, Florent
    ROBOTICS: SCIENCE AND SYSTEMS XIV, 2018,
  • [2] Neural timing of visual implicit categorization
    Pernet, C
    Basan, S
    Doyon, B
    Cardebat, D
    Démonet, JF
    Celsis, P
    COGNITIVE BRAIN RESEARCH, 2003, 17 (02): 327 - 338
  • [3] Practical motion planning with visual constraints
    Sutanto, H
    Sharma, R
    1997 IEEE INTERNATIONAL SYMPOSIUM ON ASSEMBLY AND TASK PLANNING (ISATP'97) - TOWARDS FLEXIBLE AND AGILE ASSEMBLY AND MANUFACTURING, 1997, : 237 - 242
  • [4] Visual Repetition Sampling for Robot Manipulation Planning
    Puang, En Yen
    Lehner, Peter
    Marton, Zoltan-Csaba
    Durner, Maximilian
    Triebel, Rudolph
    Albu-Schaeffer, Alin
    2019 INTERNATIONAL CONFERENCE ON ROBOTICS AND AUTOMATION (ICRA), 2019, : 9236 - 9242
  • [5] Deep Visual Heuristics: Learning Feasibility of Mixed-Integer Programs for Manipulation Planning
    Driess, Danny
    Oguz, Ozgur
    Ha, Jung-Su
    Toussaint, Marc
    2020 IEEE INTERNATIONAL CONFERENCE ON ROBOTICS AND AUTOMATION (ICRA), 2020, : 9563 - 9569
  • [6] Visual Working Memory Enhances the Neural Response to Matching Visual Input
    Gayet, Surya
    Guggenmos, Matthias
    Christophel, Thomas B.
    Haynes, John-Dylan
    Paffen, Chris L. E.
    Van der Stigchel, Stefan
    Sterzer, Philipp
    JOURNAL OF NEUROSCIENCE, 2017, 37 (28): 6638 - 6647
  • [7] Building qualitative event models automatically from visual input
    Fernyhough, J
    Cohn, AG
    Hogg, DC
    SIXTH INTERNATIONAL CONFERENCE ON COMPUTER VISION, 1998, : 350 - 355
  • [8] Robotic manipulation based on 3-D visual servoing and deep neural networks
    Al-Shanoon, Abdulrahman
    Lang, Haoxiang
    ROBOTICS AND AUTONOMOUS SYSTEMS, 2022, 152
  • [9] Are Deep Neural Networks Adequate Behavioral Models of Human Visual Perception?
    Wichmann, Felix A.
    Geirhos, Robert
    ANNUAL REVIEW OF VISION SCIENCE, 2023, 9 : 501 - 524
  • [10] Trajectory Planning for Visual Servoing with Some Constraints
    Li Haifeng
    Liu Jingtai
    Li Yan
    Lu Xiang
    Yu Kaiyan
    Sun Lei
    PROCEEDINGS OF THE 29TH CHINESE CONTROL CONFERENCE, 2010, : 3636 - 3642