Affordance-based robot object retrieval

Cited by: 4
Authors
Thao Nguyen [1 ]
Gopalan, Nakul [1 ,4 ]
Patel, Roma [1 ]
Corsaro, Matt [2 ]
Pavlick, Ellie [3 ]
Tellex, Stefanie [3 ]
Affiliations
[1] Brown Univ, Providence, RI 02912 USA
[2] Brown Univ, Comp Sci, George Konidaris' Intelligent Robot Lab, Providence, RI 02912 USA
[3] Brown Univ, Comp Sci, Providence, RI 02912 USA
[4] Georgia Inst Technol, Atlanta, GA 30332 USA
Funding
US National Science Foundation;
Keywords
Robots;
DOI
10.1007/s10514-021-10008-7
Chinese Library Classification
TP18 [Theory of Artificial Intelligence];
Discipline Codes
081104; 0812; 0835; 1405;
Abstract
Natural language object retrieval is a highly useful yet challenging task for robots in human-centric environments. Previous work has primarily focused on commands specifying the desired object's type such as "scissors" and/or visual attributes such as "red," thus limiting the robot to only known object classes. We develop a model to retrieve objects based on descriptions of their usage. The model takes in a language command containing a verb, for example "Hand me something to cut," and RGB images of candidate objects, and outputs the object that best satisfies the task specified by the verb. Our model directly predicts an object's appearance from the object's use specified by a verb phrase, without needing an object's class label. Based on contextual information present in the language commands, our model can generalize to unseen object classes and unknown nouns in the commands. Our model correctly selects objects out of sets of five candidates to fulfill natural language commands, and achieves a mean reciprocal rank of 77.4% on a held-out test set of unseen ImageNet object classes and 69.1% on unseen object classes and unknown nouns. Our model also achieves a mean reciprocal rank of 71.8% on unseen YCB object classes, which have a different image distribution from ImageNet. We demonstrate our model on a KUKA LBR iiwa robot arm, enabling the robot to retrieve objects based on natural language descriptions of their usage (Video recordings of the robot demonstrations can be found at ). We also present a new dataset of 655 verb-object pairs denoting object usage over 50 verbs and 216 object classes (The dataset and code for the project can be found at https://github.com/Thaonguyen3095/affordance-language).
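The mean reciprocal rank (MRR) figures quoted above are a standard retrieval metric: for each command, the candidate objects are ranked by the model, and the score is the average of 1/rank of the correct object. The sketch below is not from the paper's released code; it is a generic, minimal illustration of how MRR is computed, with made-up candidate rankings for illustration.

```python
def mean_reciprocal_rank(ranked_lists, targets):
    """Average of 1/rank of the correct item across queries (rank is 1-based)."""
    total = 0.0
    for ranked, target in zip(ranked_lists, targets):
        rank = ranked.index(target) + 1  # position of the correct object
        total += 1.0 / rank
    return total / len(targets)

# Hypothetical example: three "cut" commands, five candidate objects each,
# with "knife" as the correct retrieval in every case.
ranked = [
    ["knife", "fork", "cup", "pen", "ball"],   # correct at rank 1
    ["cup", "knife", "pen", "ball", "fork"],   # correct at rank 2
    ["pen", "ball", "cup", "fork", "knife"],   # correct at rank 5
]
targets = ["knife", "knife", "knife"]
print(mean_reciprocal_rank(ranked, targets))  # (1 + 1/2 + 1/5) / 3 ≈ 0.567
```

An MRR of 77.4% thus means the correct object sits, on average, between rank 1 and rank 2 of the five candidates.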
Pages: 83-98
Number of pages: 16
Related Papers
50 records total
  • [1] Affordance-based robot object retrieval
    Thao Nguyen
    Nakul Gopalan
    Roma Patel
    Matt Corsaro
    Ellie Pavlick
    Stefanie Tellex
    Autonomous Robots, 2022, 46 : 83 - 98
  • [2] Affordance-based human-robot interaction
    Moratz, Reinhard
    Tenbrink, Thora
    TOWARDS AFFORDANCE-BASED ROBOT CONTROL, 2008, 4760 : 63 - +
  • [3] Speakers prioritise affordance-based object semantics in scene descriptions
    Barker, M.
    Rehrig, G.
    Ferreira, F.
    LANGUAGE COGNITION AND NEUROSCIENCE, 2023, 38 (08) : 1045 - 1067
  • [4] Affordance-Based Mobile Robot Navigation Among Movable Obstacles
    Wang, Maozhen
    Luo, Rui
    Onol, Aykut Ozgun
    Padir, Taskin
    2020 IEEE/RSJ INTERNATIONAL CONFERENCE ON INTELLIGENT ROBOTS AND SYSTEMS (IROS), 2020, : 2734 - 2740
  • [5] Affordance-Based Human-Robot Interaction With Reinforcement Learning
    Munguia-Galeano, Francisco
    Veeramani, Satheeshkumar
    Hernandez, Juan David
    Wen, Qingmeng
    Ji, Ze
    IEEE ACCESS, 2023, 11 : 31282 - 31292
  • [6] Affordance-based 3D Feature for Generic Object Recognition
    Iizuka, M.
    Akizuki, S.
    Hashimoto, M.
    THIRTEENTH INTERNATIONAL CONFERENCE ON QUALITY CONTROL BY ARTIFICIAL VISION 2017, 2017, 10338
  • [7] Affordance-based indirect task communication for astronaut-robot cooperation
    Heikkila, Seppo S.
    Halme, Aarne
    Schiele, Andre
    JOURNAL OF FIELD ROBOTICS, 2012, 29 (04) : 576 - 600
  • [8] Affordance-based altruistic robotic architecture for human-robot collaboration
    Imre, Mert
    Oztop, Erhan
    Nagai, Yukie
    Ugur, Emre
    ADAPTIVE BEHAVIOR, 2019, 27 (04) : 223 - 241
  • [9] Affordance-based modeling of a human-robot cooperative system for area exploration
    Jeongsik Kim
    Jungmok Ma
    Namhun Kim
    Journal of Mechanical Science and Technology, 2020, 34 : 877 - 887
  • [10] Real-time Multisensory Affordance-based Control for Adaptive Object Manipulation
    Chu, Vivian
    Gutierrez, Reymundo A.
    Chernova, Sonia
    Thomaz, Andrea L.
    2019 INTERNATIONAL CONFERENCE ON ROBOTICS AND AUTOMATION (ICRA), 2019, : 7776 - 7783