Leverage Interactive Affinity for Affordance Learning

Cited by: 8
Authors
Luo, Hongchen [1 ]
Zhai, Wei [1 ]
Zhang, Jing [2 ]
Cao, Yang [1 ,4 ]
Tao, Dacheng [2 ,3 ]
Affiliations
[1] Univ Sci & Technol China, Hefei, Peoples R China
[2] Univ Sydney, Camperdown, Australia
[3] JD Explore Acad, Beijing, Peoples R China
[4] Hefei Comprehens Natl Sci Ctr, Inst Artificial Intelligence, Hefei, Peoples R China
Funding
National Key R&D Program of China;
Keywords
DOI
10.1109/CVPR52729.2023.00658
CLC number
TP18 [Artificial Intelligence Theory];
Subject classification codes
081104; 0812; 0835; 1405;
Abstract
Perceiving potential "action possibility" (i.e., affordance) regions of images and learning the interactive functionalities of objects from human demonstrations are challenging tasks due to the diversity of human-object interactions. Prevailing affordance learning algorithms often adopt the label assignment paradigm and presume a unique relationship between a functional region and its affordance label, yielding poor performance when adapting to unseen environments with large appearance variations. In this paper, we propose to leverage interactive affinity for affordance learning, i.e., extracting interactive affinity from human-object interactions and transferring it to non-interactive objects. Interactive affinity, which represents the contacts between different parts of the human body and local regions of the target object, provides inherent cues about the interconnectivity between humans and objects, thereby reducing the ambiguity of the perceived action possibilities. Specifically, we propose a pose-aided interactive affinity learning framework that exploits human pose to guide the network in learning interactive affinity from human-object interactions. In particular, a keypoint heuristic perception (KHP) scheme is devised to exploit the keypoint associations of human pose to alleviate the uncertainties caused by interaction diversity and contact occlusions. In addition, a contact-driven affordance learning (CAL) dataset is constructed by collecting and labeling over 5,000 images. Experimental results demonstrate that our method outperforms representative models in terms of both objective metrics and visual quality. Code and dataset: github.com/lhc1224/PIAL-Net.
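The abstract describes the approach only at a high level. For illustration, the following is a minimal PyTorch sketch of the general idea of using human pose keypoints to guide an affinity map over object features. It is not the authors' PIAL-Net implementation: the module name PoseGuidedAffinity, the tensor shapes, and the sigmoid-gating fusion are assumptions made for this sketch; the actual architecture and code are released at github.com/lhc1224/PIAL-Net.

```python
# Hypothetical sketch of pose-guided interactive affinity estimation.
# NOT the authors' PIAL-Net; shapes, module names, and the fusion
# strategy are illustrative assumptions only.
import torch
import torch.nn as nn


class PoseGuidedAffinity(nn.Module):
    """Predicts a per-pixel affinity map over object features,
    conditioned on human pose keypoints that hint where contact
    between the body and the object is likely to occur."""

    def __init__(self, feat_dim=256, num_keypoints=17):
        super().__init__()
        # Embed 2-D keypoint coordinates (e.g., 17 COCO-style joints).
        self.kp_embed = nn.Sequential(
            nn.Linear(num_keypoints * 2, feat_dim),
            nn.ReLU(inplace=True),
            nn.Linear(feat_dim, feat_dim),
        )
        # Score per-pixel affinity from the pose-gated object features.
        self.affinity_head = nn.Conv2d(feat_dim, 1, kernel_size=1)

    def forward(self, obj_feat, keypoints):
        # obj_feat:   (B, C, H, W) backbone features of the object image
        # keypoints:  (B, K, 2) normalized human pose keypoints
        b, c, h, w = obj_feat.shape
        pose_vec = self.kp_embed(keypoints.flatten(1))            # (B, C)
        pose_map = pose_vec.view(b, c, 1, 1).expand(b, c, h, w)   # broadcast
        fused = obj_feat * torch.sigmoid(pose_map)                # gate by pose cue
        return torch.sigmoid(self.affinity_head(fused))           # (B, 1, H, W)


if __name__ == "__main__":
    # Toy usage with random tensors.
    model = PoseGuidedAffinity()
    feats = torch.randn(2, 256, 32, 32)
    kps = torch.rand(2, 17, 2)
    print(model(feats, kps).shape)  # torch.Size([2, 1, 32, 32])
```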
Pages: 6809-6819
Number of pages: 11
Related papers
50 records in total
  • [41] The Leverage of a Self Concept in Incremental Learning
    Samsonovich, Alexei V.
    PROCEEDINGS OF THE TWENTY-SIXTH ANNUAL CONFERENCE OF THE COGNITIVE SCIENCE SOCIETY, 2004, : 1627 - 1627
  • [42] Owing, but Growing - Learning to Love Leverage
    Freedman, W.
    CHEMICAL WEEK, 1995, 157 (09) : 28 - 30
  • [43] A framework to leverage and mature learning ecosystems
    Redmond W.D.
    Macfadyen L.P.
    International Journal of Emerging Technologies in Learning, 2020, 15 (05) : 75 - 99
  • [44] Professional and Personal Experiences as Leverage for Learning
    Kappert, Annette
    FRONTIERS IN EDUCATION, 2020, 5
  • [45] The New Thinking in Emotional User Experience: From Visual Metaphor to Interactive Affordance
    Chen, Xin
    ADVANCES IN USABILITY AND USER EXPERIENCE, 2020, 972 : 490 - 497
  • [46] Recent Advances of Deep Robotic Affordance Learning: A Reinforcement Learning Perspective
    Yang, Xintong
    Ji, Ze
    Wu, Jing
    Lai, Yu-Kun
    IEEE TRANSACTIONS ON COGNITIVE AND DEVELOPMENTAL SYSTEMS, 2023, 15 (03) : 1139 - 1149
  • [47] Affordance, learning opportunities, and the lesson plan pro forma
    Anderson, Jason
    ELT JOURNAL, 2015, 69 (03) : 228 - 238
  • [48] Learning Visual Object Categories for Robot Affordance Prediction
    Sun, Jie
    Moore, Joshua L.
    Bobick, Aaron
    Rehg, James M.
    INTERNATIONAL JOURNAL OF ROBOTICS RESEARCH, 2010, 29 (2-3): 174 - 197
  • [49] A Perceptual Memory System for Affordance Learning in Humanoid Robots
    Kammer, Marc
    Tscherepanow, Marko
    Schack, Thomas
    Nagai, Yukie
    ARTIFICIAL NEURAL NETWORKS AND MACHINE LEARNING - ICANN 2011, PT II, 2011, 6792 : 349 - 356
  • [50] A Bayesian Approach Towards Affordance Learning in Artificial Agents
    Stramandinoli, Francesca
    Tikhanoff, Vadim
    Pattacini, Ugo
    Nori, Francesco
    5TH INTERNATIONAL CONFERENCE ON DEVELOPMENT AND LEARNING AND ON EPIGENETIC ROBOTICS (ICDL-EPIROB), 2015, : 298 - 299