Building object models through interactive perception and foveated vision

Cited by: 2
Authors
Bevec, Robert [1]
Ude, Ales [1,2]
Affiliations
[1] Jozef Stefan Inst, Dept Automat Biocybernet & Robot, Humanoid & Cognit Robot Lab, Ljubljana, Slovenia
[2] ATR Computat Neurosci Labs, Dept Brain Robot Interface, Kyoto, Japan
Keywords
active perception; object recognition; autonomous learning; segmentation
DOI
10.1080/01691864.2015.1028999
Chinese Library Classification (CLC) number
TP24 [Robotics]
Subject classification codes
080202; 1405
Abstract
Autonomous robots that operate in unstructured environments must be able to seamlessly expand their knowledge base. To detect and manipulate previously unknown objects, a robot should be able to acquire new object knowledge even when no prior information about the objects or the environment is available. Additional information that is needed to identify new objects can come through motion cues induced by interactive manipulation. In the proposed system, changes in the scene are caused by a teacher manipulating the object to be learned. We propose to improve visual object learning and recognition by exploiting the advantages of foveated vision. The proposed approach first creates object hypotheses in peripheral stereo cameras. By directing its attention towards the identified object area in the foveal views, the robot can conduct a more thorough investigation of a smaller area of the scene, which is seen in higher resolution. We compare two methods for validating the hypotheses in the foveal views and experimentally show the advantages of foveated vision compared to stereo vision with a fixed field of view.
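The abstract outlines a three-stage pipeline: motion-induced hypothesis generation in the peripheral stereo views, redirection of gaze so the foveal cameras cover the hypothesised region, and validation of the hypothesis at the higher foveal resolution. The sketch below is a minimal, illustrative rendering of that flow in Python using only NumPy; the function names, the frame-differencing detector, the proportional gaze controller, and the texture-based validation stand-in are all assumptions made for illustration and are not the authors' implementation.
```python
# Minimal sketch of the pipeline described in the abstract (NumPy only).
# All function names, thresholds, and the validation criterion are
# illustrative assumptions, not the method from the paper.
import numpy as np

def motion_hypothesis(prev_frame: np.ndarray, curr_frame: np.ndarray,
                      threshold: float = 25.0):
    """Hypothesise an object region in the peripheral view from motion cues
    induced by the teacher's manipulation (simple frame differencing)."""
    diff = np.abs(curr_frame.astype(np.float32) - prev_frame.astype(np.float32))
    mask = diff.mean(axis=-1) > threshold            # changed pixels
    ys, xs = np.nonzero(mask)
    if xs.size == 0:
        return None                                  # no change detected
    return (xs.min(), ys.min(), xs.max(), ys.max())  # bounding box (x0, y0, x1, y1)

def gaze_command(bbox, image_size):
    """Turn the hypothesised region into a normalised (pan, tilt) offset that
    would centre the foveal cameras on it (proportional control, hypothetical)."""
    x0, y0, x1, y1 = bbox
    cx, cy = (x0 + x1) / 2.0, (y0 + y1) / 2.0
    w, h = image_size
    return ((cx - w / 2) / w, (cy - h / 2) / h)

def validate_in_fovea(foveal_patch: np.ndarray, min_texture: float = 10.0) -> bool:
    """Stand-in for hypothesis validation in the high-resolution foveal view;
    here the hypothesis is accepted if the patch has enough gradient energy."""
    gy, gx = np.gradient(foveal_patch.mean(axis=-1))
    return float(np.hypot(gx, gy).mean()) > min_texture
```
In the paper, the validation step in the foveal views is where the two compared methods differ; here it is reduced to a single placeholder check purely to show where that stage sits in the loop.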
Pages: 611-623
Page count: 13
Related papers
50 records in total
  • [1] Object Learning through Interactive Manipulation and Foveated Vision
    Bevec, Robert
    Ude, Ales
    [J]. 2013 13TH IEEE-RAS INTERNATIONAL CONFERENCE ON HUMANOID ROBOTS (HUMANOIDS), 2013, : 234 - 239
  • [2] Pushing and Grasping for Autonomous Learning of Object Models with Foveated Vision
    Bevec, Robert
    Ude, Ales
    [J]. PROCEEDINGS OF THE 17TH INTERNATIONAL CONFERENCE ON ADVANCED ROBOTICS (ICAR), 2015, : 237 - 243
  • [3] Object recognition on humanoids with foveated vision
    Ude, A
    Cheng, G
    [J]. 2004 4TH IEEE/RAS INTERNATIONAL CONFERENCE ON HUMANOID ROBOTS, VOLS 1 AND 2, PROCEEDINGS, 2004, : 885 - 898
  • [4] Influence of Foveated Vision on Video Quality Perception
    Vranjes, Mario
    Rimac-Drlje, Snjezana
    Nemcic, Ognjen
    [J]. PROCEEDINGS ELMAR-2009, 2009, : 29 - 32
  • [5] Ditto in the House: Building Articulation Models of Indoor Scenes through Interactive Perception
    Hsu, Cheng-Chun
    Jiang, Zhenyu
    Zhu, Yuke
    [J]. 2023 IEEE INTERNATIONAL CONFERENCE ON ROBOTICS AND AUTOMATION, ICRA, 2023, : 3933 - 3939
  • [6] A Foveated Stereo Vision System for Active Depth Perception
    Olaya, Emerson J.
    Torres-Mendez, Luz A.
    [J]. 2009 IEEE INTERNATIONAL WORKSHOP ON ROBOTIC AND SENSORS ENVIRONMENTS (ROSE 2009), 2009, : 110 - 115
  • [7] Autonomous Object Segmentation in Cluttered Environment Through Interactive Perception
    Wu, Rui
    Zhao, Dongfang
    Liu, Jiafeng
    Tang, Xianglong
    Huang, Qingcheng
    [J]. INTELLIGENCE SCIENCE AND BIG DATA ENGINEERING, ISCIDE 2017, 2017, 10559 : 346 - 355
  • [8] Object detection through search with a foveated visual system
    Akbas, Emre
    Eckstein, Miguel P.
    [J]. PLOS COMPUTATIONAL BIOLOGY, 2017, 13 (10)
  • [9] Redundant control of a humanoid robot head with foveated vision for object tracking
    Omrcen, Damir
    Ude, Ales
    [J]. 2010 IEEE INTERNATIONAL CONFERENCE ON ROBOTICS AND AUTOMATION (ICRA), 2010, : 4151 - 4156
  • [10] Building Interdisciplinary Research Models Through Interactive Education
    Hessels, Amanda J.
    Robinson, Brian
    O'Rourke, Michael
    Begg, Melissa D.
    Larson, Elaine L.
    [J]. CTS-CLINICAL AND TRANSLATIONAL SCIENCE, 2015, 8 (06): : 793 - 799