A New Principle toward Robust Matching in Human-like Stereovision

Cited by: 3
Authors
Xie, Ming [1 ]
Lai, Tingfeng [1 ]
Fang, Yuhui [1 ]
Affiliations
[1] Nanyang Technol Univ, Sch Mech & Aerosp Engn, Singapore 639798, Singapore
Keywords
visual signals; stereovision; image sampling; feature extraction; incremental learning; match-maker; cognition; recognition; possibility function
DOI
10.3390/biomimetics8030285
Chinese Library Classification (CLC)
T [Industrial Technology]
Discipline Classification Code
08
Abstract
Visual signals are the most important source of information for robots, vehicles, and machines seeking to achieve human-like intelligence. Human beings depend heavily on binocular vision to understand the dynamically changing world. Similarly, intelligent robots and machines must have the innate capability to perceive knowledge from visual signals. To date, one of the biggest challenges faced by intelligent robots and machines is matching in stereovision. In this paper, we present the details of a new principle for achieving a robust matching solution that integrates a top-down image sampling strategy, hybrid feature extraction, and a Restricted Coulomb Energy (RCE) neural network serving both as an incremental learner (i.e., cognition) and as a robust match-maker (i.e., recognition). A preliminary version of the proposed solution has been implemented and tested with data from the Maritime RobotX Challenge. The contribution of this paper is to attract more research interest and effort toward this new direction, which may eventually lead to the development of the robust solutions expected by future stereovision systems in intelligent robots, vehicles, and machines.
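The abstract's central algorithmic ingredient is the RCE neural network used for both incremental learning (cognition) and match-making (recognition). Since the paper's code is not reproduced in this record, the following is only a minimal sketch of a generic, textbook-style RCE classifier; the class name RCENetwork, the radius parameters, and the 1 - d/r possibility score are illustrative assumptions, not the authors' implementation.

import numpy as np

class RCENetwork:
    """Minimal Restricted Coulomb Energy (RCE) network: prototype cells with
    shrinking influence fields, trained one sample at a time (incremental)."""

    def __init__(self, r_max=1.0, eps=1e-6):
        self.r_max = r_max    # initial (maximum) influence-field radius (assumed value)
        self.eps = eps        # margin used when shrinking a conflicting field
        self.centers = []     # prototype feature vectors
        self.radii = []       # influence-field radius per prototype
        self.labels = []      # class label per prototype

    def learn(self, x, y):
        # Cognition step: present one labeled feature vector.
        x = np.asarray(x, dtype=float)
        covered = False
        for i, c in enumerate(self.centers):
            d = float(np.linalg.norm(x - c))
            if d < self.radii[i]:
                if self.labels[i] == y:
                    covered = True                               # correctly covered
                else:
                    self.radii[i] = max(d - self.eps, self.eps)  # shrink rival field
        if not covered:
            self.centers.append(x)                               # commit a new prototype
            self.radii.append(self.r_max)
            self.labels.append(y)

    def recognize(self, x):
        # Recognition step: return (label, possibility); (None, 0.0) if unidentified.
        x = np.asarray(x, dtype=float)
        scores = {}
        for c, r, y in zip(self.centers, self.radii, self.labels):
            d = float(np.linalg.norm(x - c))
            if d < r:
                # Illustrative possibility value: closeness relative to field radius.
                scores[y] = max(scores.get(y, 0.0), 1.0 - d / r)
        if not scores:
            return None, 0.0
        best = max(scores, key=scores.get)
        return best, scores[best]

# Hypothetical usage: treat feature vectors of a known correspondence as one class,
# then query with a candidate feature vector from the other image.
net = RCENetwork(r_max=0.5)
net.learn([0.10, 0.20, 0.30], "landmark_A")
net.learn([0.80, 0.75, 0.90], "landmark_B")
print(net.recognize([0.12, 0.21, 0.29]))   # -> ('landmark_A', possibility)

The appeal of this prototype-based scheme for stereovision matching is that it learns new correspondences incrementally without retraining, and it can explicitly report "no match" when a query falls outside every influence field, which is how the abstract's robustness claim would plausibly be realized.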
Pages: 20