ART-based fusion of multi-modal perception for robots

Cited by: 7
Authors
Berghoefer, Elmar [1 ]
Schulze, Denis [3 ,4 ]
Rauch, Christian [2 ]
Tscherepanow, Marko [4 ]
Koehler, Tim [1 ]
Wachsmuth, Sven [3 ,4 ]
Affiliations
[1] DFKI GmbH, Robot Innovat Ctr, D-28359 Bremen, Germany
[2] Univ Bremen, Robot Res Grp, D-28359 Bremen, Germany
[3] CITEC, Ctr Excellence, D-33615 Bielefeld, Germany
[4] Univ Bielefeld, Fac Technol, D-33615 Bielefeld, Germany
Keywords
Sensor data fusion; Incremental learning; Adaptive Resonance Theory; ART; Robotic systems; ARTMAP; Neural network; Fuzzy ARTMAP; Architecture
DOI
10.1016/j.neucom.2012.08.035
Chinese Library Classification (CLC)
TP18 [Artificial intelligence theory]
Discipline codes
081104; 0812; 0835; 1405
Abstract
Robotic application scenarios in uncontrolled environments pose high demands on mobile robots. This is especially true when human-robot or robot-robot interaction is involved, since potential interaction partners need to be identified. To tackle such challenges, robots make use of different sensory systems. In many cases, these robots have to deal with erroneous data from different sensory systems, which are often processed separately. A possible strategy to improve identification results is to combine the processing results of complementary sensors. However, their relation is often hard-coded and difficult to learn incrementally when new kinds of objects or events occur. In this paper, we present a new fusion strategy, the Simplified Fusion ARTMAP (SiFuAM), which is very flexible and can therefore be easily adapted to new domains or sensor configurations. As our approach is based on Adaptive Resonance Theory (ART), it is inherently capable of incremental on-line learning. We show its applicability in different robotic scenarios and platforms and give an overview of its performance. (C) 2012 Elsevier B.V. All rights reserved.
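Since SiFuAM builds on ART's category learning, a plain Fuzzy ART module illustrates the incremental on-line learning the abstract refers to: inputs that resonate with an existing category refine it, and unmatched inputs create a new category on the fly, with no retraining. This is a minimal sketch for illustration only; the class and parameter names are assumptions, not taken from the paper, and SiFuAM's multi-channel sensor-fusion layer is not shown.

```python
class FuzzyART:
    """Minimal Fuzzy ART sketch (illustrative; not the paper's SiFuAM)."""

    def __init__(self, vigilance=0.75, alpha=0.001, beta=1.0):
        self.rho = vigilance   # vigilance: minimum match for resonance
        self.alpha = alpha     # choice parameter (small positive constant)
        self.beta = beta       # learning rate (1.0 = "fast learning")
        self.w = []            # one weight vector per learned category

    @staticmethod
    def _complement_code(x):
        # Standard ART preprocessing: append 1 - x so all coded
        # inputs have the same L1 norm.
        return list(x) + [1.0 - v for v in x]

    def learn(self, x):
        """Present one sample (features scaled to [0, 1]); return the
        index of the resonating category. Unmatched inputs create a new
        category, which is what makes the learning incremental/on-line."""
        i = self._complement_code(x)
        norm_i = sum(i)

        # Rank existing categories by the choice function
        # T_j = |i ^ w_j| / (alpha + |w_j|), where ^ is element-wise min.
        def choice(j):
            overlap = sum(min(a, b) for a, b in zip(i, self.w[j]))
            return overlap / (self.alpha + sum(self.w[j]))

        for j in sorted(range(len(self.w)), key=choice, reverse=True):
            overlap = [min(a, b) for a, b in zip(i, self.w[j])]
            if sum(overlap) / norm_i >= self.rho:   # vigilance test passed
                # Resonance: move the category prototype toward the input.
                self.w[j] = [self.beta * o + (1 - self.beta) * w
                             for o, w in zip(overlap, self.w[j])]
                return j

        self.w.append(list(i))                      # no resonance: new category
        return len(self.w) - 1
```

With a high vigilance, two nearby inputs fall into one category while a distant input opens a second one; lowering the vigilance makes the categories coarser. The fusion question the paper addresses is how several such channels (one per sensor) are coupled, which this single-channel sketch deliberately omits.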
Pages: 11-22 (12 pages)
Related papers
50 total
  • [1] ART-Based Fusion of Multi-modal Information for Mobile Robots
    Berghoefer, Elmar
    Schulze, Denis
    Tscherepanow, Marko
    Wachsmuth, Sven
    [J]. ENGINEERING APPLICATIONS OF NEURAL NETWORKS, PT I, 2011, 363 : 1 - 10
  • [2] Multi-modal Perception Fusion Method Based on Cross Attention
    Zhang, Bing-Li
    Pan, Ze-Hao
    Jiang, Jun-Zhao
    Zhang, Cheng-Biao
    Wang, Yi-Xin
    Yang, Cheng-Lei
    [J]. Zhongguo Gonglu Xuebao/China Journal of Highway and Transport, 2024, 37 (03): : 181 - 193
  • [3] Multi-modal Perception
    Kondo, T.
    [J]. Denshi Joho Tsushin Gakkai Shi/Journal of the Institute of Electronics, Information and Communications Engineers, 78 (12):
  • [4] Multi-modal perception
    [J]. BT Technol J, 1 (35-46):
  • [5] Multi-modal perception
    Hollier, MP
    Rimell, AN
    Hands, DS
    Voelcker, RM
    [J]. BT TECHNOLOGY JOURNAL, 1999, 17 (01) : 35 - 46
  • [6] Multi-Modal Pedestrian Detection Algorithm Based on Illumination Perception Weight Fusion
    Liu Keqi
    Dong Mianmian
    Gao Hui
    Lu Zhigang
    Guo Baoyi
    Pang Min
    [J]. LASER & OPTOELECTRONICS PROGRESS, 2023, 60 (16)
  • [7] Multi-modal Fusion
    Liu, Huaping
    Hussain, Amir
    Wang, Shuliang
    [J]. INFORMATION SCIENCES, 2018, 432 : 462 - 462
  • [8] Research on multi-modal environment perception and interaction technology of wearable robots
    Zhang, Fusheng
    Xu, Benlian
    Xu, Jiazhong
    Jiang, Anbo
    Ran, Qi
    [J]. BASIC & CLINICAL PHARMACOLOGY & TOXICOLOGY, 2019, 125 : 39 - 40
  • [9] LiCaNet: Further Enhancement of Joint Perception and Motion Prediction Based on Multi-Modal Fusion
    Khalil, Yasser H.
    Mouftah, Hussein T.
    [J]. IEEE OPEN JOURNAL OF INTELLIGENT TRANSPORTATION SYSTEMS, 2022, 3 : 222 - 235
  • [10] Human In-Hand Motion Recognition Based on Multi-Modal Perception Information Fusion
    Xue, Yaxu
    Yu, Yadong
    Yin, Kaiyang
    Li, Pengfei
    Xie, Shuangxi
    Ju, Zhaojie
    [J]. IEEE SENSORS JOURNAL, 2022, 22 (07) : 6793 - 6805