Multi-modal broad learning for material recognition

Cited: 3
Authors
Wang, Zhaoxin [1 ,2 ]
Liu, Huaping [1 ,2 ]
Xu, Xinying [1 ,2 ]
Sun, Fuchun [1 ,2 ]
Affiliations
[1] Tsinghua Univ, Dept Comp Sci & Technol, Beijing, Peoples R China
[2] Beijing Natl Res Ctr Informat Sci & Technol, State Key Lab Intelligent Technol & Syst, Beijing, Peoples R China
Keywords
Human robot interaction; Learning systems
DOI
10.1049/ccs2.12004
Chinese Library Classification
TP18 [Artificial intelligence theory]
Discipline codes
081104; 0812; 0835; 1405
Abstract
Material recognition plays an important role in the interaction between robots and the external environment. For example, household service robots that take over housework must interact with everyday objects and perceive their material properties. Images provide rich visual information about objects, but vision alone is often insufficient when objects are not visually distinct. Tactile signals, in contrast, capture characteristics such as texture, roughness, softness, and friction, offering another crucial channel for perception. How to integrate such multi-modal information effectively remains an open problem. Therefore, a multi-modal material recognition framework, CFBRL-KCCA, is proposed for target recognition tasks. Preliminary features of each modality are extracted by cascaded broad learning and then fused with kernel canonical correlation learning, which accounts for the differences among heterogeneous modalities. Finally, the framework is evaluated on an open dataset of household objects. The results demonstrate that the proposed fusion algorithm provides an effective strategy for material recognition.
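The abstract's pipeline (per-modality broad-learning feature extraction followed by kernel canonical correlation fusion) can be illustrated with a minimal sketch. The code below is an assumption-laden toy example, not the authors' CFBRL-KCCA implementation: the random feature/enhancement mapping, RBF kernels, regularization values, and the ridge classifier are all illustrative choices.

```python
# Toy visual-tactile fusion sketch: broad-learning-style random features per
# modality, regularized kernel CCA (KCCA) for a correlated joint subspace,
# then a simple linear classifier. Illustrative only; not the paper's exact method.
import numpy as np
from scipy.linalg import eigh

rng = np.random.default_rng(0)

def broad_features(X, n_feature=64, n_enhance=64):
    """Simplified broad-learning mapping: random feature nodes plus
    tanh enhancement nodes, concatenated (no cascading here)."""
    W_f = rng.standard_normal((X.shape[1], n_feature))
    Z = X @ W_f                                  # feature nodes
    W_e = rng.standard_normal((n_feature, n_enhance))
    H = np.tanh(Z @ W_e)                         # enhancement nodes
    return np.hstack([Z, H])

def rbf_kernel(A, B, gamma=0.1):
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d2)

def kcca(Kx, Ky, n_components=10, reg=1e-3):
    """Regularized KCCA via a generalized symmetric eigenproblem."""
    n = Kx.shape[0]
    J = np.eye(n) - np.ones((n, n)) / n          # centering matrix
    Kx, Ky = J @ Kx @ J, J @ Ky @ J
    Z = np.zeros((n, n))
    A = np.block([[Z, Kx @ Ky], [Ky @ Kx, Z]])
    B = np.block([[Kx @ Kx + reg * np.eye(n), Z],
                  [Z, Ky @ Ky + reg * np.eye(n)]])
    vals, vecs = eigh(A, B)                      # eigenvalues in ascending order
    top = vecs[:, -n_components:]                # most correlated directions
    return top[:n], top[n:]                      # dual weights for each modality

# toy data standing in for visual and tactile measurements of the same objects
n, y = 120, rng.integers(0, 4, 120)
X_vis = rng.standard_normal((n, 128)) + y[:, None] * 0.5
X_tac = rng.standard_normal((n, 32)) + y[:, None] * 0.5

Fv, Ft = broad_features(X_vis), broad_features(X_tac)
Kv = rbf_kernel(Fv, Fv, gamma=1.0 / Fv.shape[1])
Kt = rbf_kernel(Ft, Ft, gamma=1.0 / Ft.shape[1])
alpha, beta = kcca(Kv, Kt)
fused = np.hstack([Kv @ alpha, Kt @ beta])       # correlated joint representation

# one-vs-rest ridge classifier on the fused features (illustrative)
Y = np.eye(4)[y]
W = np.linalg.solve(fused.T @ fused + 1e-2 * np.eye(fused.shape[1]), fused.T @ Y)
acc = (np.argmax(fused @ W, axis=1) == y).mean()
print(f"training accuracy on toy data: {acc:.2f}")
```

The design point the sketch reflects is the one stated in the abstract: each modality is first mapped independently (cheap, randomized broad-learning features), and cross-modal differences are handled only at the fusion stage, where KCCA finds directions in which the two kernel spaces are maximally correlated.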
Pages: 123-130
Page count: 8
Related papers
50 records in total
  • [1] MULTI-MODAL LEARNING FOR GESTURE RECOGNITION
    Cao, Congqi
    Zhang, Yifan
    Lu, Hanqing
    2015 IEEE INTERNATIONAL CONFERENCE ON MULTIMEDIA & EXPO (ICME), 2015
  • [2] Surface Material Recognition Using Active Multi-modal Extreme Learning Machine
    Liu, Huaping
    Fang, Jing
    Xu, Xinying
    Sun, Fuchun
    COGNITIVE COMPUTATION, 2018, 10 (06) : 937 - 950
  • [3] Surface Material Recognition Using Active Multi-modal Extreme Learning Machine
    Huaping Liu
    Jing Fang
    Xinying Xu
    Fuchun Sun
    Cognitive Computation, 2018, 10 : 937 - 950
  • [4] Multi-modal deep learning for landform recognition
    Du, Lin
    You, Xiong
    Li, Ke
    Meng, Liqiu
    Cheng, Gong
    Xiong, Liyang
    Wang, Guangxia
    ISPRS JOURNAL OF PHOTOGRAMMETRY AND REMOTE SENSING, 2019, 158 : 63 - 75
  • [5] A Multi-Modal Deep Learning Approach for Emotion Recognition
    Shahzad, H. M.
    Bhatti, Sohail Masood
    Jaffar, Arfan
    Rashid, Muhammad
    INTELLIGENT AUTOMATION AND SOFT COMPUTING, 2023, 36 (02): : 1561 - 1570
  • [6] Multi-modal anchor adaptation learning for multi-modal summarization
    Chen, Zhongfeng
    Lu, Zhenyu
    Rong, Huan
    Zhao, Chuanjun
    Xu, Fan
    NEUROCOMPUTING, 2024, 570
  • [7] Multi-Modal Face Recognition
    Shen, Haihong
    Ma, Liqun
    Zhang, Qishan
    2ND IEEE INTERNATIONAL CONFERENCE ON ADVANCED COMPUTER CONTROL (ICACC 2010), VOL. 5, 2010, : 612 - 616
  • [8] Multi-Modal Face Recognition
    Shen, Haihong
    Ma, Liqun
    Zhang, Qishan
    2010 8TH WORLD CONGRESS ON INTELLIGENT CONTROL AND AUTOMATION (WCICA), 2010, : 720 - 723
  • [9] Multi-Modal Multi-Instance Learning for Retinal Disease Recognition
    Li, Xirong
    Zhou, Yang
    Wang, Jie
    Lin, Hailan
    Zhao, Jianchun
    Ding, Dayong
    Yu, Weihong
    Chen, Youxin
    PROCEEDINGS OF THE 29TH ACM INTERNATIONAL CONFERENCE ON MULTIMEDIA, MM 2021, 2021, : 2474 - 2482
  • [10] InstaIndoor and multi-modal deep learning for indoor scene recognition
    Glavan, Andreea
    Talavera, Estefania
    NEURAL COMPUTING & APPLICATIONS, 2022, 34 (09): : 6861 - 6877