Joint long and short span self-attention network for multi-view classification

Cited by: 1
Authors
Chen, Zhikui [1 ]
Lou, Kai [1 ]
Liu, Zhenjiao [1 ]
Li, Yue [1 ]
Luo, Yiming [1 ]
Zhao, Liang [1 ]
Affiliations
[1] Dalian Univ Technol, Sch Software, Dalian 116620, Peoples R China
Keywords
Multi-view classification; Self-attention mechanism; Multi-view fusion; DIMENSIONALITY; MODEL;
D O I
10.1016/j.eswa.2023.121152
CLC classification number
TP18 [Theory of artificial intelligence];
Subject classification numbers
081104; 0812; 0835; 1405;
Abstract
Multi-view classification aims to efficiently utilize information from different views to improve classification performance. In recent research, many effective multi-view learning methods have been proposed for multi-view data analysis. However, most existing methods consider only the correlations between views and ignore the potential correlations between samples. Normally, the views of samples belonging to the same category should share more consistent information, while those belonging to different categories should exhibit more distinctions. We therefore argue that the correlations and distinctions between the views of different samples also contribute to constructing feature representations that are more conducive to classification. To build an end-to-end general multi-view classification framework that better utilizes sample information to obtain more reasonable feature representations, we propose a novel joint long and short span self-attention network (JLSSAN). We design two different self-attention spans that focus on different information: they enable each feature vector to be iteratively updated based on its attention to other views and other samples, which provides better integration of information from different views and different samples. In addition, we adopt a novel weight-based loss fusion strategy, which helps the model learn more reasonable self-attention maps between views. Our method outperforms state-of-the-art methods by more than 3% in accuracy on multiple benchmarks, demonstrating its effectiveness.
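The two attention spans described in the abstract can be illustrated with a minimal NumPy sketch: a "short span" attends across the views of one sample, while a "long span" attends across samples for each view. The function names, tensor shapes, and the simple averaging of the two spans are illustrative assumptions, not the paper's actual JLSSAN implementation (which updates features iteratively with learned projections).

```python
import numpy as np

def softmax(x, axis=-1):
    # Numerically stable softmax along the given axis.
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def self_attention(X):
    # Scaled dot-product self-attention over the rows of X with shape (n, d).
    # For simplicity, queries, keys, and values are all X (no learned weights).
    d = X.shape[-1]
    scores = X @ X.T / np.sqrt(d)          # (n, n) attention map
    return softmax(scores, axis=-1) @ X    # each row is a weighted mix of rows

def joint_span_update(features):
    # features: (num_samples, num_views, d).
    # Short span: attention across the views of each individual sample.
    # Long span: attention across samples, separately for each view.
    # Averaging the two spans is a hypothetical fusion rule for illustration.
    S, V, _ = features.shape
    short = np.stack([self_attention(features[i]) for i in range(S)])
    long_ = np.stack([self_attention(features[:, v]) for v in range(V)], axis=1)
    return 0.5 * (short + long_)
```

Because the toy attention uses no learned projections, identical inputs pass through unchanged (each output row is a uniform average of identical rows), which makes the mechanism easy to sanity-check.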
Pages: 10
Related papers
50 items in total
  • [1] Multi-view self-attention networks
    Xu, Mingzhou
    Yang, Baosong
    Wong, Derek F.
    Chao, Lidia S.
    [J]. KNOWLEDGE-BASED SYSTEMS, 2022, 241
  • [2] Improved Multi-Head Self-Attention Classification Network for Multi-View Fetal Echocardiography Recognition
    Zhang, Yingying
    Zhu, Haogang
    Wang, Yan
    Wang, Jingyi
    He, Yihua
    [J]. 2023 45TH ANNUAL INTERNATIONAL CONFERENCE OF THE IEEE ENGINEERING IN MEDICINE & BIOLOGY SOCIETY, EMBC, 2023,
  • [3] MULTI-VIEW SELF-ATTENTION BASED TRANSFORMER FOR SPEAKER RECOGNITION
    Wang, Rui
    Ao, Junyi
    Zhou, Long
    Liu, Shujie
    Wei, Zhihua
    Ko, Tom
    Li, Qing
    Zhang, Yu
    [J]. 2022 IEEE INTERNATIONAL CONFERENCE ON ACOUSTICS, SPEECH AND SIGNAL PROCESSING (ICASSP), 2022, : 6732 - 6736
  • [4] Multi-view 3D Reconstruction with Self-attention
    Qian, Qiuting
    [J]. 2021 14TH INTERNATIONAL CONFERENCE ON ADVANCED COMPUTER THEORY AND ENGINEERING (ICACTE 2021), 2021, : 20 - 26
  • [5] Multi-view Instance Attention Fusion Network for classification
    Li, Jinxing
    Zhou, Chuhao
    Ji, Xiaoqiang
    Li, Mu
    Lu, Guangming
    Xu, Yong
    Zhang, David
    [J]. INFORMATION FUSION, 2024, 101
  • [6] MHSAN: Multi-view hierarchical self-attention network for 3D shape recognition
    Cao, Jiangzhong
    Yu, Lianggeng
    Ling, Bingo Wing-Kuen
    Yao, Zijie
    Dai, Qingyun
    [J]. PATTERN RECOGNITION, 2024, 150
  • [7] Multi-View Group Recommendation Integrating Self-Attention and Graph Convolution
    Wang, Yonggui
    Wang, Xinru
    [J]. Computer Engineering and Applications, 2024, 60 (08) : 287 - 295
  • [8] Efficient Multi-View Graph Convolutional Network with Self-Attention for Multi-Class Motor Imagery Decoding
    Tan, Xiyue
    Wang, Dan
    Xu, Meng
    Chen, Jiaming
    Wu, Shuhan
    [J]. BIOENGINEERING-BASEL, 2024, 11 (09):
  • [9] Incomplete multi-view clustering via self-attention networks and feature reconstruction
    Zhang, Yong
    Jiang, Li
    Liu, Da
    Liu, Wenzhe
    [J]. APPLIED INTELLIGENCE, 2024, 54 (04) : 2998 - 3016
  • [10] Self-attention Multi-view Representation Learning with Diversity-promoting Complementarity
    Liu, Jian-wei
    Ding, Xi-hao
    Lu, Run-kun
    Luo, Xionglin
    [J]. PROCEEDINGS OF THE 32ND 2020 CHINESE CONTROL AND DECISION CONFERENCE (CCDC 2020), 2020, : 3972 - 3978