Asymmetric Gaussian Process multi-view learning for visual classification

Cited by: 19
Authors
Li, Jinxing [1 ,2 ]
Li, Zhaoqun [1 ]
Lu, Guangming [4 ]
Xu, Yong [4 ]
Zhang, Bob [5 ]
Zhang, David [1 ,3 ]
Affiliations
[1] Chinese Univ Hong Kong Shenzhen, Shenzhen, Peoples R China
[2] Univ Sci & Technol China, Hefei, Peoples R China
[3] Shenzhen Inst Artificial Intelligence & Robot Soc, Shenzhen, Peoples R China
[4] Harbin Inst Technol, Dept Comp Sci, Shenzhen, Peoples R China
[5] Univ Macau, Dept Comp & Informat Sci, Taipa, Macao, Peoples R China
Funding
National Natural Science Foundation of China; China Postdoctoral Science Foundation;
Keywords
Multi-view; Gaussian Process; View-shared; View-specific; Classification; MAXIMUM-ENTROPY DISCRIMINATION; LATENT VARIABLE MODEL;
DOI
10.1016/j.inffus.2020.08.020
Chinese Library Classification (CLC)
TP18 [Artificial Intelligence Theory];
Discipline Classification Codes
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
Multi-view learning methods attain outstanding performance in various fields compared with single-view strategies. In this paper, the Gaussian Process Latent Variable Model (GPLVM), a generative and non-parametric model, is exploited to represent multiple views in a common subspace. Specifically, a latent variable shared across the views is assumed to generate the observations of each view through distinct Gaussian Process mappings. However, this assumption is purely generative, making it intractable to directly estimate the fused latent variable for a new sample at test time. To tackle this problem, an additional projection from the observed data back to the shared variable is learned simultaneously, using view-shared and view-specific kernel parameters under the Gaussian Process framework. Furthermore, to perform classification, label information is also modeled as being generated from the latent variable through a Gaussian Process transformation. Extensive experimental results on multi-view datasets demonstrate the superiority and effectiveness of our model in comparison with state-of-the-art algorithms.
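To make the generative idea in the abstract concrete, the following minimal Python sketch (not the authors' implementation; the function names, fixed hyperparameters, and synthetic data are illustrative assumptions) learns a single latent representation shared by two views by minimizing the summed Gaussian Process negative log marginal likelihood of each view given the shared latent points, which is the core of a multi-view GPLVM.

import numpy as np
from scipy.optimize import minimize

def rbf_kernel(X, lengthscale, variance):
    # Squared-exponential kernel on the latent points X (N x Q).
    sq = np.sum(X**2, axis=1)[:, None] + np.sum(X**2, axis=1)[None, :] - 2.0 * X @ X.T
    return variance * np.exp(-0.5 * sq / lengthscale**2)

def view_nll(X, Y, lengthscale, variance, noise):
    # Negative log marginal likelihood of one observed view Y (N x D) under a
    # GP mapping from the shared latent X to Y, with independent output dims.
    N, D = Y.shape
    K = rbf_kernel(X, lengthscale, variance) + noise * np.eye(N)
    L = np.linalg.cholesky(K)
    alpha = np.linalg.solve(L.T, np.linalg.solve(L, Y))    # K^{-1} Y
    return (0.5 * np.sum(Y * alpha)
            + D * np.sum(np.log(np.diag(L)))                # (D/2) log|K|
            + 0.5 * N * D * np.log(2.0 * np.pi))

def fit_shared_gplvm(views, latent_dim=2, n_iter=200, seed=0):
    # views: list of (N, D_v) arrays describing the same N samples.
    N = views[0].shape[0]
    X0 = 0.1 * np.random.RandomState(seed).randn(N, latent_dim)

    def objective(x_flat):
        X = x_flat.reshape(N, latent_dim)
        # Fixed, hand-picked hyperparameters keep the sketch short; the paper
        # additionally learns view-shared and view-specific kernel parameters.
        nll = sum(view_nll(X, Y, lengthscale=1.0, variance=1.0, noise=0.1) for Y in views)
        return nll + 0.5 * np.sum(X**2)   # standard-normal prior on the latent points

    res = minimize(objective, X0.ravel(), method="L-BFGS-B", options={"maxiter": n_iter})
    return res.x.reshape(N, latent_dim)

if __name__ == "__main__":
    rng = np.random.RandomState(1)
    Z = rng.randn(30, 2)                                            # ground-truth shared factor
    view1 = Z @ rng.randn(2, 5) + 0.05 * rng.randn(30, 5)           # linear view
    view2 = np.tanh(Z @ rng.randn(2, 8)) + 0.05 * rng.randn(30, 8)  # nonlinear view
    X_shared = fit_shared_gplvm([view1, view2])
    print("learned shared latent representation:", X_shared.shape)

In the full model described by the abstract, the kernel hyperparameters are split into view-shared and view-specific parts, a back-projection from the observed views to the latent variable is learned for test-time inference, and the class labels are treated as an additional output generated from the same latent variable; the sketch omits these components for brevity.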
Pages: 108-118
Number of pages: 11