Large-margin multi-view Gaussian process

Cited by: 0
Authors
Chang Xu
Dacheng Tao
Yangxi Li
Chao Xu
Affiliations
[1] Peking University, Key Laboratory of Machine Perception (Ministry of Education)
[2] University of Technology, Sydney, Centre for Quantum Computation & Intelligent Systems, Faculty of Engineering and Information Technology
[3] National Computer Network Emergency Response Technical Team/Coordination Center of China (CNCERT/CC)
Source
Multimedia Systems | 2015, Vol. 21
Keywords
Multi-view learning; Large margin; Gaussian process;
DOI
Not available
Abstract
In image classification, the goal is to decide whether an image belongs to a given category. Multiple features are usually employed to describe image content more comprehensively and thereby improve classification accuracy. However, this also raises new problems: how to effectively combine multiple features, and how to handle high-dimensional features from multiple views given a small training set. In this paper, we integrate the large-margin idea into the Gaussian process to discover the latent subspace shared by multiple features. Our approach therefore inherits the advantages of both the Gaussian process and the large-margin principle. The Gaussian process provides a probabilistic explanation for embedding multiple features into a shared low-dimensional subspace, which derives strong discriminative ability from the large-margin principle, so the subsequent classification task can be accomplished effectively. Finally, we demonstrate the advantages of the proposed algorithm on real-world image datasets for discovering a discriminative latent subspace and improving classification performance.
Pages: 147-157
Page count: 10
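
The abstract describes learning a low-dimensional subspace shared by several feature views, with a large-margin (hinge-loss) term keeping that subspace discriminative. The sketch below is a deliberately simplified illustration of that idea, not the authors' method: it replaces the Gaussian process mapping with a linear, probabilistic-PCA-style reconstruction for each view, and the function name, variable names, hyperparameters, and the plain gradient-descent routine are all illustrative assumptions.

import numpy as np

def fit_large_margin_shared_subspace(X1, X2, y, q=2, lam=1.0, C=1.0,
                                     lr=3e-3, n_iter=2000, seed=0):
    """Learn a shared latent subspace Z for two feature views plus a
    large-margin (hinge-loss) linear classifier acting on Z.

    Illustrative sketch only: a linear, probabilistic-PCA-style stand-in
    for the Gaussian process mapping described in the paper; names and
    hyperparameters here are assumptions, not taken from the paper.

    X1: (n, d1) features from view 1
    X2: (n, d2) features from view 2
    y:  (n,) labels in {-1, +1}
    """
    rng = np.random.default_rng(seed)
    y = np.asarray(y, dtype=float)
    n = X1.shape[0]
    Z = rng.normal(scale=0.1, size=(n, q))              # shared latent coordinates
    W1 = rng.normal(scale=0.1, size=(q, X1.shape[1]))   # view-1 loadings
    W2 = rng.normal(scale=0.1, size=(q, X2.shape[1]))   # view-2 loadings
    w = np.zeros(q)                                     # large-margin classifier
    b = 0.0
    for _ in range(n_iter):
        # Reconstruction residuals for each view (linear stand-in for the GP map)
        R1 = Z @ W1 - X1
        R2 = Z @ W2 - X2
        # Hinge loss: samples with y * (z . w + b) < 1 violate the margin
        margins = y * (Z @ w + b)
        active = (margins < 1).astype(float)
        # Gradients of: 0.5*||Z W1 - X1||^2 + 0.5*||Z W2 - X2||^2
        #               + 0.5*lam*||Z||^2 + 0.5*||w||^2 + C*hinge(Z, w, b)
        gZ = R1 @ W1.T + R2 @ W2.T + lam * Z - C * (active * y)[:, None] * w[None, :]
        gW1 = Z.T @ R1
        gW2 = Z.T @ R2
        gw = w - C * ((active * y)[:, None] * Z).sum(axis=0)
        gb = -C * (active * y).sum()
        # Plain gradient-descent updates
        Z -= lr * gZ
        W1 -= lr * gW1
        W2 -= lr * gW2
        w -= lr * gw
        b -= lr * gb
    return Z, W1, W2, w, b

# Tiny synthetic usage example (hypothetical data: two views, binary labels)
rng = np.random.default_rng(1)
y = np.where(rng.random(60) < 0.5, -1.0, 1.0)
X1 = 0.5 * y[:, None] + rng.normal(size=(60, 20))
X2 = 0.5 * y[:, None] + rng.normal(size=(60, 30))
Z, W1, W2, w, b = fit_large_margin_shared_subspace(X1, X2, y)
print("training accuracy:", np.mean(np.sign(Z @ w + b) == y))

In this simplified setting a training image is classified by the sign of z . w + b in the learned subspace; in the full Gaussian-process formulation of the paper, inferring the latent coordinates of a new image is itself a probabilistic inference step rather than a fixed linear projection.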