Epoch-Evolving Gaussian Process Guided Learning for Classification

Cited by: 0
Authors
Cui, Jiabao [1 ]
Li, Xuewei [1 ]
Zhao, Hanbin [1 ]
Wang, Hui [1 ]
Li, Bin [2 ]
Li, Xi [3 ,4 ,5 ,6 ]
Affiliations
[1] Zhejiang Univ, Coll Comp Sci & Technol, Hangzhou 310027, Peoples R China
[2] Zhejiang Univ, Coll Informat Sci & Elect Engn, Hangzhou 310027, Peoples R China
[3] Zhejiang Univ, Coll Comp Sci & Technol, Hangzhou 311121, Peoples R China
[4] Shanghai AI Lab, Shanghai 200000, Peoples R China
[5] Zhejiang Univ, Shanghai Inst Adv Study, Shanghai 200000, Peoples R China
[6] Zhejiang Singapore Innovat & AI Joint Res Lab, Hangzhou 311121, Peoples R China
Funding
National Natural Science Foundation of China;
Keywords
Computational modeling; Pipelines; Deep learning; Context modeling; Predictive models; Feature extraction; Data models; Gaussian process (GP); global distribution-aware learning; non-parametric modeling; top-down strategy; PROCESS REGRESSION; NEURAL-NETWORKS;
DOI
10.1109/TNNLS.2022.3174207
CLC Number
TP18 [Artificial Intelligence Theory];
Subject Classification Codes
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
Conventional mini-batch gradient descent algorithms are usually trapped in local batch-level distribution information, resulting in a ``zig-zag'' effect in the learning process. To characterize the correlation between the batch-level distribution and the global data distribution, we propose a novel learning scheme called epoch-evolving Gaussian process guided learning (GPGL), which encodes the global data distribution information in a non-parametric way. Based on a set of class-aware anchor samples, our GP model estimates the class distribution for each sample in a mini-batch through label propagation from the anchor samples to the batch samples. This class distribution, also called the context label, complements the ground-truth one-hot label. Such a class distribution structure is smooth and usually carries a rich body of contextual information that can speed up convergence. Guided by both the context label and the ground-truth label, the GPGL scheme offers more efficient optimization by updating the model parameters with a triangle consistency loss. Furthermore, our GPGL scheme generalizes naturally to current deep models, outperforming state-of-the-art optimization methods on six benchmark datasets.
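The anchor-to-batch label propagation described in the abstract can be sketched as a GP posterior mean conditioned on one-hot anchor labels, normalized into a per-sample class distribution (the "context label"). This is a minimal illustrative sketch, not the paper's exact formulation: the RBF kernel, the noise level, and all function and variable names below are assumptions.

```python
import numpy as np

def rbf_kernel(X, Y, gamma=1.0):
    # Pairwise squared Euclidean distances mapped through an RBF.
    d2 = ((X[:, None, :] - Y[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d2)

def context_labels(batch_feats, anchor_feats, anchor_onehot, gamma=1.0, noise=1e-3):
    """GP-style label propagation: posterior mean of class scores at the batch
    samples, conditioned on the anchors' one-hot labels, then normalized so
    each row is a class distribution (the context label)."""
    K_aa = rbf_kernel(anchor_feats, anchor_feats, gamma)
    K_ba = rbf_kernel(batch_feats, anchor_feats, gamma)
    # Posterior mean: K_ba (K_aa + noise * I)^{-1} Y
    mean = K_ba @ np.linalg.solve(
        K_aa + noise * np.eye(len(anchor_feats)), anchor_onehot
    )
    mean = np.clip(mean, 1e-8, None)  # guard against tiny negative scores
    return mean / mean.sum(axis=1, keepdims=True)

# Toy usage: two well-separated clusters of anchors, one batch sample near each.
rng = np.random.default_rng(0)
anchors = np.vstack([rng.normal(0, 0.1, (5, 2)), rng.normal(3, 0.1, (5, 2))])
onehot = np.repeat(np.eye(2), 5, axis=0)   # 5 anchors per class
batch = np.array([[0.0, 0.0], [3.0, 3.0]])
ctx = context_labels(batch, anchors, onehot)
```

In the GPGL scheme these context labels would then enter the loss alongside the one-hot targets; here they simply assign each batch sample a soft distribution dominated by its nearest anchor class.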
Pages: 326-337
Page count: 12
Related Papers
(50 records in total)
  • [1] LEARNING FILTERS IN GAUSSIAN PROCESS CLASSIFICATION PROBLEMS
    Ruiz, Pablo
    Mateos, Javier
    Molina, Rafael
    Katsaggelos, Aggelos K.
    2014 IEEE INTERNATIONAL CONFERENCE ON IMAGE PROCESSING (ICIP), 2014, : 2913 - 2917
  • [2] Gaussian Process Classification and Active Learning with Multiple Annotators
    Rodrigues, Filipe
    Pereira, Francisco C.
    Ribeiro, Bernardete
    INTERNATIONAL CONFERENCE ON MACHINE LEARNING, VOL 32 (CYCLE 2), 2014, 32 : 433 - 441
  • [3] Evolving Gaussian Mixture Models for Classification
    Reichhuber, Simon
    Tomforde, Sven
    ICAART: PROCEEDINGS OF THE 14TH INTERNATIONAL CONFERENCE ON AGENTS AND ARTIFICIAL INTELLIGENCE - VOL 3, 2022, : 964 - 974
  • [4] Efficient Active Learning for Gaussian Process Classification by Error Reduction
    Zhao, Guang
    Dougherty, Edward R.
    Yoon, Byung-Jun
    Alexander, Francis J.
    Qian, Xiaoning
    ADVANCES IN NEURAL INFORMATION PROCESSING SYSTEMS 34 (NEURIPS 2021), 2021, 34
  • [5] Gaussian Process Classification as Metric Learning for Forensic Writer Identification
    Wahlberg, Fredrik
    2018 13TH IAPR INTERNATIONAL WORKSHOP ON DOCUMENT ANALYSIS SYSTEMS (DAS), 2018, : 175 - 180
  • [6] Active Learning With Gaussian Process Classifier for Hyperspectral Image Classification
    Sun, Shujin
    Zhong, Ping
    Xiao, Huaitie
    Wang, Runsheng
    IEEE TRANSACTIONS ON GEOSCIENCE AND REMOTE SENSING, 2015, 53 (04): : 1746 - 1760
  • [7] ACTIVE ENSEMBLE LEARNING FOR EEG EPOCH CLASSIFICATION
    Gupta, M.
    Klerman, E. B.
    SLEEP, 2017, 40 : A43 - A43
  • [8] Asymmetric Gaussian Process multi-view learning for visual classification
    Li, Jinxing
    Li, Zhaoqun
    Lu, Guangming
    Xu, Yong
    Zhang, Bob
    Zhang, David
    INFORMATION FUSION, 2021, 65 : 108 - 118
  • [9] Gaussian process classification bandits
    Hayashi, Tatsuya
    Ito, Naoki
    Tabata, Koji
    Nakamura, Atsuyoshi
    Fujita, Katsumasa
    Harada, Yoshinori
    Komatsuzaki, Tamiki
    PATTERN RECOGNITION, 2024, 149
  • [10] Confidence-Guided Learning Process for Continuous Classification of Time Series
    Sun, Chenxi
    Song, Moxian
    Cai, Derun
    Zhang, Baofeng
    Hong, Shenda
    Li, Hongyan
    PROCEEDINGS OF THE 31ST ACM INTERNATIONAL CONFERENCE ON INFORMATION AND KNOWLEDGE MANAGEMENT, CIKM 2022, 2022, : 4525 - 4529