Semi-supervised subspace learning with L2graph

Cited by: 11
Authors
Peng, Xi [1 ]
Yuan, Miaolong [1 ]
Yu, Zhiding [2 ]
Yau, Wei Yun [1 ]
Zhang, Lei [3 ]
Affiliations
[1] ASTAR, Inst Infocomm Res, Singapore 138632, Singapore
[2] Carnegie Mellon Univ, Dept Elect & Comp Engn, Pittsburgh, PA 15213 USA
[3] Sichuan Univ, Coll Comp Sci, Machine Intelligence Lab, Chengdu 610065, Peoples R China
Keywords
Bio-inspired feature learning; Automatic subspace learning; Dimension reduction; Graph embedding; Face recognition; Dimensionality reduction; Extensions; Selection; Sparsity
DOI
10.1016/j.neucom.2015.11.112
Chinese Library Classification
TP18 (Artificial intelligence theory)
Discipline codes
081104; 0812; 0835; 1405
Abstract
Subspace learning aims to learn a projection matrix from a given training set so that a transformation of raw data to a low-dimensional representation can be obtained. In practice, the labels of some training samples are available, and they can be used to improve the discrimination of the low-dimensional representation. In this paper, we propose a semi-supervised learning method inspired by the biological observation that similar inputs have similar codes (SISC), i.e., the same collection of cortical columns in the mammalian visual cortex is always activated by similar stimuli. More specifically, we propose a mathematical formulation of SISC that minimizes the distance among data points with the same label while maximizing the separability between different subjects in the projection space. The proposed method, semi-supervised L2graph (SeL2graph), has two advantages: (1) unlike classical dimension reduction methods such as principal component analysis, SeL2graph can automatically determine the dimension of the feature space, which remarkably reduces the effort of finding an optimal feature dimension for good performance; and (2) it fully exploits the prior knowledge carried by the labeled samples, so the obtained features have higher discrimination and compactness. Extensive experiments show that the proposed method outperforms 7 subspace learning algorithms on 15 data sets with respect to classification accuracy, computational efficiency, and robustness to noise and disguise. (C) 2016 Elsevier B.V. All rights reserved.
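The abstract's core recipe, building a self-expressive L2 (least-squares) graph, injecting label constraints that pull same-label samples together and cut edges between different subjects, then solving a linear graph-embedding eigenproblem, can be sketched roughly as follows. This is an illustrative sketch, not the paper's actual formulation: the function names, the ridge parameter `lam`, and the simple label-injection scheme (`gamma` for same-label pairs, zero for cross-label pairs) are all assumptions made here for demonstration.

```python
import numpy as np

def l2graph_affinity(X, lam=0.1):
    """Self-expressive L2 graph: code each column of X (d x n) by the
    other columns via ridge regression, then symmetrize the coefficients.
    The closed-form per-sample solve is an assumption of this sketch."""
    n = X.shape[1]
    G = X.T @ X                      # Gram matrix of the samples
    C = np.zeros((n, n))
    for i in range(n):
        idx = [j for j in range(n) if j != i]
        A = G[np.ix_(idx, idx)] + lam * np.eye(n - 1)
        b = G[idx, i]
        C[idx, i] = np.linalg.solve(A, b)   # ridge coefficients for sample i
    return (np.abs(C) + np.abs(C).T) / 2    # symmetric affinity matrix

def semi_supervised_embed(X, labels, dim=2, lam=0.1, gamma=1.0):
    """Hypothetical semi-supervised linear embedding: labels < 0 mean
    'unlabeled'; labeled pairs override the learned affinities."""
    W = l2graph_affinity(X, lam)
    n = len(labels)
    for i in range(n):
        for j in range(n):
            if labels[i] >= 0 and labels[j] >= 0:
                # strengthen same-label edges, cut cross-label edges
                W[i, j] = gamma if labels[i] == labels[j] else 0.0
    D = np.diag(W.sum(axis=1))
    L = D - W                         # graph Laplacian
    # linear graph embedding: minimize p^T X L X^T p s.t. p^T X D X^T p = 1
    A = X @ L @ X.T
    B = X @ D @ X.T + 1e-6 * np.eye(X.shape[0])   # small ridge for stability
    evals, evecs = np.linalg.eig(np.linalg.solve(B, A))
    order = np.argsort(evals.real)
    return evecs[:, order[:dim]].real  # projection matrix P, shape (d, dim)
```

Projecting is then just `Y = P.T @ X`. Note that the real SeL2graph additionally determines the embedding dimension automatically; here `dim` is fixed by hand for simplicity.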
Pages: 143-152 (10 pages)
Related papers
50 items total (showing [21]-[29])
  • [21] Deep graph learning for semi-supervised classification
    Lin, Guangfeng
    Kang, Xiaobing
    Liao, Kaiyang
    Zhao, Fan
    Chen, Yajun
    [J]. PATTERN RECOGNITION, 2021, 118
  • [22] Graph-based semi-supervised learning
    Subramanya, Amarnag
    Talukdar, Partha Pratim
    [J]. Synthesis Lectures on Artificial Intelligence and Machine Learning, 2014, 29 : 1 - 126
  • [23] Graph-based semi-supervised learning
    Changshui Zhang
    Fei Wang
    [J]. Artificial Life and Robotics, 2009, 14 (4) : 445 - 448
  • [24] Semi-supervised subspace segmentation
    Wang, Dong
    Yin, Qiyue
    He, Ran
    Wang, Liang
    Tan, Tieniu
    [J]. 2014 IEEE INTERNATIONAL CONFERENCE ON IMAGE PROCESSING (ICIP), 2014, : 2854 - 2858
  • [25] Sharpened graph ensemble for semi-supervised learning
    Choi, Inae
    Park, Kanghee
    Shin, Hyunjung
    [J]. INTELLIGENT DATA ANALYSIS, 2013, 17 (03) : 387 - 398
  • [26] Link prediction in graph construction for supervised and semi-supervised learning
    Berton, Lilian
    Valverde-Rebaza, Jorge
    Lopes, Alneu de Andrade
    [J]. 2015 INTERNATIONAL JOINT CONFERENCE ON NEURAL NETWORKS (IJCNN), 2015
  • [28] Lγ-PageRank for semi-supervised learning
    Bautista, Esteban
    Abry, Patrice
    Goncalves, Paulo
    [J]. APPLIED NETWORK SCIENCE, 2019, 4 (01)
  • [29] SemiGraphFL: Semi-supervised Graph Federated Learning for Graph Classification
    Tao, Ye
    Li, Ying
    Wu, Zhonghai
    [J]. PARALLEL PROBLEM SOLVING FROM NATURE - PPSN XVII, PPSN 2022, PT I, 2022, 13398 : 474 - 487