Supervised and Unsupervised Parallel Subspace Learning for Large-Scale Image Recognition

Cited by: 17
Authors
Jing, Xiao-Yuan [1 ,2 ,3 ]
Li, Sheng [4 ]
Zhang, David [5 ]
Yang, Jian [6 ]
Yang, Jing-Yu [6 ]
Affiliations
[1] Wuhan Univ, State Key Lab Software Engn, Wuhan 430072, Peoples R China
[2] Nanjing Univ Posts & Telecommun, Coll Automat, Nanjing 210003, Peoples R China
[3] Nanjing Univ, State Key Lab Novel Software Technol, Nanjing 210046, Jiangsu, Peoples R China
[4] Nanjing Univ Posts & Telecommun, Coll Comp Sci, Nanjing 210003, Peoples R China
[5] Hong Kong Polytech Univ, Biometr Res Ctr, Dept Comp, Kowloon, Hong Kong, Peoples R China
[6] Nanjing Univ Sci & Technol, Sch Comp Sci & Technol, Nanjing 210094, Jiangsu, Peoples R China
Funding
US National Science Foundation;
Keywords
Feature selection; graph embedding; large-scale image recognition; parallel linear discriminant analysis (PLDA); parallel locality preserving projection (PLPP); parallel subspace learning framework; LINEAR DISCRIMINANT-ANALYSIS; FACE RECOGNITION; ALGORITHM; LDA; FRAMEWORK;
DOI
10.1109/TCSVT.2012.2202079
Chinese Library Classification (CLC)
TM [Electrical Engineering]; TN [Electronic and Communication Technology];
Discipline Code
0808 ; 0809 ;
Abstract
Subspace learning is an effective and widely used technique for image feature extraction and classification. However, for large-scale image recognition in real-world applications, many subspace learning methods suffer from a heavy computational burden. To reduce the computational time and improve the recognition performance of subspace learning in this setting, we introduce the idea of parallel computing, which reduces time complexity by splitting the original task into several subtasks, and develop a parallel subspace learning framework. In this framework, we first divide the sample set into several subsets using two random data-division strategies, namely equal data division and unequal data division; these strategies correspond to equal and unequal computational abilities of the nodes in a parallel computing environment. Next, we calculate projection vectors from each subset in parallel, employing the graph embedding technique to provide a general formulation for parallel feature extraction. After combining the features extracted on all nodes, we present a unified criterion to select the most discriminative features for classification. Under the developed framework, we propose supervised and unsupervised parallel subspace learning approaches, called parallel linear discriminant analysis (PLDA) and parallel locality preserving projection (PLPP), respectively. PLDA selects the features with the largest Fisher scores by estimating the weighted and unweighted sample scatter, while PLPP selects the features with the smallest Laplacian scores by constructing a whole affinity matrix. Theoretically, we analyze the time complexities of the proposed approaches and provide the fundamental support for applying the random division strategies. In the experiments, we establish two real parallel computing environments and employ four public image and video databases as test data.
Experimental results demonstrate that the proposed approaches outperform several related supervised and unsupervised subspace learning methods, and significantly reduce the computational time.
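The pipeline the abstract describes (divide the sample set, learn a projection on each subset in parallel, pool the extracted features, then select by Fisher score) can be sketched as follows. This is an illustrative toy sketch of the supervised (PLDA-style) variant, not the authors' implementation: the LDA solver, the subset count, and all function names here are hypothetical, and the weighted/unweighted scatter estimation and graph-embedding formulation of the paper are not reproduced.

```python
# Hypothetical sketch of the parallel subspace learning idea: each "node"
# learns an LDA projection from its own random subset, features from all
# nodes are pooled, and a unified Fisher-score criterion picks the best ones.
import numpy as np

def lda_projection(X, y, dim):
    """Top-`dim` eigenvectors of pinv(Sw) @ Sb for one data subset."""
    d = X.shape[1]
    mean_all = X.mean(axis=0)
    Sw, Sb = np.zeros((d, d)), np.zeros((d, d))
    for c in np.unique(y):
        Xc = X[y == c]
        mc = Xc.mean(axis=0)
        Sw += (Xc - mc).T @ (Xc - mc)              # within-class scatter
        diff = (mc - mean_all)[:, None]
        Sb += len(Xc) * (diff @ diff.T)            # between-class scatter
    evals, evecs = np.linalg.eig(np.linalg.pinv(Sw) @ Sb)
    order = np.argsort(-evals.real)[:dim]
    return evecs.real[:, order]                    # d x dim projection

def fisher_scores(F, y):
    """Per-feature Fisher score: between-class over within-class variance."""
    mean_all = F.mean(axis=0)
    num = np.zeros(F.shape[1])
    den = np.zeros(F.shape[1])
    for c in np.unique(y):
        Fc = F[y == c]
        num += len(Fc) * (Fc.mean(axis=0) - mean_all) ** 2
        den += ((Fc - Fc.mean(axis=0)) ** 2).sum(axis=0)
    return num / (den + 1e-12)

rng = np.random.default_rng(0)
# Toy two-class data; random "equal data division" into two subsets,
# mimicking two nodes with equal computational ability.
X = np.vstack([rng.normal(0, 1, (40, 10)), rng.normal(2, 1, (40, 10))])
y = np.array([0] * 40 + [1] * 40)
subsets = np.array_split(rng.permutation(len(y)), 2)

# Each node learns its projection independently (parallelizable step),
# then every sample is projected and the features are pooled.
feats = [X @ lda_projection(X[idx], y[idx], dim=1) for idx in subsets]
F = np.hstack(feats)                               # samples x pooled features

# Unified selection: keep the pooled features with the largest Fisher scores.
keep = np.argsort(-fisher_scores(F, y))[:1]
print("pooled features:", F.shape, "selected feature index:", keep)
```

The per-subset calls inside the list comprehension are independent, which is what makes the scheme parallel: on a real cluster each call would run on its own node, and only the low-dimensional projected features need to be gathered before the selection step.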
Pages: 1497 - 1511 (15 pages)
Related Papers
50 records
  • [41] Fast Parallel Stochastic Subspace Algorithms for Large-Scale Ambient Oscillation Monitoring
    Wu, Tianying
    Venkatasubramanian, Vaithianathan
    Pothen, Alex
    IEEE TRANSACTIONS ON SMART GRID, 2017, 8 (03) : 1494 - 1503
  • [42] An efficient algorithm for large-scale quasi-supervised learning
    Karacali, Bilge
    PATTERN ANALYSIS AND APPLICATIONS, 2016, 19 (02) : 311 - 323
  • [43] Self-supervised Learning for Large-scale Item Recommendations
    Yao, Tiansheng
    Yi, Xinyang
    Cheng, Derek Zhiyuan
    Yu, Felix
    Chen, Ting
    Menon, Aditya
    Hong, Lichan
    Chi, Ed H.
    Tjoa, Steve
    Kang, Jieqi
    Ettinger, Evan
    PROCEEDINGS OF THE 30TH ACM INTERNATIONAL CONFERENCE ON INFORMATION & KNOWLEDGE MANAGEMENT, CIKM 2021, 2021, : 4321 - 4330
  • [44] An efficient algorithm for large-scale quasi-supervised learning
    Karaçalı, Bilge
    Pattern Analysis and Applications, 2016, 19 : 311 - 323
  • [45] Self-Collaborative Unsupervised Hashing for Large-Scale Image Retrieval
    Zhao, Hongmin
    Luo, Zhigang
    IEEE ACCESS, 2022, 10 : 103588 - 103597
  • [46] Unsupervised Rank-Preserving Hashing for Large-Scale Image Retrieval
    Karaman, Svebor
    Lin, Xudong
    Hu, Xuefeng
    Chang, Shih-Fu
    ICMR'19: PROCEEDINGS OF THE 2019 ACM INTERNATIONAL CONFERENCE ON MULTIMEDIA RETRIEVAL, 2019, : 192 - 196
  • [47] Transductive Centroid Projection for Semi-supervised Large-Scale Recognition
    Liu, Yu
    Song, Guanglu
    Shao, Jing
    Jin, Xiao
    Wang, Xiaogang
    COMPUTER VISION - ECCV 2018, PT V, 2018, 11209 : 72 - 89
  • [48] Semi-supervised Learning for Large Scale Image Cosegmentation
    Wang, Zhengxiang
    Liu, Rujie
    2013 IEEE INTERNATIONAL CONFERENCE ON COMPUTER VISION (ICCV), 2013, : 393 - 400
  • [49] Large-Scale Subspace Clustering Based on Purity Kernel Tensor Learning
    Zheng, Yilu
    Zhao, Shuai
    Zhang, Xiaoqian
    Xu, Yinlong
    Peng, Lifan
    ELECTRONICS, 2024, 13 (01)
  • [50] DISTRIBUTED BINARY SUBSPACE LEARNING ON LARGE-SCALE CROSS MEDIA DATA
    Zhao, Xueyi
    Zhang, Chenyi
    Zhang, Zhongfei
    2014 IEEE INTERNATIONAL CONFERENCE ON MULTIMEDIA AND EXPO (ICME), 2014,