Supervised and Unsupervised Parallel Subspace Learning for Large-Scale Image Recognition

Cited: 17
Authors
Jing, Xiao-Yuan [1 ,2 ,3 ]
Li, Sheng [4 ]
Zhang, David [5 ]
Yang, Jian [6 ]
Yang, Jing-Yu [6 ]
Affiliations
[1] Wuhan Univ, State Key Lab Software Engn, Wuhan 430072, Peoples R China
[2] Nanjing Univ Posts & Telecommun, Coll Automat, Nanjing 210003, Peoples R China
[3] Nanjing Univ, State Key Lab Novel Software Technol, Nanjing 210046, Jiangsu, Peoples R China
[4] Nanjing Univ Posts & Telecommun, Coll Comp Sci, Nanjing 210003, Peoples R China
[5] Hong Kong Polytech Univ, Biometr Res Ctr, Dept Comp, Kowloon, Hong Kong, Peoples R China
[6] Nanjing Univ Sci & Technol, Sch Comp Sci & Technol, Nanjing 210094, Jiangsu, Peoples R China
Funding
National Science Foundation (USA);
Keywords
Feature selection; graph embedding; large-scale image recognition; parallel linear discriminant analysis (PLDA); parallel locality preserving projection (PLPP); parallel subspace learning framework; LINEAR DISCRIMINANT-ANALYSIS; FACE RECOGNITION; ALGORITHM; LDA; FRAMEWORK;
DOI
10.1109/TCSVT.2012.2202079
Chinese Library Classification
TM [Electrical Engineering]; TN [Electronics & Communication Technology];
Discipline Codes
0808; 0809;
Abstract
Subspace learning is an effective and widely used technique for image feature extraction and classification. However, for large-scale image recognition in real-world applications, many subspace learning methods suffer from a heavy computational burden. To reduce computation time and improve recognition performance in this setting, we introduce the idea of parallel computing, which lowers time complexity by splitting the original task into several subtasks, and develop a parallel subspace learning framework. In this framework, we first divide the sample set into several subsets using two random data-division strategies, namely equal and unequal data division; these correspond to nodes with equal and unequal computational abilities in a parallel computing environment. Next, we compute projection vectors from each subset in parallel, where the graph embedding technique provides a general formulation for parallel feature extraction. After combining the features extracted at all nodes, we present a unified criterion to select the most discriminative features for classification. Under the developed framework, we propose supervised and unsupervised parallel subspace learning approaches, called parallel linear discriminant analysis (PLDA) and parallel locality preserving projection (PLPP), respectively. PLDA selects the features with the largest Fisher scores by estimating the weighted and unweighted sample scatter, while PLPP selects the features with the smallest Laplacian scores by constructing a whole affinity matrix. Theoretically, we analyze the time complexities of the proposed approaches and provide fundamental support for applying the random division strategies. In the experiments, we set up two real parallel computing environments and use four public image and video databases as test data.
Experimental results demonstrate that the proposed approaches outperform several related supervised and unsupervised subspace learning methods while significantly reducing computation time.
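The supervised pipeline described above (random equal data division, per-node computation, then a unified Fisher-score criterion over the pooled results) can be sketched as follows. This is a minimal illustration under stated assumptions, not the paper's actual method: the function names `fisher_scores` and `parallel_feature_selection` are invented for this sketch, plain per-subset Fisher scoring stands in for the per-node graph-embedding LDA projections, and the "nodes" run sequentially in a list comprehension rather than on a real parallel cluster.

```python
import numpy as np

def fisher_scores(X, y):
    """Fisher score per feature: between-class scatter over within-class scatter."""
    overall_mean = X.mean(axis=0)
    between = np.zeros(X.shape[1])
    within = np.zeros(X.shape[1])
    for c in np.unique(y):
        Xc = X[y == c]
        diff = Xc.mean(axis=0) - overall_mean
        between += len(Xc) * diff ** 2
        within += ((Xc - Xc.mean(axis=0)) ** 2).sum(axis=0)
    return between / (within + 1e-12)

def parallel_feature_selection(X, y, n_nodes=4, k=10, seed=None):
    """Toy PLDA-style pipeline: randomly split the samples into equal subsets
    (equal data division), score features on each subset independently (the
    per-"node" step), then apply one unified selection over the combined scores."""
    rng = np.random.default_rng(seed)
    idx = rng.permutation(len(X))
    subsets = np.array_split(idx, n_nodes)            # equal data division
    per_node = [fisher_scores(X[s], y[s]) for s in subsets]  # "parallel" step
    combined = np.mean(per_node, axis=0)              # combine node outputs
    return np.argsort(combined)[::-1][:k]             # keep k most discriminative

# Toy data: 200 samples, 30 features, 3 classes; only features 0-4 carry signal.
rng = np.random.default_rng(0)
y = rng.integers(0, 3, size=200)
X = rng.normal(size=(200, 30))
X[:, :5] += y[:, None]                                # class-dependent shift
selected = parallel_feature_selection(X, y, n_nodes=4, k=5, seed=1)
```

On this toy data the five informative features dominate the combined Fisher scores, so the unified criterion recovers them regardless of how the random division scattered the samples across nodes.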
Pages: 1497-1511
Page count: 15
Related Papers (50 in total)
  • [1] Large-scale image recognition based on parallel kernel supervised and semi-supervised subspace learning
    Wu, Fei
    Jing, Xiao-Yuan
    Liu, Qian
    Wu, Song-Song
    He, Guo-Liang
    Neural Computing and Applications, 2017, 28(3): 483-498
  • [2] Augmenting Supervised Neural Networks with Unsupervised Objectives for Large-scale Image Classification
    Zhang, Yuting
    Lee, Kibok
    Lee, Honglak
    International Conference on Machine Learning, Vol. 48, 2016
  • [3] Semi-supervised learning on large-scale geotagged photos for situation recognition
    Tang, Mengfan
    Nie, Feiping
    Pongpaichet, Siripen
    Jain, Ramesh
    Journal of Visual Communication and Image Representation, 2017, 48: 310-316
  • [4] Joint learning based deep supervised hashing for large-scale image retrieval
    Gu, Guanghua
    Liu, Jiangtao
    Li, Zhuoyi
    Huo, Wenhua
    Zhao, Yao
    Neurocomputing, 2020, 385: 348-357
  • [5] Large-scale supervised similarity learning in networks
    Chang, Shiyu
    Qi, Guo-Jun
    Yang, Yingzhen
    Aggarwal, Charu C.
    Zhou, Jiayu
    Wang, Meng
    Huang, Thomas S.
    Knowledge and Information Systems, 2016, 48(3): 707-740
  • [6] Large-Scale Subspace Clustering by Independent Distributed and Parallel Coding
    Li, Jun
    Tao, Zhiqiang
    Wu, Yue
    Zhong, Bineng
    Fu, Yun
    IEEE Transactions on Cybernetics, 2022, 52(9): 9090-9100
  • [7] Parallel Large-Scale Image Processing for Orthorectification
    Im, ChangJin
    Jeong, Jae-Heon
    Jeong, Chang-Sung
    Proceedings of TENCON 2018 - 2018 IEEE Region 10 Conference, 2018: 2153-2157
  • [8] Large-scale image retrieval with supervised sparse hashing
    Xu, Yan
    Shen, Fumin
    Xu, Xing
    Gao, Lianli
    Wang, Yuan
    Tan, Xiao
    Neurocomputing, 2017, 229: 45-53