Representation learning with deep sparse auto-encoder for multi-task learning

Cited by: 12
Authors
Zhu, Yi [1 ,2 ,3 ]
Wu, Xindong [2 ,3 ]
Qiang, Jipeng [1 ]
Hu, Xuegang [2 ,3 ]
Zhang, Yuhong [2 ,3 ]
Li, Peipei [2 ,3 ]
Affiliations
[1] Yangzhou Univ, Sch Informat Engn, Yangzhou, Peoples R China
[2] Hefei Univ Technol, Key Lab Knowledge Engn Big Data, Minist Educ China, Hefei, Peoples R China
[3] Hefei Univ Technol, Sch Comp Sci & Informat Engn, Hefei, Peoples R China
Funding
National Natural Science Foundation of China;
关键词
Deep sparse auto-encoder; Multi-task learning; RICA; Labeled and unlabeled data; SUPPORT VECTOR MACHINES; FEATURE-SELECTION; REGULARIZATION; KNOWLEDGE;
D O I
10.1016/j.patcog.2022.108742
Chinese Library Classification
TP18 [Artificial intelligence theory];
Discipline codes
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
We present an effective framework, Deep Sparse auto-encoder for Multi-task Learning (DSML for short), that achieves better performance on multiple tasks. To learn reconstructed, higher-level features on cross-domain instances for multiple tasks, we combine the labeled and unlabeled data from all tasks to reconstruct the feature representations. Furthermore, we propose a Stacked Reconstruction Independent Component Analysis (SRICA for short) model to optimize feature representations with a large amount of unlabeled data, which effectively addresses the redundancy of image data. The proposed SRICA model is developed from RICA and is based on a deep sparse auto-encoder. In addition, we adopt a Semi-Supervised Learning (SSL for short) method based on model-parameter regularization to build a unified model for multi-task learning. Our framework has several advantages: 1) The proposed SRICA makes full use of the large amount of unlabeled data from all tasks to pursue an optimal sparse feature representation, which effectively mitigates overfitting. 2) The deep architecture in our SRICA model yields higher-level, better representations and is designed to train on patches, sphering the input data. 3) Training the parameters of our framework incurs a lower computational cost than other common deep learning methods such as stacked denoising auto-encoders. Extensive experiments on several real image datasets demonstrate that our framework outperforms state-of-the-art methods. (c) 2022 Elsevier Ltd. All rights reserved.
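The SRICA model described above stacks RICA layers. As a minimal, hypothetical sketch (not the authors' implementation), the standard RICA objective combines a reconstruction term with an L1 sparsity penalty on the codes; the `rica_loss` helper below is an illustrative name:

```python
import numpy as np

def rica_loss(W, X, lam=0.1):
    """RICA objective: reconstruction error plus L1 sparsity on the codes.

    W : (k, d) weight matrix mapping d-dimensional inputs to k features.
    X : (n, d) data matrix, one (ideally whitened) patch per row.
    """
    Z = X @ W.T                                   # codes, shape (n, k)
    X_hat = Z @ W                                 # reconstruction W^T W x
    recon = 0.5 * np.sum((X_hat - X) ** 2) / len(X)
    sparsity = lam * np.sum(np.abs(Z)) / len(X)   # L1 penalty encourages sparse codes
    return recon + sparsity

# Toy example on random patches; in practice W would be optimized
# (e.g. with L-BFGS) and layers stacked for a deep architecture.
rng = np.random.default_rng(0)
X = rng.standard_normal((32, 16))
W = rng.standard_normal((8, 16)) * 0.1
loss = rica_loss(W, X)
```

Because RICA penalizes reconstruction softly instead of enforcing an orthonormality constraint, it can learn overcomplete sparse features from unlabeled patches, which is what makes stacking feasible.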
Pages: 10
Related Papers
50 records
  • [1] Deep Auto-encoder Based Multi-task Learning Using Probabilistic Transcriptions
    Das, Amit
    Hasegawa-Johnson, Mark
    Vesely, Karel
    18TH ANNUAL CONFERENCE OF THE INTERNATIONAL SPEECH COMMUNICATION ASSOCIATION (INTERSPEECH 2017), VOLS 1-6: SITUATED INTERACTION, 2017, : 2073 - 2077
  • [2] A Novel Sparse Auto-Encoder for Deep Unsupervised Learning
    Jiang, Xiaojuan
    Zhang, Yinghua
    Zhang, Wensheng
    Xiao, Xian
    2013 SIXTH INTERNATIONAL CONFERENCE ON ADVANCED COMPUTATIONAL INTELLIGENCE (ICACI), 2013, : 256 - 261
  • [3] Learning Sparse Representation With Variational Auto-Encoder for Anomaly Detection
    Sun, Jiayu
    Wang, Xinzhou
    Xiong, Naixue
    Shao, Jie
    IEEE ACCESS, 2018, 6 : 33353 - 33361
  • [4] Deep Sparse Auto-Encoder Features Learning for Arabic Text Recognition
    Rahal, Najoua
    Tounsi, Maroua
    Hussain, Amir
    Alimi, Adel M.
    IEEE ACCESS, 2021, 9 (09) : 18569 - 18584
  • [5] Learning a good representation with unsymmetrical auto-encoder
    Sun, Yanan
    Mao, Hua
    Guo, Quan
    Yi, Zhang
    NEURAL COMPUTING & APPLICATIONS, 2016, 27 (05) : 1361 - 1367
  • [6] Discriminative Representation Learning with Supervised Auto-encoder
    Du, Fang
    Zhang, Jiangshe
    Ji, Nannan
    Hu, Junying
    Zhang, Chunxia
    NEURAL PROCESSING LETTERS, 2019, 49 (02) : 507 - 520
  • [7] Learning Facial Expression Codes with Sparse Auto-Encoder
    Hu, Dekun
    Duan, Guiduo
    ADVANCES IN MECHATRONICS AND CONTROL ENGINEERING II, PTS 1-3, 2013, 433-435 : 334 - +
  • [8] Online deep learning based on auto-encoder
    Zhang, Si-si
    Liu, Jian-wei
    Zuo, Xin
    Lu, Run-kun
    Lian, Si-ming
    APPLIED INTELLIGENCE, 2021, 51 (08) : 5420 - 5439