Representation learning with deep sparse auto-encoder for multi-task learning

Cited by: 12
Authors
Zhu, Yi [1 ,2 ,3 ]
Wu, Xindong [2 ,3 ]
Qiang, Jipeng [1 ]
Hu, Xuegang [2 ,3 ]
Zhang, Yuhong [2 ,3 ]
Li, Peipei [2 ,3 ]
Affiliations
[1] Yangzhou Univ, Sch Informat Engn, Yangzhou, Peoples R China
[2] Hefei Univ Technol, Key Lab Knowledge Engn Big Data, Minist Educ China, Hefei, Peoples R China
[3] Hefei Univ Technol, Sch Comp Sci & Informat Engn, Hefei, Peoples R China
Funding
National Natural Science Foundation of China;
Keywords
Deep sparse auto-encoder; Multi-task learning; RICA; Labeled and unlabeled data; Support vector machines; Feature selection; Regularization; Knowledge
DOI
10.1016/j.patcog.2022.108742
CLC Number
TP18 [Artificial Intelligence Theory];
Discipline Code
081104; 0812; 0835; 1405;
Abstract
We present an effective framework, Deep Sparse auto-encoder for Multi-task Learning (DSML for short), to achieve better performance. To learn reconstructed, higher-level features on cross-domain instances for multiple tasks, we combine the labeled and unlabeled data from all tasks to reconstruct the feature representations. Furthermore, we propose a Stacked Reconstruction Independent Component Analysis (SRICA for short) model to optimize feature representations with a large amount of unlabeled data, which effectively addresses the redundancy of image data. Our SRICA model is developed from RICA and is based on the deep sparse auto-encoder. In addition, we adopt a Semi-Supervised Learning (SSL for short) method based on model-parameter regularization to build a unified model for multi-task learning. Our proposed framework has several advantages: 1) The proposed SRICA makes full use of a large amount of unlabeled data from all tasks to pursue an optimal sparse feature representation, which effectively overcomes the overfitting problem. 2) The deep architecture in our SRICA model learns higher-level and better representations; it is designed to train on patches to sphere the input data. 3) Training the parameters of our framework has a lower computational cost than other common deep learning methods such as stacked denoising auto-encoders. Extensive experiments on several real image datasets demonstrate that our proposed framework outperforms the state-of-the-art methods. (c) 2022 Elsevier Ltd. All rights reserved.
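For orientation, the RICA that SRICA builds on (Reconstruction ICA) replaces an auto-encoder's hard tied-weight constraint with a soft reconstruction penalty plus a smoothed L1 sparsity term on the latent responses. The following NumPy sketch is an illustration only, not the paper's implementation: it minimizes a single-layer RICA cost by plain gradient descent on synthetic "unlabeled" data (all sizes and the learning rate are hypothetical choices).

```python
import numpy as np

# Hypothetical toy setup: 200 synthetic "unlabeled" samples of 16-dim inputs.
rng = np.random.default_rng(0)
X = rng.standard_normal((16, 200))
X -= X.mean(axis=1, keepdims=True)   # RICA assumes roughly whitened data; we only center here

k, lam, eps = 8, 0.1, 1e-6           # number of filters, sparsity weight, L1 smoothing
W = 0.1 * rng.standard_normal((k, X.shape[0]))

def rica_cost_grad(W, X):
    """RICA objective: 0.5*||W^T W x - x||^2 + lam * sum(sqrt((Wx)^2 + eps))."""
    Z = W @ X                        # latent responses
    R = W.T @ Z - X                  # reconstruction residual (soft penalty, not a constraint)
    S = np.sqrt(Z**2 + eps)          # smoothed |Z|, a differentiable L1 surrogate
    cost = 0.5 * (R**2).sum() + lam * S.sum()
    grad = W @ (X @ R.T + R @ X.T) + lam * (Z / S) @ X.T
    return cost, grad

costs = []
for _ in range(200):                 # plain gradient descent, for illustration only
    c, g = rica_cost_grad(W, X)
    costs.append(c)
    W -= 1e-4 * g
```

Because the reconstruction term is an unconstrained penalty, the same cost can be applied layer by layer on the previous layer's responses, which is the sense in which a stacked variant such as SRICA goes "deep".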
Pages: 10
Related Papers
50 records
  • [21] Deep Auto-Encoder Neural Networks in Reinforcement Learning
    Lange, Sascha
    Riedmiller, Martin
    2010 INTERNATIONAL JOINT CONFERENCE ON NEURAL NETWORKS IJCNN 2010, 2010,
  • [22] A Hybrid Algorithm of Extreme Learning Machine and Sparse Auto-Encoder
    Lin, Yu
    Liang, Yanchun
    Yoshida, Shinichi
    Feng, Xiaoyue
    Guan, Renchu
    SMART COMPUTING AND COMMUNICATION, SMARTCOM 2016, 2017, 10135 : 194 - 204
  • [23] Stacked sparse auto-encoder for deep clustering
    Cai, Jinyu
    Wang, Shiping
    Guo, Wenzhong
    2019 IEEE INTL CONF ON PARALLEL & DISTRIBUTED PROCESSING WITH APPLICATIONS, BIG DATA & CLOUD COMPUTING, SUSTAINABLE COMPUTING & COMMUNICATIONS, SOCIAL COMPUTING & NETWORKING (ISPA/BDCLOUD/SOCIALCOM/SUSTAINCOM 2019), 2019, : 1532 - 1538
  • [24] Continual Representation Learning for Images with Variational Continual Auto-Encoder
    Jeon, Ik Hwan
    Shin, Soo Young
    PROCEEDINGS OF THE 11TH INTERNATIONAL CONFERENCE ON AGENTS AND ARTIFICIAL INTELLIGENCE (ICAART), VOL 2, 2019, : 367 - 373
  • [25] Auto-encoder Based Co-training Multi-view Representation Learning
    Lu, Run-kun
    Liu, Jian-wei
    Wang, Yuan-fang
    Xie, Hao-jie
    Zuo, Xin
    ADVANCES IN KNOWLEDGE DISCOVERY AND DATA MINING, PAKDD 2019, PT III, 2019, 11441 : 119 - 130
  • [26] SCAE: Structural Contrastive Auto-Encoder for Incomplete Multi-View Representation Learning
    Li, Mengran
    Zhang, Ronghui
    Zhang, Yong
    Piao, Xinglin
    Zhao, Shiyu
    Yin, Baocai
    ACM TRANSACTIONS ON MULTIMEDIA COMPUTING COMMUNICATIONS AND APPLICATIONS, 2024, 20 (09)
  • [27] Sparse Multi-Task Reinforcement Learning
    Calandriello, Daniele
    Lazaric, Alessandro
    Restelli, Marcello
    ADVANCES IN NEURAL INFORMATION PROCESSING SYSTEMS 27 (NIPS 2014), 2014, 27
  • [28] Sparse multi-task reinforcement learning
    Calandriello, Daniele
    Lazaric, Alessandro
    Restelli, Marcello
    INTELLIGENZA ARTIFICIALE, 2015, 9 (01) : 5 - 20
  • [29] Correlative Data Based Sparse Denoising Auto-Encoder for Feature Learning
    Zhao, Yudi
    Ding, Yongsheng
    Hao, Kuangrong
    Tang, Xuesong
    PROCEEDINGS OF THE 36TH CHINESE CONTROL CONFERENCE (CCC 2017), 2017, : 10896 - 10901
  • [30] Compressed Auto-encoder Building Block for Deep Learning Network
    Feng, Qiying
    Chen, C. L. Philip
    Chen, Long
    IEEE ICCSS 2016 - 2016 3RD INTERNATIONAL CONFERENCE ON INFORMATIVE AND CYBERNETICS FOR COMPUTATIONAL SOCIAL SYSTEMS (ICCSS), 2016, : 131 - 136