Learning a good representation with unsymmetrical auto-encoder

Cited by: 13
Authors
Sun, Yanan [1 ]
Mao, Hua [1 ]
Guo, Quan [1 ]
Yi, Zhang [1 ]
Affiliations
[1] Sichuan Univ, Coll Comp Sci, Machine Intelligence Lab, Chengdu 610065, Peoples R China
Source
NEURAL COMPUTING & APPLICATIONS | 2016 / Vol. 27 / No. 05
Funding
US National Science Foundation;
Keywords
Auto-encoder; Neural networks; Feature learning; Deep learning; Unsupervised learning;
DOI
10.1007/s00521-015-1939-3
CLC Number
TP18 [Theory of Artificial Intelligence];
Subject Classification Codes
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
Auto-encoders play a fundamental role in unsupervised feature learning and in learning the initial parameters of deep architectures for supervised tasks. For given input samples, robust features should yield representations that are, from two perspectives, (1) invariant to small variations of the samples and (2) reconstructable by the decoder with minimal error. Traditional auto-encoders with different regularization terms have equal numbers of encoder and decoder layers, and sometimes tied parameters. We investigate the relation between the numbers of encoder and decoder layers and propose an unsymmetrical structure, i.e., an unsymmetrical auto-encoder (UAE), to learn more effective features. We present empirical results of feature learning with the UAE and with state-of-the-art auto-encoders on classification tasks over a range of datasets. We also analyze the vanishing gradient problem mathematically and suggest an appropriate number of layers for UAEs with a logistic activation function. In our experiments, UAEs outperformed other auto-encoders under the same configuration.
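To make the architectural idea concrete, below is a minimal sketch of an unsymmetrical auto-encoder in PyTorch: a multi-layer logistic (sigmoid) encoder paired with a single-layer decoder. The layer widths, depth, and training setup are illustrative assumptions, not values taken from the paper.

```python
# Hypothetical sketch of an unsymmetrical auto-encoder (UAE):
# more encoder layers than decoder layers. Widths are assumptions.
import torch
import torch.nn as nn

class UnsymmetricalAutoEncoder(nn.Module):
    def __init__(self, in_dim=784, code_dim=64):
        super().__init__()
        # Deep encoder with logistic (sigmoid) activations.
        # Note: sigmoid'(x) = sigmoid(x)(1 - sigmoid(x)) <= 1/4, so gradients
        # shrink with each extra layer, which motivates bounding the depth.
        self.encoder = nn.Sequential(
            nn.Linear(in_dim, 256), nn.Sigmoid(),
            nn.Linear(256, 128), nn.Sigmoid(),
            nn.Linear(128, code_dim), nn.Sigmoid(),
        )
        # Shallow single-layer decoder: this is the unsymmetrical part.
        self.decoder = nn.Sequential(
            nn.Linear(code_dim, in_dim), nn.Sigmoid(),
        )

    def forward(self, x):
        code = self.encoder(x)    # learned representation
        return self.decoder(code)  # reconstruction

# One training step: reconstruct the input with minimal error.
model = UnsymmetricalAutoEncoder()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
x = torch.rand(32, 784)           # dummy mini-batch in [0, 1]
opt.zero_grad()
loss = nn.functional.mse_loss(model(x), x)
loss.backward()
opt.step()
```

After unsupervised training, the output of model.encoder(x) would serve as the learned feature vector for a downstream classifier, matching the abstract's use of UAE features in classification tasks.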
Pages: 1361-1367
Number of pages: 7
Related Papers
50 records in total
  • [1] Du, Fang; Zhang, Jiangshe; Ji, Nannan; Hu, Junying; Zhang, Chunxia. Discriminative Representation Learning with Supervised Auto-encoder. NEURAL PROCESSING LETTERS, 2019, 49(02): 507-520.
  • [2] Zhou, Xiaoqiang; Hu, Baotian; Chen, Qingcai; Wang, Xiaolong. An Auto-Encoder for Learning Conversation Representation Using LSTM. NEURAL INFORMATION PROCESSING, PT I, 2015, 9489: 310-317.
  • [3] Jeon, Ik Hwan; Shin, Soo Young. Continual Representation Learning for Images with Variational Continual Auto-Encoder. PROCEEDINGS OF THE 11TH INTERNATIONAL CONFERENCE ON AGENTS AND ARTIFICIAL INTELLIGENCE (ICAART), VOL 2, 2019: 367-373.
  • [4] Sun, Jiayu; Wang, Xinzhou; Xiong, Naixue; Shao, Jie. Learning Sparse Representation With Variational Auto-Encoder for Anomaly Detection. IEEE ACCESS, 2018, 6: 33353-33361.
  • [5] Chen, Wei; Hu, Ruimin; Wang, Xiaochen; Li, Dengshi. HRTF Representation with Convolutional Auto-encoder. MULTIMEDIA MODELING (MMM 2020), PT I, 2020, 11961: 605-616.
  • [6] Phai Vu Dinh; Nguyen Quang Uy; Nguyen, Diep N.; Dinh Thai Hoang; Son Pham Bao; Dutkiewicz, Eryk. Twin Variational Auto-Encoder for Representation Learning in IoT Intrusion Detection. 2022 IEEE WIRELESS COMMUNICATIONS AND NETWORKING CONFERENCE (WCNC), 2022: 848-853.
  • [7] Zhu, Yi; Wu, Xindong; Qiang, Jipeng; Hu, Xuegang; Zhang, Yuhong; Li, Peipei. Representation learning with deep sparse auto-encoder for multi-task learning. PATTERN RECOGNITION, 2022, 129.
  • [8] Zhou, Jiaxin; Komuro, Takashi. An asymmetrical-structure auto-encoder for unsupervised representation learning of skeleton sequences. COMPUTER VISION AND IMAGE UNDERSTANDING, 2022, 222.