Auto-encoder based dimensionality reduction

Cited by: 533
Authors
Wang, Yasi [1 ]
Yao, Hongxun [1 ]
Zhao, Sicheng [1 ]
Institution
[1] Harbin Inst Technol, Sch Comp Sci & Technol, Harbin 150006, Peoples R China
Funding
National Natural Science Foundation of China;
Keywords
Auto-encoder; Dimensionality reduction; Visualization; Intrinsic dimensionality; Dimensionality-accuracy; CAPABILITIES; AUTOENCODER; BOUNDS;
DOI
10.1016/j.neucom.2015.08.104
Chinese Library Classification (CLC)
TP18 [theory of artificial intelligence];
Discipline codes
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
The auto-encoder, a tricky three-layered neural network previously known as auto-association, constitutes the building block of deep learning and has been demonstrated to achieve good performance in various domains. In this paper, we investigate the dimensionality reduction ability of the auto-encoder and ask whether it has some good property that might accumulate when stacked and thus contribute to the success of deep learning. Starting from the auto-encoder and focusing on its ability to reduce dimensionality, we try to understand the difference between the auto-encoder and state-of-the-art dimensionality reduction methods. Experiments are conducted both on synthesized data, mainly in two- and three-dimensional spaces for better visualization and an intuitive understanding of the method, and on real datasets, including the MNIST and Olivetti face datasets. The results show that the auto-encoder can indeed learn something different from other methods. In addition, we preliminarily investigate the influence of the number of hidden-layer nodes on the performance of the auto-encoder and its possible relation to the intrinsic dimensionality of the input data. (C) 2015 Elsevier B.V. All rights reserved.
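The abstract describes a three-layered auto-encoder whose hidden layer yields the reduced representation, evaluated among other things on low-dimensional synthetic data. A minimal NumPy sketch of that idea follows; this is not the authors' implementation, and all hyperparameters (hidden size, learning rate, epochs) are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def train_autoencoder(X, n_hidden=2, lr=0.5, epochs=2000):
    """Three-layer auto-encoder (input -> sigmoid hidden -> linear output),
    trained by gradient descent to reconstruct X. The hidden activations
    are the reduced-dimensionality codes."""
    n, d = X.shape
    W1 = rng.normal(0, 0.1, (d, n_hidden))   # encoder weights
    b1 = np.zeros(n_hidden)
    W2 = rng.normal(0, 0.1, (n_hidden, d))   # decoder weights
    b2 = np.zeros(d)
    for _ in range(epochs):
        H = sigmoid(X @ W1 + b1)             # hidden codes
        X_hat = H @ W2 + b2                  # reconstruction
        err = X_hat - X
        # Backpropagate mean squared reconstruction error
        dW2 = H.T @ err / n
        db2 = err.mean(axis=0)
        dH = (err @ W2.T) * H * (1 - H)      # through the sigmoid
        dW1 = X.T @ dH / n
        db1 = dH.mean(axis=0)
        W1 -= lr * dW1; b1 -= lr * db1
        W2 -= lr * dW2; b2 -= lr * db2
    return W1, b1, W2, b2

# Synthetic data in the spirit of the paper's experiments:
# 3-D points lying near a 2-D plane (intrinsic dimensionality ~ 2).
Z = rng.normal(size=(200, 2))
A = rng.normal(size=(2, 3))
X = Z @ A + 0.01 * rng.normal(size=(200, 3))

W1, b1, W2, b2 = train_autoencoder(X, n_hidden=2)
codes = sigmoid(X @ W1 + b1)        # 2-D embedding of the 3-D data
recon = codes @ W2 + b2
mse = np.mean((recon - X) ** 2)     # should be well below the data variance
```

Matching the hidden-layer width (here 2) to the data's intrinsic dimensionality is exactly the relation the paper probes in its experiments.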
Pages: 232 - 242
Number of pages: 11
Related papers
50 records
  • [41] Online News Recommender Based on Stacked Auto-Encoder
    Cao, Sanxing
    Yang, Nan
    Liu, Zhengzheng
    [J]. 2017 16TH IEEE/ACIS INTERNATIONAL CONFERENCE ON COMPUTER AND INFORMATION SCIENCE (ICIS 2017), 2017, : 721 - 726
  • [42] Noise reduction in single-shot images using an auto-encoder
    Bartlett, Oliver J.
    Benoit, David M.
    Pimbblet, Kevin A.
    Simmons, Brooke
    Hunt, Laura
    [J]. MONTHLY NOTICES OF THE ROYAL ASTRONOMICAL SOCIETY, 2023, 521 (04) : 6318 - 6329
  • [43] Optimized Stacked Auto-Encoder for Unnecessary Data Reduction in Cloud of Things
    Rahmany, Ines
    Dhahri, Najwa
    Moulahi, Tarek
    Alabdulatif, Abdulatif
    [J]. 2022 INTERNATIONAL WIRELESS COMMUNICATIONS AND MOBILE COMPUTING, IWCMC, 2022, : 110 - 115
  • [44] Sparse Auto-encoder with Smoothed Regularization
    Zhang, Li
    Lu, Yaping
    Wang, Bangjun
    Li, Fanzhang
    Zhang, Zhao
    [J]. NEURAL PROCESSING LETTERS, 2018, 47 (03) : 829 - 839
  • [45] On Improving the accuracy with Auto-Encoder on Conjunctivitis
    Li, Wei
    Liu, Xiao
    Liu, Jin
    Chen, Ping
    Wan, Shaohua
    Cui, Xiaohui
    [J]. APPLIED SOFT COMPUTING, 2019, 81
  • [46] An iterative stacked weighted auto-encoder
    Sun, Tongfeng
    Ding, Shifei
    Xu, Xinzheng
    [J]. SOFT COMPUTING, 2021, 25 (06) : 4833 - 4843
  • [47] Wavelet Loss Function for Auto-Encoder
    Zhu, Qiuyu
    Wang, Hu
    Zhang, Ruixin
    [J]. IEEE ACCESS, 2021, 9 : 27101 - 27108
  • [48] Auto-encoder generative adversarial networks
    Zhai, Zhonghua
    [J]. JOURNAL OF INTELLIGENT & FUZZY SYSTEMS, 2018, 35 (03) : 3043 - 3049
  • [49] An FPGA Implementation of a Convolutional Auto-Encoder
    Zhao, Wei
    Jia, Zuchen
    Wei, Xiaosong
    Wang, Hai
    [J]. APPLIED SCIENCES-BASEL, 2018, 8 (04):