Multi-layer Extreme Learning Machine Autoencoder With Subspace Structure Preserving

Cited by: 0
Authors
Chen X.-Y. [1 ]
Chen Y. [1 ]
Affiliations
[1] College of Mathematics and Computer Science, Fuzhou University, Fuzhou
Funding
National Natural Science Foundation of China
Keywords
Autoencoder; Dimensional reduction; Multi-layer extreme learning machine; Subspace learning;
DOI
10.16383/j.aas.c200684
Abstract
To cluster high-dimensional complex data, it is usually necessary to reduce the dimensionality first and then cluster; however, common dimensionality reduction methods consider neither the clustering characteristics of the data nor the correlations between samples, so it is difficult to ensure that the dimensionality reduction matches the clustering algorithm, which leads to a loss of clustering information. The extreme learning machine autoencoder (ELM-AE), a nonlinear unsupervised dimensionality reduction method, has been widely used in recent years for dimensionality reduction and denoising because of its fast learning speed and good generalization performance. To preserve the original subspace structure when high-dimensional data are projected into a low-dimensional space, the dimensionality reduction method ML-SELM-AE is proposed. This method captures the deep features of the sample set with a multi-layer extreme learning machine autoencoder while preserving the multi-subspace structure of the clustered samples through a self-representation model. Experimental results show that the method can effectively improve clustering accuracy and achieve higher learning efficiency on UCI data, EEG data and gene expression data. Copyright ©2022 Acta Automatica Sinica. All rights reserved.
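For readers unfamiliar with the ELM-AE building block mentioned in the abstract, below is a minimal NumPy sketch of a single ELM-AE layer used for dimensionality reduction. It follows the standard ELM-AE recipe (random orthogonal hidden weights, closed-form ridge solution for the output weights, projection by the transposed output weights); the function name elm_ae_reduce, the sigmoid activation and the regularization constant C are illustrative assumptions, and the sketch deliberately omits the multi-layer stacking and the self-representation subspace term that characterize the proposed ML-SELM-AE.

```python
import numpy as np

def elm_ae_reduce(X, n_hidden, C=1e3, seed=0):
    """Reduce X to n_hidden dimensions with a single ELM-AE layer (sketch)."""
    rng = np.random.default_rng(seed)
    n_samples, n_features = X.shape
    # Random, orthogonalized input weights and normalized bias (assumes
    # n_features >= n_hidden so the QR step yields orthonormal columns).
    W = rng.standard_normal((n_features, n_hidden))
    W, _ = np.linalg.qr(W)
    b = rng.standard_normal(n_hidden)
    b /= np.linalg.norm(b)
    # Hidden-layer output with a sigmoid activation.
    H = 1.0 / (1.0 + np.exp(-(X @ W + b)))
    # Output weights beta solve the ridge-regularized reconstruction
    # problem H @ beta ~= X in closed form (no iterative training).
    beta = np.linalg.solve(H.T @ H + np.eye(n_hidden) / C, H.T @ X)
    # The transpose of beta serves as the projection into the low-dimensional space.
    return X @ beta.T

# Hypothetical usage: reduce 100-dimensional samples to 10 dimensions,
# then feed the result to any clustering algorithm (e.g. k-means).
if __name__ == "__main__":
    X = np.random.default_rng(1).standard_normal((200, 100))
    X_low = elm_ae_reduce(X, n_hidden=10)
    print(X_low.shape)  # (200, 10)
```

Because the output weights are obtained in one linear solve rather than by gradient descent, this kind of layer trains very quickly, which is the learning-efficiency property the abstract refers to.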
Pages: 1091-1104
Number of pages: 13