ChebNet: Efficient and Stable Constructions of Deep Neural Networks with Rectified Power Units via Chebyshev Approximation

Cited: 1
Authors
Tang, Shanshan [1 ]
Li, Bo [2 ]
Yu, Haijun [3 ,4 ]
Affiliations
[1] Ind & Commercial Bank China, Software Dev Ctr, 16 Bldg ZhongGuanCun Software Pk, Beijing 100193, Peoples R China
[2] Huawei Technol Co Ltd, Hisilicon Semicond & Component Business Dept, Labs 2012, Bantian St, Shenzhen 518129, Peoples R China
[3] Acad Math & Syst Sci, Inst Computat Math & Sci Engn Comp, NCMIS & LSEC, Beijing 100190, Peoples R China
[4] Univ Chinese Acad Sci, Sch Math Sci, Beijing 100049, Peoples R China
Keywords
Deep neural networks; Rectified power units; Chebyshev polynomial; High-dimensional approximation; Stability; Constructive spectral approximation; SPARSE GRID METHODS; DIMENSIONAL PROBLEMS; SMOOTH FUNCTIONS; ELEMENT METHOD; ERROR-BOUNDS; INTERPOLATION; ALGORITHM; EQUATIONS; ACCURACY
DOI
10.1007/s40304-023-00392-0
Chinese Library Classification
O1 [Mathematics]
Subject Classification Codes
0701; 070101
Abstract
In a previous study by Li et al. (Commun Comput Phys 27(2):379-411, 2020), it was shown that deep neural networks built with rectified power units (RePU) as activation functions can give better approximations of sufficiently smooth functions than those built with rectified linear units, by converting power-series polynomial approximations into deep neural networks with optimal complexity and no approximation error. In practice, however, power series approximations are not easy to obtain due to the associated stability issues. In this paper, we propose a new and more stable way to construct deep RePU neural networks based on Chebyshev polynomial approximations. By using a hierarchical structure of Chebyshev polynomial approximation in the frequency domain, we obtain an efficient and stable deep neural network construction, which we call ChebNet. The approximation of smooth functions by ChebNets is no worse than that by deep RePU nets built from power series; at the same time, ChebNets are much more stable. Numerical results show that the constructed ChebNets can be further fine-tuned to obtain much better results than those obtained by tuning deep RePU nets constructed via the power series approach. As spectral accuracy is hard to obtain by directly training deep neural networks, ChebNets provide a practical way to achieve it, and they are expected to be useful in real applications that require efficient approximations of smooth functions.
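To make the idea concrete, below is a minimal Python sketch, an illustration only and not the paper's actual ChebNet architecture, of the identities that let a Chebyshev approximation be evaluated exactly by RePU operations: with the rectified power unit sigma_2(x) = max(0, x)^2, squaring is exact, x^2 = sigma_2(x) + sigma_2(-x); products follow from the polarization identity xy = ((x+y)^2 - (x-y)^2)/4; and Chebyshev polynomials obey the three-term recurrence T_{k+1}(x) = 2x T_k(x) - T_{k-1}(x). All function names below (repu, square, product, cheb_eval) are illustrative, not from the paper.

    import numpy as np

    def repu(x, s=2):
        # Rectified power unit: sigma_s(x) = max(0, x)^s
        return np.maximum(0.0, x) ** s

    def square(x):
        # Exact squaring with two RePU (s=2) units: x^2 = sigma_2(x) + sigma_2(-x)
        return repu(x) + repu(-x)

    def product(x, y):
        # Exact multiplication via polarization: x*y = ((x+y)^2 - (x-y)^2) / 4
        return 0.25 * (square(x + y) - square(x - y))

    def cheb_eval(coeffs, x):
        # Evaluate sum_k c_k T_k(x) using T_{k+1} = 2 x T_k - T_{k-1}; each
        # multiplication by x goes through the RePU product gadget, which is
        # the sense in which a Chebyshev approximation becomes a RePU network.
        t_prev = np.ones_like(x)   # T_0(x)
        t_curr = x                 # T_1(x)
        result = coeffs[0] * t_prev
        if len(coeffs) > 1:
            result = result + coeffs[1] * t_curr
        for c in coeffs[2:]:
            t_prev, t_curr = t_curr, 2.0 * product(x, t_curr) - t_prev
            result = result + c * t_curr
        return result

    # Sanity check on f(x) = exp(x) over [-1, 1] using numpy's Chebyshev fit.
    xs = np.linspace(-1.0, 1.0, 1001)
    c = np.polynomial.chebyshev.chebfit(xs, np.exp(xs), deg=10)
    err = np.max(np.abs(cheb_eval(xs=xs, coeffs=c) - np.exp(xs)))
    print(err)  # roughly 1e-11 to 1e-10: spectral accuracy at degree 10

Replacing each multiplication in the recurrence by the four-unit RePU product gadget yields a network that evaluates the degree-n Chebyshev approximant with no additional approximation error; the paper's hierarchical frequency-domain structure is what organizes such gadgets into an efficient and stable network, in contrast to the less stable power-series route.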
Pages: 27
Related Papers
50 records in total
  • [1] PowerNet: Efficient Representations of Polynomials and Smooth Functions by Deep Neural Networks with Rectified Power Units
    Li, Bo
    Tang, Shanshan
    Yu, Haijun
    JOURNAL OF MATHEMATICAL STUDY, 2020, 53 (02) : 159 - 191
  • [2] Better Approximations of High Dimensional Smooth Functions by Deep Neural Networks with Rectified Power Units
    Li, Bo
    Tang, Shanshan
    Yu, Haijun
    COMMUNICATIONS IN COMPUTATIONAL PHYSICS, 2020, 27 (02) : 379 - 411
  • [3] Understanding Weight Normalized Deep Neural Networks with Rectified Linear Units
    Xu, Yixi
    Wang, Xiao
    ADVANCES IN NEURAL INFORMATION PROCESSING SYSTEMS 31 (NIPS 2018), 2018, 31
  • [4] Deep neural networks with Elastic Rectified Linear Units for object recognition
    Jiang, Xiaoheng
    Pang, Yanwei
    Li, Xuelong
    Pan, Jing
    Xie, Yinghong
    NEUROCOMPUTING, 2018, 275 : 1132 - 1139
  • [5] IMPROVING DEEP NEURAL NETWORKS FOR LVCSR USING RECTIFIED LINEAR UNITS AND DROPOUT
    Dahl, George E.
    Sainath, Tara N.
    Hinton, Geoffrey E.
    2013 IEEE INTERNATIONAL CONFERENCE ON ACOUSTICS, SPEECH AND SIGNAL PROCESSING (ICASSP), 2013, : 8609 - 8613
  • [6] Understanding and Improving Convolutional Neural Networks via Concatenated Rectified Linear Units
    Shang, Wenling
    Sohn, Kihyuk
    Almeida, Diogo
    Lee, Honglak
    INTERNATIONAL CONFERENCE ON MACHINE LEARNING, VOL 48, 2016, 48
  • [7] Optimal deep neural networks by maximization of the approximation power
    Calvo-Pardo, Hector
    Mancini, Tullio
    Olmo, Jose
    COMPUTERS & OPERATIONS RESEARCH, 2023, 156
  • [8] Stable tensor neural networks for efficient deep learning
    Newman, Elizabeth
    Horesh, Lior
    Avron, Haim
    Kilmer, Misha E.
    FRONTIERS IN BIG DATA, 2024, 7
  • [9] Full Approximation of Deep Neural Networks through Efficient Optimization
    De la Parra, Cecilia
    Guntoro, Andre
    Kumar, Akash
2020 IEEE INTERNATIONAL SYMPOSIUM ON CIRCUITS AND SYSTEMS (ISCAS), 2020
  • [10] A proof that rectified deep neural networks overcome the curse of dimensionality in the numerical approximation of semilinear heat equations
    Hutzenthaler, Martin
    Jentzen, Arnulf
    Kruse, Thomas
    Nguyen, Tuan Anh
PARTIAL DIFFERENTIAL EQUATIONS AND APPLICATIONS, 2020, 1 (02)