50 items total
- [21] Convergence of a Gradient Algorithm with Penalty for Training Two-layer Neural Networks. 2nd IEEE International Conference on Computer Science and Information Technology, Vol. 4, 2009: 76-+.
- [23] How degenerate is the parametrization of neural networks with the ReLU activation function? Advances in Neural Information Processing Systems 32 (NIPS 2019), 2019, 32.
- [25] Convergence rates of training deep neural networks via alternating minimization methods. Optimization Letters, 2024, 18: 909-923.
- [26] Existence and Uniqueness of Solution for Impulsive Cellular Neural Networks. Second International Symposium on Computational Intelligence and Design, Vol. 2, Proceedings, 2009: 413-+.
- [27] Provable Accelerated Convergence of Nesterov's Momentum for Deep ReLU Neural Networks. International Conference on Algorithmic Learning Theory, Vol. 237, 2024, 237.
- [28] Plateau Phenomenon in Gradient Descent Training of ReLU Networks: Explanation, Quantification, and Avoidance. SIAM Journal on Scientific Computing, 2021, 43(5): A3438-A3468.
- [30] Researching Multi-Site Artificial Neural Networks' Activation Rates and Activation Cycles. Business Modeling and Software Design, BMSD 2024, 2024, 523: 186-206.