50 entries in total
- [1] A proof of convergence for stochastic gradient descent in the training of artificial neural networks with ReLU activation for constant target functions. Zeitschrift für Angewandte Mathematik und Physik, 2022, 73(5).
- [5] Existence, uniqueness, and convergence rates for gradient flows in the training of artificial neural networks with ReLU activation. Electronic Research Archive, 2023, 31(5): 2519-2554.
- [6] On the Proof of Global Convergence of Gradient Descent for Deep ReLU Networks with Linear Widths. International Conference on Machine Learning (ICML), Vol. 139, 2021.