Total: 50 entries
- [1] Superpolynomial Lower Bounds for Learning One-Layer Neural Networks using Gradient Descent [C]. International Conference on Machine Learning (ICML), 2020, 119.
- [2] Gradient Descent for One-Hidden-Layer Neural Networks: Polynomial Convergence and SQ Lower Bounds [C]. Conference on Learning Theory (COLT), 2019, 99.
- [3] Minimax Lower Bounds for Transfer Learning with Linear and One-hidden Layer Neural Networks [C]. Advances in Neural Information Processing Systems 33 (NeurIPS 2020), 2020, 33.
- [4] An effective and efficient green federated learning method for one-layer neural networks [C]. 39th Annual ACM Symposium on Applied Computing (SAC 2024), 2024: 1050-1052.
- [5] FedHEONN: Federated and homomorphically encrypted learning method for one-layer neural networks [J]. Future Generation Computer Systems, 2023, 149: 200-211.
- [6] A global optimum approach for one-layer neural networks [J]. Neural Computation, 2002, 14(6): 1429-1449.
- [7] Learning One-hidden-layer ReLU Networks via Gradient Descent [C]. 22nd International Conference on Artificial Intelligence and Statistics (AISTATS), 2019, 89.
- [8] Learning Distributions Generated by One-Layer ReLU Networks [C]. Advances in Neural Information Processing Systems 32 (NeurIPS 2019), 2019, 32.
- [10] Regularized One-Layer Neural Networks for Distributed and Incremental Environments [C]. Advances in Computational Intelligence (IWANN 2021), Part II, 2021, 12862: 343-355.