Surrogate optimization of deep neural networks for groundwater predictions

Cited by: 55
Authors
Mueller, Juliane [1]
Park, Jangho [1]
Sahu, Reetik [1]
Varadharajan, Charuleka [2]
Arora, Bhavna [2]
Faybishenko, Boris [2]
Agarwal, Deborah [1]
Affiliations
[1] Lawrence Berkeley Natl Lab, Computat Res Div, 1 Cyclotron Rd, Berkeley, CA 94720 USA
[2] Lawrence Berkeley Natl Lab, Earth & Environm Sci Area, 1 Cyclotron Rd, Berkeley, CA 94720 USA
Keywords
Hyperparameter optimization; Machine learning; Derivative-free optimization; Groundwater prediction; Surrogate models; Direct search algorithms; Water-table depth; Global optimization; Model algorithm; Approximation; ARBF
DOI
10.1007/s10898-020-00912-0
Chinese Library Classification
C93 [Management Science]; O22 [Operations Research]
Discipline Classification Codes
070105; 12; 1201; 1202; 120202
Abstract
Sustainable management of groundwater resources under changing climatic conditions requires reliable and accurate predictions of groundwater levels. Mechanistic multi-scale, multi-physics simulation models are often too complex to use for this purpose, especially for groundwater managers who do not have access to the required compute resources and data. We therefore analyze the applicability and performance of four modern deep learning models for predicting groundwater levels. We compare three methods for optimizing the models' hyperparameters: two surrogate model-based algorithms and a random sampling method. The models are tested on predictions of the groundwater level in Butte County, California, USA, taking into account the temporal variability of streamflow, precipitation, and ambient temperature. Our numerical study shows that hyperparameter optimization can lead to reasonably accurate performance for all models (root mean squared errors of groundwater predictions of 2 meters or less). However, the "simplest" network, a multilayer perceptron (MLP), learns and predicts the groundwater data better overall than the more advanced long short-term memory and convolutional neural networks, in terms of both prediction accuracy and time-to-solution, making the MLP a suitable candidate for groundwater prediction.
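The surrogate model-based hyperparameter optimization the abstract describes can be illustrated with a minimal sketch: an inexpensive surrogate (here a Gaussian radial basis function interpolant) is fitted to the few hyperparameter settings evaluated so far, and the next expensive evaluation is placed where the surrogate predicts the best value. Everything below is illustrative and not taken from the paper: the two-dimensional hyperparameter space, the bounds, and the toy `val_rmse` objective (a stand-in for an actual network training run) are hypothetical, and the paper's surrogate algorithms differ in their sampling strategies.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-in for the expensive objective: validation RMSE of a network
# as a function of two hyperparameters (log10 learning rate, hidden units).
# In practice this would be a full training run of the network.
def val_rmse(x):
    lr, units = x
    return (lr + 3.0) ** 2 + 0.001 * (units - 64.0) ** 2 + 0.1

lows = np.array([-5.0, 8.0])     # lower bound per hyperparameter
highs = np.array([-1.0, 128.0])  # upper bound per hyperparameter

def sample(n):
    # Uniform random points in the hyperparameter box.
    return rng.uniform(lows, highs, size=(n, 2))

def rbf_predict(X, y, Xq):
    # Gaussian RBF interpolant on inputs normalized to the unit square;
    # far from the data the prediction falls back to the observed mean.
    Z = (X - lows) / (highs - lows)
    Zq = (Xq - lows) / (highs - lows)
    d = np.linalg.norm(Z[:, None] - Z[None, :], axis=-1)
    K = np.exp(-(d / 0.3) ** 2) + 1e-8 * np.eye(len(X))
    w = np.linalg.solve(K, y - y.mean())
    dq = np.linalg.norm(Zq[:, None] - Z[None, :], axis=-1)
    return np.exp(-(dq / 0.3) ** 2) @ w + y.mean()

# Initial design: a handful of random evaluations of the true objective.
X = sample(5)
y = np.array([val_rmse(x) for x in X])

# Surrogate loop: fit the cheap model, evaluate the expensive objective
# at the candidate the surrogate considers best, and refit.
for _ in range(15):
    cand = sample(500)
    x_next = cand[np.argmin(rbf_predict(X, y, cand))]
    X = np.vstack([X, x_next])
    y = np.append(y, val_rmse(x_next))

print("best validation RMSE found:", round(float(y.min()), 3))
print("at hyperparameters:", X[np.argmin(y)])
```

The random sampling baseline mentioned in the abstract corresponds to skipping the surrogate entirely and evaluating `val_rmse` at uniformly drawn points; the surrogate loop spends the same evaluation budget but concentrates it where the model of the response surface looks promising.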
Pages
203-231 (29 pages)