Assessing the value of deep reinforcement learning for irrigation scheduling

Cited by: 2
Authors
Kelly, T. D. [1 ]
Foster, T. [1 ]
Schultz, David M. [2 ,3 ]
Affiliations
[1] Univ Manchester, Sch Engn, Manchester, England
[2] Univ Manchester, Dept Earth & Environm Sci, Manchester, England
[3] Univ Manchester, Ctr Crisis Studies & Mitigat, Manchester, England
Keywords
Water; Irrigation; Optimisation; Deep learning; Economic; Scheduling; DEFICIT IRRIGATION; WATER; MODEL; AQUACROP; YIELD
DOI
10.1016/j.atech.2024.100403
Chinese Library Classification
S2 [Agricultural Engineering]
Discipline code
0828
Abstract
Due to increasing global water scarcity, researchers, policy makers, and industry are looking for innovative solutions to increase agricultural water productivity. Motivated by recent successes in complex decision-making environments, Deep Reinforcement Learning (DRL) has been proposed as a method for optimizing irrigation strategies. Early research has hinted at increased profits with DRL compared to heuristic approaches such as soil-moisture thresholds or fixed schedules. However, an assessment of the value of DRL for irrigation scheduling that incorporates local climate variability and water-use restrictions has yet to be performed. To address this gap in the literature, we created aquacrop-gym, an open-source Python framework for researchers to train and evaluate customized irrigation strategies within the crop-water model AquaCrop-OSPy. In this analysis, aquacrop-gym was used to quantify the value of DRL in comparison to conventional irrigation-scheduling techniques (e.g., an optimized soil-moisture heuristic) for maize production in an intensively irrigated region of the central United States. The DRL and heuristic approaches were both trained on 70 years of weather data produced by the weather generator LARS-WG and evaluated on 30 unseen validation years of generated weather data. Findings from this analysis show that, in the presence of high rainfall variability, DRL does not outperform conventional optimized heuristics. However, in the scenario where rainfall is set to zero, DRL approaches achieve higher profits on the unseen validation years. Similarly, DRL approaches also outperform optimized heuristics when severe water-use restrictions are introduced. Our analysis demonstrates that DRL approaches are a promising method of irrigation scheduling, notably in regions where farmers face significant physical or regulatory water scarcity.
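The soil-moisture-threshold baseline that the abstract compares DRL against can be sketched as a policy evaluated in a gym-style interaction loop. The sketch below is purely illustrative: `ToySoilEnv`, `smt_policy`, the bucket dynamics, and the reward shaping are all hypothetical stand-ins and are not the aquacrop-gym or AquaCrop-OSPy API.

```python
import random


class ToySoilEnv:
    """Toy daily soil-water bucket with sporadic rainfall.

    Illustrative only -- NOT the aquacrop-gym environment. Observation is the
    current soil moisture (mm); action is the irrigation depth to apply (mm).
    """

    def __init__(self, capacity=100.0, season_days=120, seed=0):
        self.capacity = capacity
        self.season_days = season_days
        self.rng = random.Random(seed)

    def reset(self):
        self.day = 0
        self.moisture = 0.7 * self.capacity
        return self.moisture

    def step(self, irrigation_mm):
        rain = self.rng.choice([0.0, 0.0, 0.0, 5.0, 15.0])  # sporadic rainfall
        et = self.rng.uniform(3.0, 6.0)                      # daily crop water use
        self.moisture = min(self.capacity,
                            max(0.0, self.moisture + rain + irrigation_mm - et))
        self.day += 1
        # Reward penalises crop water stress and the cost of applied water.
        stress = max(0.0, 0.5 * self.capacity - self.moisture)
        reward = -stress - 0.1 * irrigation_mm
        done = self.day >= self.season_days
        return self.moisture, reward, done


def smt_policy(moisture, capacity, threshold=0.6, depth=25.0):
    """Soil-moisture-threshold heuristic: apply a fixed depth when moisture
    falls below a fraction of capacity, otherwise do nothing."""
    return depth if moisture < threshold * capacity else 0.0


def run_season(env, policy):
    """Roll one growing season forward and return the total reward."""
    obs, total, done = env.reset(), 0.0, False
    while not done:
        obs, reward, done = env.step(policy(obs, env.capacity))
        total += reward
    return total
```

In the study itself, the threshold values of the heuristic are optimized over the 70 training years, and a DRL agent is trained on the same environment interface; the comparison is then made on the 30 held-out validation years.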
Pages: 9