Assessing the value of deep reinforcement learning for irrigation scheduling

Cited by: 2
Authors
Kelly, T. D. [1 ]
Foster, T. [1 ]
Schultz, David M. [2 ,3 ]
Affiliations
[1] Univ Manchester, Sch Engn, Manchester, England
[2] Univ Manchester, Dept Earth & Environm Sci, Manchester, England
[3] Univ Manchester, Ctr Crisis Studies & Mitigat, Manchester, England
Keywords
Water; Irrigation; Optimisation; Deep learning; Economic; Scheduling; DEFICIT IRRIGATION; WATER; MODEL; AQUACROP; YIELD;
DOI
10.1016/j.atech.2024.100403
Chinese Library Classification
S2 [Agricultural Engineering]
Discipline Code
0828
Abstract
Due to increasing global water scarcity, researchers, policy makers, and industry are looking for innovative solutions to increase agricultural water productivity. Motivated by recent successes in complex decision-making environments, Deep Reinforcement Learning (DRL) has been proposed as a method for optimizing irrigation strategies. Early research has hinted at increased profits with DRL compared to heuristic approaches such as soil-moisture thresholds or fixed schedules. However, an assessment of the value of DRL for irrigation scheduling that incorporates local climate variability and water-use restrictions has yet to be performed. To address this gap in the literature, we created aquacrop-gym, an open-source Python framework for researchers to train and evaluate customized irrigation strategies within the crop-water model AquaCrop-OSPy. In this analysis, aquacrop-gym was used to quantify the value of DRL in comparison to conventional irrigation scheduling techniques (e.g., an optimized soil-moisture heuristic) for maize production in an intensively irrigated region of the central United States. The DRL and heuristic approaches were both trained on 70 years of weather data produced by the weather generator LARS-WG, and evaluated on 30 unseen validation years of generated weather data. Findings from this analysis show that in the presence of high rainfall variability, DRL does not outperform conventional optimized heuristics. However, when rainfall is set to zero, DRL approaches achieve higher profits on the unseen validation years. Similarly, DRL approaches also outperform optimized heuristics when severe water-use restrictions are introduced. Our analysis demonstrates that DRL approaches are a promising method for irrigation scheduling, notably in regions where farmers face significant physical or regulatory water scarcity.
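The soil-moisture-threshold heuristic that the abstract names as the conventional baseline can be sketched in a few lines. The sketch below is illustrative only and is not taken from aquacrop-gym or AquaCrop-OSPy: the function name, the toy daily water balance, and all parameter values are hypothetical assumptions chosen to show the decision rule (irrigate whenever stored soil water drops below a fixed fraction of capacity).

```python
# Hypothetical sketch of a soil-moisture-threshold irrigation heuristic.
# The water balance and all values are illustrative, not the paper's model.

def threshold_schedule(rainfall_mm, threshold=0.6, capacity_mm=100.0,
                       et_mm=5.0, irrigation_mm=25.0):
    """Return the days on which the heuristic triggers an irrigation event.

    Each day, storage gains rainfall (capped at capacity) and loses a fixed
    evapotranspiration demand; if the remaining fraction of capacity falls
    below `threshold`, a fixed irrigation depth is applied.
    """
    storage = capacity_mm  # start the season at field capacity
    events = []
    for day, rain in enumerate(rainfall_mm):
        storage = min(capacity_mm, storage + rain) - et_mm  # toy water balance
        storage = max(storage, 0.0)
        if storage / capacity_mm < threshold:
            storage = min(capacity_mm, storage + irrigation_mm)
            events.append(day)
    return events

# Ten rain-free days: storage drains 5 mm/day and first dips below the
# 60 mm threshold on day 8, triggering a single irrigation event.
print(threshold_schedule([0.0] * 10))  # → [8]
```

In the paper's setup, the threshold itself is what gets optimized against the training weather years, whereas the DRL agent learns a more flexible state-dependent policy in the same environment.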
Pages: 9