Deep reinforcement learning for irrigation scheduling using high-dimensional sensor feedback

Cited by: 1
Authors
Saikai, Yuji [1 ]
Peake, Allan [2 ,4 ]
Chenu, Karine [3 ]
Affiliations
[1] Univ Melbourne, Sch Math & Stat, Melbourne, Vic, Australia
[2] CSIRO Agr, Toowoomba, Qld, Australia
[3] Univ Queensland, Queensland Alliance Agr & Food Innovat, Toowoomba, Qld, Australia
[4] Meat & Livestock Australia, Bowen Hills, Qld, Australia
Source
PLOS WATER, 2023, Vol. 2, Issue 9
Keywords
WATER; WHEAT; YIELD; AGRICULTURE; MODEL
DOI
10.1371/journal.pwat.0000169
Chinese Library Classification (CLC)
X [Environmental Science, Safety Science]
Discipline Code
08; 0830
Abstract
Deep reinforcement learning has considerable potential to improve irrigation scheduling in many cropping systems by applying adaptive amounts of water based on various measurements over time. The goal is to discover an intelligent decision rule that processes information available to growers and prescribes sensible irrigation amounts for the time steps considered. Owing to its technical novelty, however, research on the technique remains sparse and often impractical. To accelerate progress, this paper proposes a principled framework and an actionable procedure that allow researchers to formulate their own optimisation problems and implement solution algorithms based on deep reinforcement learning. The effectiveness of the framework was demonstrated in a case study of irrigated wheat grown in a productive region of Australia, in which profits were maximised. Specifically, the decision rule takes nine state variables as inputs: crop phenological stage, leaf area index, extractable soil water in each of the top five soil layers, cumulative rainfall and cumulative irrigation. Each day, it returns a probabilistic prescription over five candidate irrigation amounts (0, 10, 20, 30 and 40 mm). The production system was simulated at Goondiwindi using the APSIM-Wheat crop model. After training in the learning environment with 1981-2010 weather data, the learned decision rule was tested separately on each year from 2011 to 2020. The results were compared against the benchmark profits obtained by a conventional rule common in the region. The discovered decision rule prescribed daily irrigation amounts that consistently outperformed the conventional rule in every test year, with the largest improvement of 17% in 2018. The framework is general and applicable to a wide range of cropping systems with realistic optimisation problems.
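As an illustration of the shape of such a decision rule, the sketch below shows a small policy network that maps the nine state variables named in the abstract to a categorical distribution over the five candidate daily irrigation amounts. This is an illustrative reconstruction, not the authors' implementation: the use of PyTorch, the layer sizes and the example state values are assumptions.

import torch
import torch.nn as nn

# Candidate daily irrigation amounts (mm), as described in the abstract.
IRRIGATION_ACTIONS_MM = [0, 10, 20, 30, 40]

class IrrigationPolicy(nn.Module):
    """Maps a 9-dimensional state vector to a probability distribution over actions."""
    def __init__(self, state_dim: int = 9, n_actions: int = 5, hidden: int = 64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(state_dim, hidden),
            nn.ReLU(),
            nn.Linear(hidden, hidden),
            nn.ReLU(),
            nn.Linear(hidden, n_actions),
        )

    def forward(self, state: torch.Tensor) -> torch.distributions.Categorical:
        # Probabilistic prescription over the five irrigation amounts.
        return torch.distributions.Categorical(logits=self.net(state))

# Example daily decision. The state vector holds: phenological stage, leaf area
# index, extractable soil water in the top five layers, cumulative rainfall (mm)
# and cumulative irrigation (mm); the values below are placeholders only.
policy = IrrigationPolicy()
state = torch.tensor([3.0, 2.1, 0.8, 0.6, 0.5, 0.3, 0.2, 120.0, 40.0])
action = policy(state).sample()
print(f"Prescribed irrigation: {IRRIGATION_ACTIONS_MM[action.item()]} mm")

In the study, a policy of this general shape would be trained against the APSIM-Wheat simulation (1981-2010 weather) with profit as the reward and evaluated year by year on 2011-2020; the specific training algorithm is not stated in the abstract and is therefore left out of the sketch.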
Pages: 20
Related Papers
50 results
  • [1] High-Dimensional Stock Portfolio Trading with Deep Reinforcement Learning
    Pigorsch, Uta
    Schaefer, Sebastian
    2022 IEEE SYMPOSIUM ON COMPUTATIONAL INTELLIGENCE FOR FINANCIAL ENGINEERING AND ECONOMICS (CIFER), 2022,
  • [2] A Deep Reinforcement Learning Framework for High-Dimensional Circuit Linearization
    Rong, Chao
    Paramesh, Jeyanandh
    Carley, L. Richard
    IEEE TRANSACTIONS ON CIRCUITS AND SYSTEMS II-EXPRESS BRIEFS, 2022, 69 (09) : 3665 - 3669
  • [3] High-dimensional multi-period portfolio allocation using deep reinforcement learning
    Jiang, Yifu
    Olmo, Jose
    Atwi, Majed
    INTERNATIONAL REVIEW OF ECONOMICS & FINANCE, 2025, 98
  • [4] Assessing the value of deep reinforcement learning for irrigation scheduling
    Kelly, T. D.
    Foster, T.
    Schultz, David M.
    SMART AGRICULTURAL TECHNOLOGY, 2024, 7
  • [5] DEEP REINFORCEMENT LEARNING-BASED IRRIGATION SCHEDULING
    Yang, Y.
    Hu, J.
    Porter, D.
    Marek, T.
    Heflin, K.
    Kong, H.
    Sun, L.
    TRANSACTIONS OF THE ASABE, 2020, 63 (03) : 549 - 556
  • [6] Deep Reinforcement Learning Approach for Material Scheduling Considering High-Dimensional Environment of Hybrid Flow-Shop Problem
    Gil, Chang-Bae
    Lee, Jee-Hyong
    APPLIED SCIENCES-BASEL, 2022, 12 (18):
  • [7] High-dimensional Function Optimisation by Reinforcement Learning
    Wu, Q. H.
    Liao, H. L.
    2010 IEEE CONGRESS ON EVOLUTIONARY COMPUTATION (CEC), 2010,
  • [8] Reinforcement learning for high-dimensional problems with symmetrical actions
    Kamal, MAS
    Murata, J
    2004 IEEE INTERNATIONAL CONFERENCE ON SYSTEMS, MAN & CYBERNETICS, VOLS 1-7, 2004, : 6192 - 6197
  • [9] Offline reinforcement learning in high-dimensional stochastic environments
    Hêche, Félicien
    Barakat, Oussama
    Desmettre, Thibaut
    Marx, Tania
    Robert-Nicoud, Stephan
    NEURAL COMPUTING & APPLICATIONS, 2024, 36 (2): 585 - 598