Enhancing cotton irrigation with distributional actor-critic reinforcement learning

Cited: 0
Authors
Chen, Yi [1 ]
Lin, Meiwei [4 ]
Yu, Zhuo [4 ]
Sun, Weihong [4 ]
Fu, Weiguo [4 ]
He, Liang [1 ,2 ,3 ]
Affiliations
[1] Xinjiang Univ, Sch Comp Sci & Technol, Urumqi 830017, Peoples R China
[2] Tsinghua Univ, Dept Elect Engn, Beijing 100084, Peoples R China
[3] Tsinghua Univ, Beijing Natl Res Ctr Informat Sci & Technol, Beijing 100084, Peoples R China
[4] Jiangsu Univ, Sch Agr Engn, Zhenjiang 212013, Peoples R China
Keywords
Distributional reinforcement learning; Irrigation decision; DSSAT model; Agricultural management; Cotton irrigation; Quality; Yield; Crop
DOI
10.1016/j.agwat.2024.109194
Chinese Library Classification
S3 [Agronomy (Agronomics)]
Discipline code
0901
Abstract
Accurate predictions of irrigation's impact on crop yield are crucial for effective decision-making. However, current research predominantly focuses on the relationship between irrigation events and soil moisture, often neglecting the physiological state of the crops themselves. This study introduces a novel intelligent irrigation approach based on distributional reinforcement learning, ensuring that the algorithm simultaneously considers weather, soil, and crop conditions to make irrigation decisions that are optimal for long-term benefit. To achieve this, we collected climate data from 1980 to 2024 and conducted a two-year cotton planting experiment in 2023 and 2024. We used soil and plant state indicators from five experimental groups with varying irrigation treatments to calibrate and validate the DSSAT model. Subsequently, we integrated a distributional reinforcement learning method, an effective machine learning technique for continuous control problems. Our algorithm tracks 17 indicators, including crop leaf area, stem leaf count, and soil evapotranspiration. Through a well-designed network structure and cumulative rewards, our approach effectively captures the relationships between irrigation events and these states. Additionally, we validated the robustness and generalizability of the model using three years of extreme weather data and two consecutive years of cross-site observations. This method surpasses previous irrigation strategies managed by standard reinforcement learning techniques (e.g., DQN). Empirical results indicate that our approach significantly outperforms traditional agronomic decision-making, enhancing cotton yield by 13.6% and improving water use efficiency per kilogram of crop by 6.7%. In 2024, our method was validated in actual field experiments, achieving the highest yield among all approaches, with a 12.9% increase over traditional practices.
Our research provides a robust framework for intelligent cotton irrigation in the region and offers promising new directions for implementing smart agricultural decision systems across diverse areas.
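The paper's implementation is not included in this record, but the core idea of a distributional critic can be illustrated with a minimal, self-contained sketch. Instead of learning a single expected return, the critic maintains a set of quantile estimates of the return distribution and updates them with the quantile (pinball) loss, as in QR-DQN-style distributional RL. The reward here is a hypothetical stand-in (a yield-like signal); the function names and constants are illustrative, not taken from the paper.

```python
import random

def quantile_update(theta, target, taus, lr=0.05):
    # One stochastic-gradient step on the quantile (pinball) loss:
    # each theta[i] drifts until P(target < theta[i]) equals taus[i].
    for i, tau in enumerate(taus):
        grad = tau - 1.0 if target < theta[i] else tau
        theta[i] += lr * grad

random.seed(0)
N = 5
taus = [(2 * i + 1) / (2 * N) for i in range(N)]  # quantile midpoints 0.1 .. 0.9
theta = [0.0] * N                                  # critic's return-distribution estimate

for _ in range(20000):
    # Stand-in for a sampled discounted return, e.g. a yield reward
    # minus a water-use penalty; here simply drawn from N(10, 2).
    ret = random.gauss(10.0, 2.0)
    quantile_update(theta, ret, taus)

print([round(t, 2) for t in theta])  # approximates the quantiles of N(10, 2)
```

In a full actor-critic such as the one described, these quantile estimates would be produced by a neural network conditioned on the 17 state indicators, and the actor (irrigation amount) would be trained against a statistic of this learned distribution rather than a point estimate.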
Pages: 14
Related papers (50 total)
  • [1] Improving Generalization of Reinforcement Learning with Minimax Distributional Soft Actor-Critic
    Ren, Yangang
    Duan, Jingliang
    Li, Shengbo Eben
    Guan, Yang
    Sun, Qi
    2020 IEEE 23RD INTERNATIONAL CONFERENCE ON INTELLIGENT TRANSPORTATION SYSTEMS (ITSC), 2020,
  • [2] A World Model for Actor-Critic in Reinforcement Learning
    Panov, A. I.
    Ugadiarov, L. A.
    PATTERN RECOGNITION AND IMAGE ANALYSIS, 2023, 33 (03) : 467 - 477
  • [3] Curious Hierarchical Actor-Critic Reinforcement Learning
    Roeder, Frank
    Eppe, Manfred
    Nguyen, Phuong D. H.
    Wermter, Stefan
    ARTIFICIAL NEURAL NETWORKS AND MACHINE LEARNING, ICANN 2020, PT II, 2020, 12397 : 408 - 419
  • [4] Actor-Critic based Improper Reinforcement Learning
    Zaki, Mohammadi
    Mohan, Avinash
    Gopalan, Aditya
    Mannor, Shie
    INTERNATIONAL CONFERENCE ON MACHINE LEARNING, VOL 162, 2022,
  • [5] Integrated Actor-Critic for Deep Reinforcement Learning
    Zheng, Jiaohao
    Kurt, Mehmet Necip
    Wang, Xiaodong
    ARTIFICIAL NEURAL NETWORKS AND MACHINE LEARNING - ICANN 2021, PT IV, 2021, 12894 : 505 - 518
  • [6] A fuzzy Actor-Critic reinforcement learning network
    Wang, Xue-Song
    Cheng, Yu-Hu
    Yi, Jian-Qiang
    INFORMATION SCIENCES, 2007, 177 (18) : 3764 - 3781
  • [7] A modified actor-critic reinforcement learning algorithm
    Mustapha, SM
    Lachiver, G
    2000 CANADIAN CONFERENCE ON ELECTRICAL AND COMPUTER ENGINEERING, CONFERENCE PROCEEDINGS, VOLS 1 AND 2: NAVIGATING TO A NEW ERA, 2000, : 605 - 609
  • [8] Research on actor-critic reinforcement learning in RoboCup
    Guo, He
    Liu, Tianying
    Wang, Yuxin
    Chen, Feng
    Fan, Jianming
    WCICA 2006: SIXTH WORLD CONGRESS ON INTELLIGENT CONTROL AND AUTOMATION, VOLS 1-12, CONFERENCE PROCEEDINGS, 2006, : 205 - 205
  • [9] Reinforcement actor-critic learning as a rehearsal in MicroRTS
    Manandhar, Shiron
    Banerjee, Bikramjit
    KNOWLEDGE ENGINEERING REVIEW, 2024, 39
  • [10] Multi-actor mechanism for actor-critic reinforcement learning
    Li, Lin
    Li, Yuze
    Wei, Wei
    Zhang, Yujia
    Liang, Jiye
    INFORMATION SCIENCES, 2023, 647