Enhancing cotton irrigation with distributional actor-critic reinforcement learning

Cited: 0
Authors
Chen, Yi [1 ]
Lin, Meiwei [4 ]
Yu, Zhuo [4 ]
Sun, Weihong [4 ]
Fu, Weiguo [4 ]
He, Liang [1 ,2 ,3 ]
Affiliations
[1] Xinjiang Univ, Sch Comp Sci & Technol, Urumqi 830017, Peoples R China
[2] Tsinghua Univ, Dept Elect Engn, Beijing 100084, Peoples R China
[3] Tsinghua Univ, Beijing Natl Res Ctr Informat Sci & Technol, Beijing 100084, Peoples R China
[4] Jiangsu Univ, Sch Agr Engn, Zhenjiang 212013, Peoples R China
Keywords
Distributional reinforcement learning; Irrigation decision; DSSAT model; Agricultural management; Cotton irrigation; Quality; Yield; Crop
DOI
10.1016/j.agwat.2024.109194
CLC Classification
S3 [Agronomy];
Discipline Code
0901
Abstract
Accurate prediction of irrigation's impact on crop yield is crucial for effective decision-making. However, current research predominantly focuses on the relationship between irrigation events and soil moisture, often neglecting the physiological state of the crops themselves. This study introduces a novel intelligent irrigation approach based on distributional reinforcement learning that jointly considers weather, soil, and crop conditions to make irrigation decisions that are optimal for long-term benefit. To this end, we collected climate data from 1980 to 2024 and conducted a two-year cotton planting experiment in 2023 and 2024. Soil and plant state indicators from five experimental groups under different irrigation treatments were used to calibrate and validate the DSSAT model. We then integrated a distributional reinforcement learning method, an effective machine learning technique for continuous control problems. Our algorithm tracks 17 indicators, including crop leaf area, stem leaf count, and soil evapotranspiration. Through a carefully designed network structure and cumulative rewards, the approach effectively captures the relationships between irrigation events and these states. We further validated the robustness and generalizability of the model using three years of extreme-weather data and two consecutive years of cross-site observations. The method surpasses irrigation strategies managed by standard reinforcement learning techniques (e.g., DQN). Empirical results indicate that our approach significantly outperforms traditional agronomic decision-making, increasing cotton yield by 13.6% and improving water use efficiency by 6.7% per kilogram of crop. In 2024, the method was validated in actual field experiments, achieving the highest yield among all approaches, a 12.9% increase over traditional practices.
Our research provides a robust framework for intelligent cotton irrigation in the region and offers promising new directions for implementing smart agricultural decision systems across diverse areas.
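The abstract describes a distributional actor-critic controller, whose defining ingredient is a critic that learns a distribution over cumulative rewards rather than a single expected value. As a hedged illustration only, the following NumPy sketch shows a quantile-regression critic update of the kind used in distributional RL; the quantile count, learning rate, and toy reward targets are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

# Illustrative sketch (not the paper's code): a quantile-regression
# critic, the core of distributional RL, maintains N quantile
# estimates of the return distribution instead of one scalar value.

N_QUANTILES = 51
taus = (np.arange(N_QUANTILES) + 0.5) / N_QUANTILES  # quantile midpoints

def quantile_huber_grad(theta, targets, kappa=1.0):
    """Gradient of the quantile Huber loss w.r.t. the critic's
    quantile estimates theta (shape [N]) for sampled Bellman
    targets (shape [M])."""
    u = targets[None, :] - theta[:, None]       # [N, M] TD errors
    huber = np.clip(u, -kappa, kappa)           # Huber-smoothed error
    weight = np.abs(taus[:, None] - (u < 0.0))  # asymmetric quantile weight
    return -(weight * huber).mean(axis=1)

# Toy demo: fit the distribution of a noisy stand-in "reward" signal.
rng = np.random.default_rng(0)
theta = np.zeros(N_QUANTILES)
for _ in range(2000):
    targets = rng.normal(10.0, 2.0, size=32)    # illustrative targets
    theta -= 0.1 * quantile_huber_grad(theta, targets)

# theta now approximates the quantiles of N(10, 2): its mean is close
# to the target mean, and its spread encodes the return uncertainty
# that a distributional actor can exploit for risk-aware decisions.
print(theta.mean(), theta[-1] - theta[0])
```

In this framing, the actor would pick irrigation amounts by scoring them against the learned return distribution (e.g., its mean or a risk-sensitive statistic), which is what distinguishes distributional methods from standard actor-critic or DQN baselines.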
Pages: 14