Enhancing cotton irrigation with distributional actor-critic reinforcement learning

Cited: 0
Authors
Chen, Yi [1 ]
Lin, Meiwei [4 ]
Yu, Zhuo [4 ]
Sun, Weihong [4 ]
Fu, Weiguo [4 ]
He, Liang [1 ,2 ,3 ]
Affiliations
[1] Xinjiang Univ, Sch Comp Sci & Technol, Urumqi 830017, Peoples R China
[2] Tsinghua Univ, Dept Elect Engn, Beijing 100084, Peoples R China
[3] Tsinghua Univ, Beijing Natl Res Ctr Informat Sci & Technol, Beijing 100084, Peoples R China
[4] Jiangsu Univ, Sch Agr Engn, Zhenjiang 212013, Peoples R China
Keywords
Distributional reinforcement learning; Irrigation decision; DSSAT model; Agricultural management; Cotton irrigation; QUALITY; YIELD; CROP
DOI
10.1016/j.agwat.2024.109194
CLC classification
S3 [Agronomy];
Discipline code
0901
Abstract
Accurate prediction of irrigation's impact on crop yield is crucial for effective decision-making. However, current research focuses predominantly on the relationship between irrigation events and soil moisture, often neglecting the physiological state of the crop itself. This study introduces an intelligent irrigation approach based on distributional reinforcement learning that jointly considers weather, soil, and crop conditions to make irrigation decisions that are optimal for long-term benefit. To achieve this, we collected climate data from 1980 to 2024 and conducted a two-year cotton planting experiment in 2023 and 2024. Soil and plant state indicators from five experimental groups with varying irrigation treatments were used to calibrate and validate the DSSAT model. We then integrated a distributional reinforcement learning method, an effective machine learning technique for continuous control problems. Our algorithm acts on 17 indicators, including crop leaf area, stem leaf count, and soil evapotranspiration. Through a carefully designed network structure and cumulative rewards, the approach captures the relationships between irrigation events and these states. We further validated the robustness and generalizability of the model using three years of extreme weather data and two consecutive years of cross-site observations. The method surpasses irrigation strategies managed by standard reinforcement learning techniques (e.g., DQN). Empirical results show that our approach significantly outperforms traditional agronomic decision-making, increasing cotton yield by 13.6% and improving water use efficiency (water consumed per kilogram of crop) by 6.7%. In 2024, the method was validated in actual field experiments, achieving the highest yield among all approaches, a 12.9% increase over traditional practice. Our research provides a robust framework for intelligent cotton irrigation in the region and offers promising directions for smart agricultural decision systems across diverse areas.
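To make the approach concrete, below is a minimal, illustrative sketch of a distributional actor-critic of the kind the abstract describes. It is not the authors' implementation: the quantile-regression critic, the network sizes, the PyTorch framework, and the toy batch are all assumptions for illustration; only the 17-dimensional state and the continuous irrigation action follow the abstract. The critic predicts quantiles of the return distribution rather than a single expected value, and the actor is updated to maximize the mean of those quantiles.

```python
# Illustrative distributional actor-critic sketch (PyTorch assumed; not the
# paper's code). State: the 17 indicators from the abstract; action: a single
# continuous irrigation amount scaled to [0, 1].
import torch
import torch.nn as nn

STATE_DIM, ACTION_DIM, N_QUANTILES = 17, 1, 32

class Actor(nn.Module):
    """Maps the crop/soil/weather state to an irrigation amount in [0, 1]."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(STATE_DIM, 64), nn.ReLU(),
            nn.Linear(64, ACTION_DIM), nn.Sigmoid())

    def forward(self, state):
        return self.net(state)

class QuantileCritic(nn.Module):
    """Predicts N_QUANTILES quantiles of the return instead of one Q-value."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(STATE_DIM + ACTION_DIM, 64), nn.ReLU(),
            nn.Linear(64, N_QUANTILES))

    def forward(self, state, action):
        return self.net(torch.cat([state, action], dim=-1))  # (B, N_QUANTILES)

def quantile_huber_loss(pred, target, kappa=1.0):
    """Quantile-regression Huber loss (as in QR-DQN, Dabney et al. 2018)."""
    taus = (torch.arange(N_QUANTILES, dtype=torch.float32) + 0.5) / N_QUANTILES
    u = target.unsqueeze(-2) - pred.unsqueeze(-1)  # pairwise TD errors, (B, N, N)
    huber = torch.where(u.abs() <= kappa, 0.5 * u**2, kappa * (u.abs() - 0.5 * kappa))
    return ((taus.view(1, -1, 1) - (u.detach() < 0).float()).abs() * huber / kappa).mean()

# One illustrative update on a fake batch; a real run would roll the policy
# out in a DSSAT-based simulation and sample transitions from a replay buffer.
actor, critic = Actor(), QuantileCritic()
s, s_next = torch.randn(8, STATE_DIM), torch.randn(8, STATE_DIM)
a = torch.rand(8, ACTION_DIM)
reward = torch.randn(8, 1)  # e.g. yield gain minus a water-cost penalty (assumed)
gamma = 0.99

with torch.no_grad():
    target = reward + gamma * critic(s_next, actor(s_next))  # target quantiles
critic_loss = quantile_huber_loss(critic(s, a), target)
actor_loss = -critic(s, actor(s)).mean()  # maximize the mean of the return distribution
```

A full training loop would additionally need target networks, an exploration scheme, and a reward that balances yield against water use; the abstract does not specify these details, so the choices above are placeholders.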
Pages: 14
Related papers
Showing 21-30 of 50
  • [21] Uncertainty Weighted Actor-Critic for Offline Reinforcement Learning
    Wu, Yue
    Zhai, Shuangfei
    Srivastava, Nitish
    Susskind, Joshua
    Zhang, Jian
    Salakhutdinov, Ruslan
    Goh, Hanlin
    INTERNATIONAL CONFERENCE ON MACHINE LEARNING, VOL 139, 2021, 139
  • [22] Deep Actor-Critic Reinforcement Learning for Anomaly Detection
    Zhong, Chen
    Gursoy, M. Cenk
    Velipasalar, Senem
    2019 IEEE GLOBAL COMMUNICATIONS CONFERENCE (GLOBECOM), 2019,
  • [23] MARS: Malleable Actor-Critic Reinforcement Learning Scheduler
    Baheri, Betis
    Tronge, Jacob
    Fang, Bo
    Li, Ang
    Chaudhary, Vipin
    Guan, Qiang
    2022 IEEE INTERNATIONAL PERFORMANCE, COMPUTING, AND COMMUNICATIONS CONFERENCE, IPCCC, 2022,
  • [24] Averaged Soft Actor-Critic for Deep Reinforcement Learning
    Ding, Feng
    Ma, Guanfeng
    Chen, Zhikui
    Gao, Jing
    Li, Peng
    COMPLEXITY, 2021, 2021
  • [25] GMAC: A Distributional Perspective on Actor-Critic Framework
    Nam, Daniel Wontae
    Kim, Younghoon
    Park, Chan Y.
    INTERNATIONAL CONFERENCE ON MACHINE LEARNING, VOL 139, 2021, 139
  • [26] Distributional Soft Actor-Critic: Off-Policy Reinforcement Learning for Addressing Value Estimation Errors
    Duan, Jingliang
    Guan, Yang
    Li, Shengbo Eben
    Ren, Yangang
    Sun, Qi
    Cheng, Bo
    IEEE TRANSACTIONS ON NEURAL NETWORKS AND LEARNING SYSTEMS, 2022, 33 (11) : 6584 - 6598
  • [27] Learning Locomotion for Quadruped Robots via Distributional Ensemble Actor-Critic
    Li, Sicen
    Pang, Yiming
    Bai, Panju
    Li, Jiawei
    Liu, Zhaojin
    Hu, Shihao
    Wang, Liquan
    Wang, Gang
    IEEE ROBOTICS AND AUTOMATION LETTERS, 2024, 9 (02) : 1811 - 1818
  • [28] Forward Actor-Critic for Nonlinear Function Approximation in Reinforcement Learning
    Veeriah, Vivek
    van Seijen, Harm
    Sutton, Richard S.
    AAMAS'17: PROCEEDINGS OF THE 16TH INTERNATIONAL CONFERENCE ON AUTONOMOUS AGENTS AND MULTIAGENT SYSTEMS, 2017, : 556 - 564
  • [29] THE APPLICATION OF ACTOR-CRITIC REINFORCEMENT LEARNING FOR FAB DISPATCHING SCHEDULING
    Kim, Namyong
    Shin, Hayong
    2017 WINTER SIMULATION CONFERENCE (WSC), 2017, : 4570 - 4571
  • [30] ACTOR-CRITIC DEEP REINFORCEMENT LEARNING FOR DYNAMIC MULTICHANNEL ACCESS
    Zhong, Chen
    Lu, Ziyang
    Gursoy, M. Cenk
    Velipasalar, Senem
    2018 IEEE GLOBAL CONFERENCE ON SIGNAL AND INFORMATION PROCESSING (GLOBALSIP 2018), 2018, : 599 - 603