Enhancing cotton irrigation with distributional actor-critic reinforcement learning

Citations: 0
Authors
Chen, Yi [1 ]
Lin, Meiwei [4 ]
Yu, Zhuo [4 ]
Sun, Weihong [4 ]
Fu, Weiguo [4 ]
He, Liang [1 ,2 ,3 ]
Affiliations
[1] Xinjiang Univ, Sch Comp Sci & Technol, Urumqi 830017, Peoples R China
[2] Tsinghua Univ, Dept Elect Engn, Beijing 100084, Peoples R China
[3] Tsinghua Univ, Beijing Natl Res Ctr Informat Sci & Technol, Beijing 100084, Peoples R China
[4] Jiangsu Univ, Sch Agr Engn, Zhenjiang 212013, Peoples R China
Keywords
Distributional reinforcement learning; Irrigation decision; DSSAT model; Agricultural management; Cotton irrigation; QUALITY; YIELD; CROP
DOI
10.1016/j.agwat.2024.109194
CLC Classification
S3 [Agronomy]
Discipline Code
0901
Abstract
Accurate predictions of irrigation's impact on crop yield are crucial for effective decision-making. However, current research predominantly focuses on the relationship between irrigation events and soil moisture, often neglecting the physiological state of the crops themselves. This study introduces a novel intelligent irrigation approach based on distributional reinforcement learning, in which the algorithm jointly considers weather, soil, and crop conditions to make irrigation decisions that are optimal for long-term benefit. To achieve this, we collected climate data from 1980 to 2024 and conducted a two-year cotton planting experiment in 2023 and 2024. We used soil and plant state indicators from five experimental groups with varying irrigation treatments to calibrate and validate the DSSAT model. We then integrated a distributional reinforcement learning method, an effective machine learning technique for continuous control problems. Our algorithm tracks 17 indicators, including crop leaf area, stem and leaf count, and soil evapotranspiration. Through a carefully designed network structure and cumulative rewards, our approach effectively captures the relationships between irrigation events and these states. Additionally, we validated the robustness and generalizability of the model using three years of extreme weather data and two consecutive years of cross-site observations. This method surpasses previous irrigation strategies managed by standard reinforcement learning techniques (e.g., DQN). Empirical results indicate that our approach significantly outperforms traditional agronomic decision-making, enhancing cotton yield by 13.6% and improving water use efficiency per kilogram of crop by 6.7%. In 2024, our method was validated in actual field experiments, achieving the highest yield among all approaches, with a 12.9% increase compared to traditional practices.
Our research provides a robust framework for intelligent cotton irrigation in the region and offers promising new directions for implementing smart agricultural decision systems across diverse areas.
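The abstract's central idea is a critic that learns a distribution over returns for each irrigation action rather than a single expected value. A heavily simplified sketch of that mechanism follows. Everything here is illustrative: the action set, reward function, quantile count, and bandit-style setting are hypothetical stand-ins, not the paper's DSSAT-coupled, 17-indicator formulation.

```python
import random

# Hedged sketch: a minimal quantile-based distributional critic for a toy
# irrigation decision problem. The environment and reward are invented for
# illustration; the paper's actual state space and simulator are far richer.

N_QUANTILES = 5
ACTIONS = [0, 10, 20, 30]  # candidate irrigation depths (mm), illustrative only

# Quantile estimates of the return for each action (a bandit-style
# simplification of a distributional critic network).
quantiles = {a: [0.0] * N_QUANTILES for a in ACTIONS}
taus = [(i + 0.5) / N_QUANTILES for i in range(N_QUANTILES)]  # target levels

def toy_reward(action, rng):
    # Hypothetical yield response: peaks near 20 mm, with weather noise.
    return -(action - 20) ** 2 / 40.0 + rng.gauss(0.0, 1.0)

def update(action, reward, lr=0.05):
    # Stochastic quantile-regression update: each estimate moves toward the
    # observed return, asymmetrically weighted by its target level tau.
    for i, tau in enumerate(taus):
        indicator = 1.0 if reward < quantiles[action][i] else 0.0
        quantiles[action][i] += lr * (tau - indicator)

def best_action():
    # Act greedily on the mean of the learned return distribution.
    return max(ACTIONS, key=lambda a: sum(quantiles[a]) / N_QUANTILES)

rng = random.Random(0)
for _ in range(5000):
    a = rng.choice(ACTIONS)  # uniform exploration, for the sketch only
    update(a, toy_reward(a, rng))

print(best_action())
```

Because the critic represents a whole return distribution, downstream policies can act on its mean (as here) or on risk-sensitive statistics such as lower quantiles, which is one motivation for distributional methods under uncertain weather.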
Pages: 14