DRLIC: Deep Reinforcement Learning for Irrigation Control

Times Cited: 10

Authors
Ding, Xianzhong [1 ]
Du, Wan [1 ]
Affiliations
[1] Univ Calif Merced, Merced, CA 95343 USA
Funding
U.S. National Science Foundation (NSF)
Keywords
INFILTRATION; PREDICTION;
DOI
10.1109/IPSN54338.2022.00011
Chinese Library Classification
TP [automation technology; computer technology]
Discipline code
0812
Abstract
Agricultural irrigation is a major consumer of freshwater. Current irrigation systems used in the field are inefficient, since they rely mainly on soil moisture sensors' measurements and growers' experience rather than on future soil moisture loss. Soil moisture loss is hard to predict, as it depends on a variety of factors, such as soil texture, weather, and plant characteristics. To improve irrigation efficiency, this paper presents DRLIC, a deep reinforcement learning (DRL)-based irrigation system. DRLIC uses a neural network (the DRL control agent) to learn an optimal control policy that takes both the current soil moisture measurement and future soil moisture loss into account. We define an irrigation reward function that enables the control agent to learn from past experience. Occasionally, the DRL control agent may output an unsafe action (e.g., irrigating too much or too little). To prevent any damage to plant health, we adopt a safety mechanism that leverages a soil moisture predictor to estimate each action's effect; if an action is predicted to be unsafe, a relatively conservative action is performed instead. Finally, we develop a real-world irrigation system composed of sprinklers, sensing and control nodes, and a wireless network. We deploy DRLIC in a testbed of six almond trees. Through a 15-day in-field experiment, we find that DRLIC can save up to 9.52% of water over a widely used irrigation scheme.
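The safety mechanism described in the abstract can be sketched as a simple action filter. This is a minimal illustrative sketch, not the paper's implementation: all names (`predict_moisture`, `safe_irrigation`, the safe moisture band, and the linear water-balance predictor) are assumptions introduced here for clarity.

```python
# Hypothetical sketch of DRLIC's safety fallback: a soil moisture predictor
# estimates the effect of the DRL agent's proposed irrigation amount; if the
# predicted moisture falls outside an assumed safe band, a conservative
# rule-based action replaces the agent's action.

SAFE_MIN, SAFE_MAX = 0.20, 0.35   # assumed safe volumetric moisture band
MM_TO_VWC = 0.01                  # assumed conversion: mm of water -> moisture

def predict_moisture(moisture, irrigation_mm, daily_loss):
    """Toy predictor: linear water balance (an assumption, not the paper's model)."""
    return moisture + MM_TO_VWC * irrigation_mm - daily_loss

def conservative_action(moisture, daily_loss):
    """Fallback rule: irrigate just enough to stay at the lower safe bound."""
    deficit = max(0.0, SAFE_MIN - (moisture - daily_loss))
    return deficit / MM_TO_VWC

def safe_irrigation(agent_action_mm, moisture, daily_loss):
    """Accept the agent's action only if its predicted outcome is safe."""
    predicted = predict_moisture(moisture, agent_action_mm, daily_loss)
    if SAFE_MIN <= predicted <= SAFE_MAX:
        return agent_action_mm            # agent action judged safe
    return conservative_action(moisture, daily_loss)
```

For example, with moisture at 0.25 and a daily loss of 0.03, an agent action of 0 mm is predicted to leave the soil at 0.22 (inside the band) and is accepted; with moisture at 0.21 and a loss of 0.05, the same action is predicted to undershoot the band, so the conservative fallback irrigates instead.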
Pages: 41-53
Page count: 13