Distributed Multi-Agent Reinforcement Learning for Autonomous Management of Renewable Energy Microgrids

Cited by: 0
Authors
Nuvvula, Ramakrishna S. S. [1 ]
Kumar, Polamarasetty P. [2 ]
Ahammed, Syed Riyaz [3 ]
Satyanarayana, Vanam. [4 ]
Babu, Bachina Harish [5 ]
Reddy, R. Siva Subramanyam [6 ]
Ali, Ahmed [7 ]
Affiliations
[1] NITTE Deemed to be Univ, NMAM Inst Technol, Dept Elect & Elect Engn, Mangaluru, Karnataka, India
[2] GMR Inst Technol, Dept Elect & Elect Engn, Rajam, India
[3] NITTE Deemed to be Univ, NMAM Inst Technol, Dept Elect & Commun Engn, Mangaluru, Karnataka, India
[4] Vaagdevi Coll Engn, Dept Elect Engn, Warangal, Telangana, India
[5] VNR Vignana Jyothi Inst Engn & Technol, Dept Automobile Engn, Hyderabad, Telangana, India
[6] Sri Kalahasteeswara Inst Technol SKIT, Dept EEE, Srikalahasti, Andhra Pradesh, India
[7] Univ Johannesburg, Dept Elect & Elect Engn Technol, Johannesburg, South Africa
Keywords
Multi-Agent Reinforcement Learning (MARL); Renewable Energy Microgrids; Autonomous Microgrid Management; Grid Reliability; Energy Efficiency;
DOI
10.1109/icSmartGrid61824.2024.10578150
Chinese Library Classification (CLC)
TE [Petroleum and Natural Gas Industry]; TK [Energy and Power Engineering]
Subject Classification Codes
0807; 0820
Abstract
This study explores the application of Multi-Agent Reinforcement Learning (MARL) in the autonomous management of renewable energy microgrids, assessing its impact on key performance metrics including grid reliability, energy efficiency, cost reduction, and scalability. Using a series of experiments comparing the MARL framework with traditional rule-based methods, we observed significant improvements across multiple dimensions. The MARL framework demonstrated a 75% reduction in blackout rates and a 60% reduction in recovery time after blackouts, enhancing grid reliability. In terms of energy efficiency, the renewable energy utilization ratio increased by 7%, reaching 87%, while energy losses decreased by 25%. Operational efficiency also improved by 10 percentage points, indicating a more efficient energy management process. Regarding cost reduction, the MARL framework achieved a 33% reduction in energy cost per kWh, leading to annual savings of approximately $7,300 for an average production rate of 500 kWh per day. Maintenance costs were reduced by 30%, and system downtime was shortened by 60%, resulting in additional annual savings. The total cost reduction with the MARL framework was estimated at $10,000 per year, underscoring the financial benefits of this approach. The scalability and adaptability of the MARL framework further highlight its potential for broader applications in the energy sector. Given the demonstrated improvements in reliability, energy efficiency, and cost reduction, this study suggests that MARL-based management can contribute to more sustainable and resilient renewable energy microgrids.
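The reported cost figures can be cross-checked with a short back-of-envelope calculation. The sketch below is an illustrative check only, assuming 365 operating days per year and that the 33% reduction is measured against a baseline cost per kWh; the implied baseline of roughly $0.12/kWh is an inference from the stated numbers, not a figure given in the record.

```python
# Back-of-envelope check of the cost figures reported in the abstract.
# Assumptions (not stated in the record): 365 operating days per year,
# and the 33% reduction is relative to a baseline cost per kWh.

DAILY_PRODUCTION_KWH = 500          # average production rate (from abstract)
ANNUAL_SAVINGS_USD = 7_300          # reported annual energy-cost savings
COST_REDUCTION_FRACTION = 0.33      # reported reduction in cost per kWh

annual_energy_kwh = DAILY_PRODUCTION_KWH * 365                  # ~182,500 kWh/year
savings_per_kwh = ANNUAL_SAVINGS_USD / annual_energy_kwh        # ~$0.04/kWh saved
implied_baseline = savings_per_kwh / COST_REDUCTION_FRACTION    # ~$0.12/kWh baseline

print(f"Annual energy produced: {annual_energy_kwh:,.0f} kWh")
print(f"Savings per kWh:        ${savings_per_kwh:.3f}")
print(f"Implied baseline cost:  ${implied_baseline:.3f}/kWh")
```

Under these assumptions the $7,300 annual savings and the 33% per-kWh reduction are mutually consistent for a baseline tariff of about $0.12/kWh.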
Pages: 298-304
Number of pages: 7
Related Papers
50 records (showing 41-50)
  • [41] Multi-Agent Based Energy Management in Microgrids Using MACSimJX
    Lakhina, Upasana
Elamvazuthi, I.
    Badruddin, Nasreen
    Meriaudeau, F.
    Ramasamy, G.
    Jangra, Ajay
    [J]. 2019 17TH IEEE STUDENT CONFERENCE ON RESEARCH AND DEVELOPMENT (SCORED), 2019, : 333 - 338
  • [42] Towards a Distributed Framework for Multi-Agent Reinforcement Learning Research
    Zhou, Yutai
    Manuel, Shawn
    Morales, Peter
    Li, Sheng
    Pena, Jaime
    Allen, Ross
    [J]. 2020 IEEE HIGH PERFORMANCE EXTREME COMPUTING CONFERENCE (HPEC), 2020,
  • [43] Multi-Agent Deep Reinforcement Learning for Distributed Load Restoration
    Linh Vu
    Tuyen Vu
    Thanh Long Vu
    Srivastava, Anurag
    [J]. IEEE TRANSACTIONS ON SMART GRID, 2024, 15 (02) : 1749 - 1760
  • [44] Distributed Inverse Constrained Reinforcement Learning for Multi-agent Systems
    Liu, Shicheng
    Zhu, Minghui
    [J]. ADVANCES IN NEURAL INFORMATION PROCESSING SYSTEMS 35 (NEURIPS 2022), 2022,
  • [45] A Multi-agent Reinforcement Learning Perspective on Distributed Traffic Engineering
    Geng, Nan
    Lan, Tian
    Aggarwal, Vaneet
    Yang, Yuan
    Xu, Mingwei
    [J]. 2020 IEEE 28TH INTERNATIONAL CONFERENCE ON NETWORK PROTOCOLS (IEEE ICNP 2020), 2020,
  • [46] Distributed hierarchical reinforcement learning in multi-agent adversarial environments
    Naderializadeh, Navid
    Soleyman, Sean
    Hung, Fan
    Khosla, Deepak
    Chen, Yang
    Fadaie, Joshua G.
    [J]. ARTIFICIAL INTELLIGENCE AND MACHINE LEARNING FOR MULTI-DOMAIN OPERATIONS APPLICATIONS IV, 2022, 12113
  • [47] Cooperative Multi-agent Reinforcement Learning for Inventory Management
    Khirwar, Madhav
    Gurumoorthy, Karthik S.
    Jain, Ankit Ajit
    Manchenahally, Shantala
    [J]. MACHINE LEARNING AND KNOWLEDGE DISCOVERY IN DATABASES: APPLIED DATA SCIENCE AND DEMO TRACK, ECML PKDD 2023, PT VI, 2023, 14174 : 619 - 634
  • [48] A decentralised multi-agent approach to enhance the stability of smart microgrids with renewable energy
    Rahman, M. S.
    Pota, H. R.
    Mahmud, M. A.
    Hossain, M. J.
    [J]. INTERNATIONAL JOURNAL OF SUSTAINABLE ENERGY, 2016, 35 (05) : 429 - 442
  • [49] Energy management based on multi-agent deep reinforcement learning for a multi-energy industrial park
    Zhu, Dafeng
    Yang, Bo
    Liu, Yuxiang
    Wang, Zhaojian
    Ma, Kai
    Guan, Xinping
    [J]. APPLIED ENERGY, 2022, 311
  • [50] Multi-Agent Reinforcement Learning
    Stankovic, Milos
    [J]. 2016 13TH SYMPOSIUM ON NEURAL NETWORKS AND APPLICATIONS (NEUREL), 2016, : 43 - 43