A two-stage negotiation strategy based on multi-agent using Q-learning in direct power purchase with large consumers

Cited by: 0
Authors:
Zhang, Senlin [1 ]
Qu, Shaoqing [1 ]
Chen, Haoyong [1 ]
Zhang, Hao [2 ]
Jing, Zhaoxia [1 ]
Kuang, Weihong [1 ]
Affiliations:
[1] South China University of Technology, Guangzhou 510640, China
[2] Power Exchange Center of Northwest Power Grid, Xi'an 710000, China
Source:
Dianli Xitong Zidonghua/Automation of Electric Power Systems | 2010, Vol. 34, Issue 06
Keywords:
Electric industry; Sales; Competition; Multi-agent systems; Learning algorithms; Power markets
DOI: not available
Abstract:
The negotiation actions of different traders in the process of direct power purchase by large consumers are simulated using multi-agent technology. With a Q-learning algorithm based on historical data, each agent strengthens its own learning capacity and adjusts its bid price in a timely manner in response to its opponent's actions. Meanwhile, to ensure fairness of market competition, a two-stage negotiation mechanism of "negotiation + auction" is proposed. This mechanism gives a second opportunity to a generator agent that has a lower reserve price but fails to reach an agreement because it underestimated the situation during negotiations. It also ensures that contract power prices reflect the real differences in generating costs, and motivates generators to gain the negotiating initiative by lowering their costs. ©2010 State Grid Electric Power Research Institute Press.
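The per-agent bid adjustment described in the abstract can be sketched as a standard tabular Q-learning update. The state names, action set, and reward shape below are illustrative assumptions for exposition only, not the authors' actual market model:

```python
import random

# Hypothetical discrete bid adjustments available to a negotiating agent.
ACTIONS = ["lower_bid", "hold_bid", "raise_bid"]

class NegotiatingAgent:
    """Minimal tabular Q-learning agent (sketch, not the paper's model)."""

    def __init__(self, alpha=0.1, gamma=0.9, epsilon=0.2):
        self.q = {}             # maps (state, action) -> estimated value
        self.alpha = alpha      # learning rate
        self.gamma = gamma      # discount factor
        self.epsilon = epsilon  # exploration probability

    def choose(self, state):
        # Epsilon-greedy selection over the Q-table.
        if random.random() < self.epsilon:
            return random.choice(ACTIONS)
        return max(ACTIONS, key=lambda a: self.q.get((state, a), 0.0))

    def update(self, state, action, reward, next_state):
        # Q-learning update:
        # Q(s,a) <- Q(s,a) + alpha * (r + gamma * max_a' Q(s',a') - Q(s,a))
        best_next = max(self.q.get((next_state, a), 0.0) for a in ACTIONS)
        old = self.q.get((state, action), 0.0)
        self.q[(state, action)] = old + self.alpha * (
            reward + self.gamma * best_next - old
        )
```

In a negotiation round, the agent would observe a coarse state (e.g. the gap between its bid and the opponent's offer), pick an action, receive a reward when a contract is or is not concluded, and apply `update` using the stored history, which is how the "learning from historical data" in the abstract is typically realized in tabular Q-learning.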
Pages: 37-41
Related Papers (50 records)
  • [31] Distributed secondary control for DC microgrids using two-stage multi-agent reinforcement learning
    Li, Fei
    Tu, Weifei
    Zhou, Yun
    Li, Heng
    Zhou, Feng
    Liu, Weirong
    Hu, Chao
    INTERNATIONAL JOURNAL OF ELECTRICAL POWER & ENERGY SYSTEMS, 2025, 164
  • [32] Joint Spectrum and Power Allocation in Wireless Network: A Two-Stage Multi-Agent Reinforcement Learning Method
    Dai, Pengcheng
    Wang, He
    Hou, Huazhou
    Qian, Xusheng
    Yu, Wenwu
    IEEE TRANSACTIONS ON EMERGING TOPICS IN COMPUTATIONAL INTELLIGENCE, 2024, 8 (03): : 2364 - 2374
  • [33] Q-learning based cooperative multi-agent system applied to coordination of overcurrent relays
    Sadeh, J.
    Rahimiyan, M.
    Journal of Applied Sciences, 2008, 8 (21) : 3924 - 3930
  • [34] An Enterprise Multi-agent Model with Game Q-Learning Based on a Single Decision Factor
    Xu, Siying
    Zhang, Gaoyu
    Yuan, Xianzhi
    COMPUTATIONAL ECONOMICS, 2024, 64 (04) : 2523 - 2562
  • [35] Deep Q-Learning and Preference Based Multi-Agent System for Sustainable Agricultural Market
    Perez-Pons, Maria E.
    Alonso, Ricardo S.
    Garcia, Oscar
    Marreiros, Goreti
    Corchado, Juan Manuel
    SENSORS, 2021, 21 (16)
  • [36] Adaptive Speed Control of Electric Vehicles Based on Multi-Agent Fuzzy Q-Learning
    Gheisarnejad, Meysam
    Mirzavand, Ghazal
    Ardeshiri, Reza Rouhi
    Andresen, Bjorn
    Khooban, Mohammad Hassan
    IEEE TRANSACTIONS ON EMERGING TOPICS IN COMPUTATIONAL INTELLIGENCE, 2023, 7 (01): : 102 - 110
  • [37] Multi-agent Q-Learning of Channel Selection in Multi-user Cognitive Radio Systems: A Two by Two Case
    Li, Husheng
    2009 IEEE INTERNATIONAL CONFERENCE ON SYSTEMS, MAN AND CYBERNETICS (SMC 2009), VOLS 1-9, 2009, : 1893 - 1898
  • [38] Two-Stage Distributed Robust Optimization Scheduling Considering Demand Response and Direct Purchase of Electricity by Large Consumers
    Yang, Zhaorui
    He, Yu
    Zhang, Jing
    Zhang, Zijian
    Luo, Jie
    Gan, Guomin
    Xiang, Jie
    Zou, Yang
    ELECTRONICS, 2024, 13 (18)
  • [39] Two-stage graph attention networks and Q-learning based maintenance tasks scheduling
    Gao, Xiaoyong
    Peng, Diao
    Yang, Yixu
    Huang, Fuyu
    Yuan, Yu
    Tan, Chaodong
    Li, Feifei
    APPLIED INTELLIGENCE, 2025, 55 (05)
  • [40] Resource Allocation for Multi-user Cognitive Radio Systems using Multi-agent Q-Learning
    Azzouna, Ahmed
    Guezmil, Amel
    Sakly, Anis
    Mtibaa, Abdellatif
    ANT 2012 AND MOBIWIS 2012, 2012, 10 : 46 - 53