Adaptive Speed Control of Electric Vehicles Based on Multi-Agent Fuzzy Q-Learning

Cited by: 12
Authors
Gheisarnejad, Meysam [1 ]
Mirzavand, Ghazal [2 ]
Ardeshiri, Reza Rouhi [3 ]
Andresen, Bjorn [1 ]
Khooban, Mohammad Hassan [1 ]
Affiliations
[1] Aarhus Univ, Dept Elect & Comp Engn, DK-8200 Aarhus, Denmark
[2] Shiraz Univ Technol, Shiraz 7155713876, Iran
[3] Shanghai Jiao Tong Univ, Dept Elect Engn, Shanghai 200240, Peoples R China
Keywords
Q-learning; Force; Unified modeling language; Vehicle dynamics; Roads; Resistance; DC motors; Fuzzy Q-learning; ultra-local model (ULM); electric vehicle (EV); multi-agent system; energy management; system; tracking; strategy; choice; drive
DOI
10.1109/TETCI.2022.3181159
CLC Number
TP18 [Artificial Intelligence Theory]
Subject Classification Codes
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
Electric vehicles (EVs) are expected to account for a significant percentage of the vehicle market in the coming years. The speed regulator is an inseparable part of an EV, addressing challenges such as managing battery energy and adapting to road conditions. Since the operation of the EV system is time-variant due to parametric variations and road conditions, the regulation of such systems is a complex task that cannot be effectively accomplished by conventional deterministic control methodologies. To address the control challenges of the EV, an ultra-local model (ULM) with an extended state observer (ESO) is developed for the speed tracking problem of the system. A fuzzy Q-learning multi-agent system (MAS) is also adopted to adaptively regulate the gains of the ULM controller in an online manner. The robustness of the established controller has been assessed with experimental data based on the New European Driving Cycle (NEDC) and by altering some critical parameters of the EV test system. Moreover, real-time verification of NEDC tracking is also conducted via hardware-in-the-loop (HIL) simulation of the EV system.
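The ultra-local model named in the abstract can be illustrated with a minimal sketch. A ULM approximates the unknown plant locally as dy/dt ≈ F + α·u, where F lumps all unmodeled dynamics and is continually re-estimated from measurements, so the controller needs no plant model. Everything below is an illustrative assumption, not taken from the paper: the toy first-order "EV" plant, the fixed gain `kp`, and the scaling `alpha` (in the paper the controller gains are tuned online by a fuzzy Q-learning MAS rather than fixed, and F is estimated by an ESO rather than a raw finite difference).

```python
def ulm_speed_tracking(kp=5.0, alpha=3.0, dt=0.01, steps=2000, ref=1.0):
    """Track a constant speed reference with a toy first-order plant
    using an ultra-local-model (model-free) controller."""
    y = y_prev = u = 0.0
    for _ in range(steps):
        # Plant, unknown to the controller: dy/dt = -2*y + 3*u (illustrative)
        y_prev, y = y, y + dt * (-2.0 * y + 3.0 * u)
        # ULM: dy/dt ~= F + alpha*u; estimate F from the measured
        # derivative (a crude stand-in for the paper's ESO)
        f_hat = (y - y_prev) / dt - alpha * u
        # Intelligent-proportional law: cancel f_hat, feed back the error
        u = (-f_hat + kp * (ref - y)) / alpha  # reference derivative is 0
    return abs(ref - y)

print(ulm_speed_tracking())  # final tracking error, effectively zero
```

Because F is re-estimated every step, the same loop keeps tracking when the plant coefficients drift, which is the property that motivates ULM control for time-variant systems such as the EV drivetrain.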
Pages: 102-110 (9 pages)