Adaptive Speed Control of Electric Vehicles Based on Multi-Agent Fuzzy Q-Learning

Cited by: 12
Authors
Gheisarnejad, Meysam [1 ]
Mirzavand, Ghazal [2 ]
Ardeshiri, Reza Rouhi [3 ]
Andresen, Bjorn [1 ]
Khooban, Mohammad Hassan [1 ]
Affiliations
[1] Aarhus Univ, Dept Elect & Comp Engn, DK-8200 Aarhus, Denmark
[2] Shiraz Univ Technol, Shiraz 7155713876, Iran
[3] Shanghai Jiao Tong Univ, Dept Elect Engn, Shanghai 200240, Peoples R China
Keywords
Q-learning; Force; Unified modeling language; Vehicle dynamics; Roads; Resistance; DC motors; Fuzzy Q-learning; ultra-local model (ULM); electric vehicle (EV); multi-agent system; ENERGY MANAGEMENT; SYSTEM; TRACKING; STRATEGY; CHOICE; DRIVE
DOI
10.1109/TETCI.2022.3181159
CLC number
TP18 [Artificial intelligence theory]
Discipline codes
081104; 0812; 0835; 1405
Abstract
Electric vehicles (EVs) are expected to account for a significant percentage of the vehicle market in the coming years. The speed regulator is an inseparable part of an EV, addressing challenges such as managing battery energy and adapting to road conditions. Since the operation of the EV system is time-variant due to parametric variations and road conditions, regulating such a system is a complex task that cannot be effectively accomplished by conventional deterministic control methodologies. To address the control challenges of the EV, an ultra-local model (ULM) with an extended state observer (ESO) is developed for the speed-tracking problem of the system. A fuzzy Q-learning multi-agent system (MAS) is also adopted to adaptively regulate the gains of the ULM controller in an online manner. The robustness of the established controller is assessed with experimental data based on the New European Driving Cycle (NEDC) and by altering some critical parameters of the EV test system. Moreover, real-time verification of NEDC tracking is conducted by hardware-in-the-loop (HIL) simulation of the EV system.
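The abstract pairs a model-free ultra-local-model controller with a Q-learning agent that tunes the controller gain online. As a rough illustration only (not the paper's fuzzy MAS implementation), the sketch below combines an intelligent-proportional ULM law with a plain tabular Q-learning gain selector on a hypothetical first-order speed plant; the plant coefficients `a`, `b`, the ULM gain `alpha`, the candidate-gain set `KP_SET`, and the reward are all assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical first-order speed plant: y_dot = -a*y + b*u
a, b = 0.5, 2.0
dt = 0.01

def plant_step(y, u):
    return y + dt * (-a * y + b * u)

# Ultra-local model: y_dot ~= F + alpha*u, with F estimated from recent I/O
alpha = 2.0  # assumed ULM input gain

# Tabular Q-learning agent choosing the feedback gain Kp online
KP_SET = [1.0, 5.0, 10.0]          # candidate gains (assumption)
N_STATES = 5                       # coarse |error| bins
Q = np.zeros((N_STATES, len(KP_SET)))
eps, lr, gamma = 0.1, 0.5, 0.9

def state_of(e):
    return min(int(abs(e) / 0.5), N_STATES - 1)

ref = 1.0                          # step speed reference
y, y_prev, u_prev = 0.0, 0.0, 0.0
s = state_of(ref - y)
for k in range(2000):
    # epsilon-greedy selection of the controller gain
    a_idx = rng.integers(len(KP_SET)) if rng.random() < eps else int(np.argmax(Q[s]))
    Kp = KP_SET[a_idx]

    # crude one-step estimate of the unknown dynamics term F
    F_hat = (y - y_prev) / dt - alpha * u_prev

    e = ref - y
    u = (-F_hat + Kp * e) / alpha  # intelligent-proportional (iP) control law

    y_prev, u_prev = y, u
    y = plant_step(y, u)

    # reward penalizes tracking error; standard Q-table update
    e_next = ref - y
    s_next = state_of(e_next)
    Q[s, a_idx] += lr * (-abs(e_next) + gamma * np.max(Q[s_next]) - Q[s, a_idx])
    s = s_next

# after the episode, y should track ref closely despite the unknown plant
```

The iP law cancels the estimated lumped dynamics `F_hat`, so the closed loop behaves approximately as `e_dot = -Kp*e`; the Q-table only has to learn which `Kp` drives the error down fastest in each error bin, which is a heavily simplified stand-in for the paper's fuzzy multi-agent tuner.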
Pages: 102-110 (9 pages)