Conceptual closed-loop design of automotive cooling systems leveraging Reinforcement Learning

Cited by: 0
Authors
Vanhuyse, Johan [1 ]
Bertheaume, Clement [1 ]
Gumussoy, Suat [2 ]
Nicolai, Mike [1 ]
Affiliations
[1] Siemens Ind Software, Interleuvenlaan 68, B-3001 Heverlee, Belgium
[2] Siemens Technol, 755 Coll Rd E, Princeton, NJ 08540 USA
Keywords
Reinforcement learning; Thermal systems; Automotive; Generative engineering;
DOI
10.1007/s10010-025-00814-1
Chinese Library Classification
T [Industrial Technology];
Discipline Classification Code
08;
Abstract
The transition from conventional to battery electric vehicles has significantly altered cooling system requirements. Previously, the primary component to cool was the combustion engine, whose waste heat could also be used to heat the passenger compartment. In battery electric vehicles, the electric motor, inverter, and battery each operate optimally within different temperature ranges, with battery aging particularly affected by non-optimal temperatures. Consequently, the design of cooling systems for electric vehicles is a topic of high interest, requiring the comparison of various concepts to identify the best solution. Since the behaviour and performance of cooling system concepts largely depend on the control strategy employed, it is essential to consider this aspect for proper evaluation of their closed-loop performance. Reinforcement Learning (RL) offers a promising approach to rapidly design control strategies for thermal systems, as it learns these strategies based on a high-level objective function. Traditionally, training an RL controller involves considerable manual effort, such as hyperparameter tuning. This paper investigates whether RL can be applied to a cooling system to evaluate its optimal closed-loop performance with minimal manual tuning effort.
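The abstract's core idea — learning a cooling-control strategy from a high-level objective function rather than hand-designing the controller — can be illustrated with a minimal, self-contained sketch: tabular Q-learning on a toy first-order battery thermal model. This is not the paper's actual setup; the model coefficients, the assumed optimal temperature band, and the reward shape (stay in band, penalise pump energy) are all illustrative assumptions.

```python
import random

# Toy battery thermal model: temperature rises with a constant heat load and
# falls with coolant pump duty. All coefficients are illustrative assumptions.
AMBIENT = 25.0
TARGET_LOW, TARGET_HIGH = 28.0, 35.0    # assumed optimal battery band (deg C)
ACTIONS = [0.0, 0.5, 1.0]               # normalised coolant pump duty

def step(temp, action):
    heat_in = 0.8                        # waste-heat load (deg C / step)
    cooling = 1.2 * action               # pump effect (deg C / step)
    leak = 0.05 * (temp - AMBIENT)       # passive loss to ambient
    new_temp = temp + heat_in - cooling - leak
    # High-level objective: reward staying in the band, penalise pump energy.
    in_band = TARGET_LOW <= new_temp <= TARGET_HIGH
    reward = (1.0 if in_band else -abs(new_temp - 31.5)) - 0.1 * action
    return new_temp, reward

def bucket(temp):
    # Discretise the temperature into 20 bins for tabular learning.
    return max(0, min(19, int(temp - 20.0)))

def train(episodes=500, alpha=0.2, gamma=0.95, eps=0.1, seed=0):
    rng = random.Random(seed)
    q = [[0.0] * len(ACTIONS) for _ in range(20)]
    for _ in range(episodes):
        temp = 25.0
        for _ in range(50):
            s = bucket(temp)
            a = (rng.randrange(len(ACTIONS)) if rng.random() < eps
                 else max(range(len(ACTIONS)), key=lambda i: q[s][i]))
            temp, r = step(temp, ACTIONS[a])
            s2 = bucket(temp)
            q[s][a] += alpha * (r + gamma * max(q[s2]) - q[s][a])
    return q

q = train()

# Greedy rollout with the learned policy.
temp, temps = 25.0, []
for _ in range(50):
    s = bucket(temp)
    a = max(range(len(ACTIONS)), key=lambda i: q[s][i])
    temp, _ = step(temp, ACTIONS[a])
    temps.append(temp)
print(round(min(temps), 1), round(max(temps), 1))
```

The point of the sketch is the workflow the paper exploits: only the reward encodes the design intent, and the controller (here a tiny Q-table; in practice a deep RL policy) is learned rather than tuned by hand, so different cooling-system concepts can be compared on their achievable closed-loop performance.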
Pages: 11
Related papers
50 records in total
  • [31] Leveraging closed-loop orientation and leadership for environmental sustainability
    Defee, C. Clifford
    Esper, Terry
    Mollenkopf, Diane
    SUPPLY CHAIN MANAGEMENT-AN INTERNATIONAL JOURNAL, 2009, 14 (02) : 87 - 98
  • [32] Application of Reinforcement Learning to Electrical Power System Closed-Loop Emergency Control
    Druet, C.
    Ernst, D.
    Wehenkel, L.
    LECTURE NOTES IN COMPUTER SCIENCE, 2000, 1910 : 86 - 95
  • [33] Reinforcement Q-learning for Closed-loop Hypnosis Depth Control in Anesthesia
    Calvi, Giulia
    Manzoni, Eleonora
    Rampazzo, Mirco
    2022 30TH MEDITERRANEAN CONFERENCE ON CONTROL AND AUTOMATION (MED), 2022, : 164 - 169
  • [34] Closed-loop control of anesthesia and mean arterial pressure using reinforcement learning
    Padmanabhan, Regina
    Meskin, Nader
    Haddad, Wassim M.
    BIOMEDICAL SIGNAL PROCESSING AND CONTROL, 2015, 22 : 54 - 64
  • [35] Closed-Loop Control of Anesthesia and Mean Arterial Pressure Using Reinforcement Learning
    Padmanabhan, Regina
    Meskin, Nader
    Haddad, Wassim M.
    2014 IEEE SYMPOSIUM ON ADAPTIVE DYNAMIC PROGRAMMING AND REINFORCEMENT LEARNING (ADPRL), 2014, : 265 - 272
  • [36] A closed-loop algorithm to detect human face using color and reinforcement learning
    Wu Dong-hui
    Ye Xiu-qing
    Gu Wei-kang
    JOURNAL OF ZHEJIANG UNIVERSITY SCIENCE, 2002, (01) : 73 - 77
  • [37] A closed-loop algorithm to detect human face using color and reinforcement learning
    Wu Dong-hui
    Ye Xiu-qing
    Gu Wei-kang
    Journal of Zhejiang University-SCIENCE A, 2002, 3 (1): : 72 - 76
  • [38] Behavioral analysis of differential hebbian learning in closed-loop systems
    Kulvicius, Tomas
    Kolodziejski, Christoph
    Tamosiunaite, Minija
    Porr, Bernd
    Woergoetter, Florentin
    BIOLOGICAL CYBERNETICS, 2010, 103 (04) : 255 - 271
  • [39] Closed-Loop Dynamic Control of a Soft Manipulator Using Deep Reinforcement Learning
    Centurelli, Andrea
    Arleo, Luca
    Rizzo, Alessandro
    Tolu, Silvia
    Laschi, Cecilia
    Falotico, Egidio
    IEEE ROBOTICS AND AUTOMATION LETTERS, 2022, 7 (02) : 4741 - 4748
  • [40] Deep reinforcement learning for closed-loop blood glucose control: two approaches
    Di Felice, Francesco
    Borri, Alessandro
    Di Benedetto, Maria Domenica
    IFAC PAPERSONLINE, 2022, 55 (40): : 115 - 120