Deep Reinforcement Learning-Based Optimal Parameter Design of Power Converters

Cited by: 1
Authors
Bui, Van-Hai [1 ,4 ]
Chang, Fangyuan [1 ]
Su, Wencong [1 ]
Wang, Mengqi [1 ]
Murphey, Yi Lu [1 ]
Da Silva, Felipe Leno [2 ]
Huang, Can [2 ]
Xue, Lingxiao [3 ]
Glatt, Ruben [2 ]
Affiliations
[1] Univ Michigan Dearborn, Dept Elect & Comp Engn, Coll Engn & Comp Sci, Dearborn, MI 48128 USA
[2] Lawrence Livermore Natl Lab LLNL, Livermore, CA 94550 USA
[3] Oak Ridge Natl Lab ORNL, Oak Ridge, TN 37830 USA
[4] State Univ New York SUNY Maritime Coll, Dept Elect Engn, Throggs Neck, NY 10465 USA
Keywords
deep reinforcement learning; deep neural networks; optimal parameters design; optimization; power converters; OPTIMIZATION; FREQUENCY; PFC;
DOI
10.1109/ICNC57223.2023.10074355
CLC Classification Number
TP3 [Computing Technology, Computer Technology];
Discipline Classification Code
0812;
Abstract
The optimal design of power converters typically requires a lengthy process involving a huge number of simulations to determine the optimal parameters. To shorten the design cycle, this paper proposes a proximal policy optimization (PPO)-based model to optimize the design parameters of Buck and Boost converters. In each training step, the learning agent carries out an action that adjusts the values of the design parameters and interacts with a dynamic Simulink model. The simulation provides feedback on power efficiency and guides the learning agent in optimizing the parameter design. Unlike deep Q-learning and standard actor-critic algorithms, PPO includes a clipped objective function that prevents the new policy from deviating too far from the old policy. This allows the proposed model to accelerate and stabilize the learning process. Finally, to demonstrate the effectiveness of the proposed method, the performance of different optimization algorithms is compared on two popular power converter topologies.
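The clipping behavior described in the abstract can be illustrated with a minimal sketch of the standard PPO clipped surrogate, L = min(r·A, clip(r, 1-ε, 1+ε)·A), where r is the probability ratio between the new and old policies and A is the advantage estimate. The function name and the ε = 0.2 default below are illustrative assumptions, not taken from the paper:

```python
import numpy as np

def ppo_clipped_objective(ratio, advantage, epsilon=0.2):
    """PPO clipped surrogate: min(r * A, clip(r, 1 - eps, 1 + eps) * A)."""
    unclipped = ratio * advantage
    clipped = np.clip(ratio, 1.0 - epsilon, 1.0 + epsilon) * advantage
    # Taking the minimum removes the incentive to push the policy
    # ratio outside the trust interval [1 - eps, 1 + eps].
    return float(np.minimum(unclipped, clipped))

# Ratio has drifted far (r = 1.5) with positive advantage:
# the objective is capped at (1 + eps) * A = 2.4 instead of 3.0.
print(ppo_clipped_objective(1.5, 2.0))  # 2.4

# Ratio within the clip range (r = 0.9): objective is unclipped.
print(ppo_clipped_objective(0.9, 2.0))  # 1.8
```

Capping the surrogate in this way is what lets PPO take larger, more stable policy-update steps than a plain actor-critic method, which is the property the paper relies on to speed up convergence of the converter parameter search.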
Pages: 25 - 29
Page count: 5
Related Papers
50 items in total
  • [31] Deep reinforcement learning-based moving target defense method in computing power network
    Zhang T.
    Xu C.
    Lian Y.
    Kang J.
    Kuang X.
    Scientia Sinica Informationis, 2023, 53 (12) : 2372 - 2385
  • [32] Deep reinforcement learning-based network for optimized power flow in islanded DC microgrid
    Pandia Rajan Jeyaraj
    Siva Prakash Asokan
    Aravind Chellachi Kathiresan
    Edward Rajan Samuel Nadar
    Electrical Engineering, 2023, 105 : 2805 - 2816
  • [33] Mitigating cascading failure in power grids with deep reinforcement learning-based remedial actions
    Zhang, Xi
    Wang, Qin
    Bi, Xiaowen
    Li, Donghong
    Liu, Dong
    Yu, Yuanjin
    Tse, Chi Kong
    RELIABILITY ENGINEERING & SYSTEM SAFETY, 2024, 250
  • [34] Deep reinforcement learning-based robust missile guidance
    Ahn, Jeongsu
    Shin, Jongho
    Kim, Hyeong-Geun
    2022 22ND INTERNATIONAL CONFERENCE ON CONTROL, AUTOMATION AND SYSTEMS (ICCAS 2022), 2022, : 927 - 930
  • [35] A Deep Reinforcement Learning-Based Approach in Porker Game
    Kong, Yan
    Rui, Yefeng
    Hsia, Chih-Hsien
    Journal of Computers (Taiwan), 2023, 34 (02) : 41 - 51
  • [36] A Deep Reinforcement Learning-Based Framework for Content Caching
    Zhong, Chen
    Gursoy, M. Cenk
    Velipasalar, Senem
    2018 52ND ANNUAL CONFERENCE ON INFORMATION SCIENCES AND SYSTEMS (CISS), 2018,
  • [37] Deep Learning-Based Optimal Scheduling Scheme for Distributed Wind Power Systems
    Wang, Jing
    Wei, Xiongfei
    Fang, Yuanjie
    Zhang, Pinggai
    Juanatas, Ronaldo
    Caballero, Jonathan M.
    Niguidula, Jasmin D.
    JOURNAL OF CIRCUITS SYSTEMS AND COMPUTERS, 2024,
  • [38] Deep Reinforcement Learning-based Traffic Signal Control
    Ruan, Junyun
    Tang, Jinzhuo
    Gao, Ge
    Shi, Tianyu
    Khamis, Alaa
    2023 IEEE INTERNATIONAL CONFERENCE ON SMART MOBILITY, SM, 2023, : 21 - 26
  • [39] Deep reinforcement learning-based antilock braking algorithm
    Mantripragada, V. Krishna Teja
    Kumar, R. Krishna
    VEHICLE SYSTEM DYNAMICS, 2023, 61 (05) : 1410 - 1431
  • [40] Deep Reinforcement Learning-Based Defense Strategy Selection
    Charpentier, Axel
    Boulahia-Cuppens, Nora
    Cuppens, Frederic
    Yaich, Reda
    PROCEEDINGS OF THE 17TH INTERNATIONAL CONFERENCE ON AVAILABILITY, RELIABILITY AND SECURITY, ARES 2022, 2022,