Deep Reinforcement Learning-Based Optimal Parameter Design of Power Converters

Cited by: 1
Authors:
Bui, Van-Hai [1 ,4 ]
Chang, Fangyuan [1 ]
Su, Wencong [1 ]
Wang, Mengqi [1 ]
Murphey, Yi Lu [1 ]
Da Silva, Felipe Leno [2 ]
Huang, Can [2 ]
Xue, Lingxiao [3 ]
Glatt, Ruben [2 ]
Affiliations:
[1] Univ Michigan Dearborn, Dept Elect & Comp Engn, Coll Engn & Comp Sci, Dearborn, MI 48128 USA
[2] Lawrence Livermore Natl Lab LLNL, Livermore, CA 94550 USA
[3] Oak Ridge Natl Lab ORNL, Oak Ridge, TN 37830 USA
[4] State Univ New York SUNY Maritime Coll, Dept Elect Engn, Throggs Neck, NY 10465 USA
Keywords:
deep reinforcement learning; deep neural networks; optimal parameters design; optimization; power converters; OPTIMIZATION; FREQUENCY; PFC;
DOI:
10.1109/ICNC57223.2023.10074355
Chinese Library Classification (CLC): TP3 [Computing Technology, Computer Technology]
Discipline Code: 0812
Abstract:
The optimal design of power converters typically requires a long design cycle and a large number of simulations to determine the optimal parameters. To shorten this cycle, this paper proposes a proximal policy optimization (PPO)-based model to optimize the design parameters of Buck and Boost converters. In each training step, the learning agent takes an action that adjusts the values of the design parameters and interacts with a dynamic Simulink model. The simulation returns feedback on power efficiency, which guides the learning agent toward the optimal parameter design. Unlike deep Q-learning and standard actor-critic algorithms, PPO uses a clipped objective function that prevents the new policy from deviating too far from the old policy, which accelerates and stabilizes the learning process. Finally, to demonstrate the effectiveness of the proposed method, the performance of different optimization algorithms is compared on two popular power converter topologies.
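For reference, the clipped objective mentioned in the abstract follows the standard PPO surrogate objective of Schulman et al. (2017); the clipping threshold epsilon and the advantage estimator actually used in the paper are not given in this record, so the sketch below is the generic formulation rather than the authors' exact setup:

L^{\mathrm{CLIP}}(\theta) = \mathbb{E}_t\!\left[\min\!\Big(r_t(\theta)\,\hat{A}_t,\ \mathrm{clip}\big(r_t(\theta),\,1-\epsilon,\,1+\epsilon\big)\,\hat{A}_t\Big)\right], \qquad r_t(\theta) = \frac{\pi_\theta(a_t \mid s_t)}{\pi_{\theta_{\mathrm{old}}}(a_t \mid s_t)}

Here a_t is the agent's adjustment of the converter design parameters, s_t is the current design state, \hat{A}_t is the estimated advantage derived from the simulated power-efficiency feedback, and \epsilon bounds how far the updated policy ratio r_t(\theta) may move from 1, which is what stabilizes learning relative to unclipped actor-critic updates.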
Pages: 25-29 (5 pages)