Optimizing Discharge Efficiency of Reconfigurable Battery With Deep Reinforcement Learning

Cited by: 4
Authors:
Jeon, Seunghyeok [1]
Kim, Jiwon [1]
Ahn, Junick [1]
Cha, Hojung [1]
Affiliations:
[1] Yonsei Univ, Dept Comp Sci, Seoul 03722, South Korea
Funding:
National Research Foundation of Singapore;
Keywords:
Deep reinforcement learning (DRL); reconfigurable battery; switch control policy; CHARGE ESTIMATION; ION; STATE; MODELS; LIFE;
DOI:
10.1109/TCAD.2020.3012230
CLC Number:
TP3 [Computing Technology, Computer Technology];
Subject Classification Code:
0812;
Abstract:
Cell imbalance in a multicell battery develops over time as cells experience varying operating environments. This imbalance degrades the overall discharge efficiency of the battery because its relatively weak cells constrain the rest. Reconfiguring the cells in the battery is one way to address the problem, but the required switching circuits may raise severe safety issues. In this article, we aim to optimize the discharge efficiency of a multicell battery using safety-supplemented hardware. To this end, we first design a cell string-level reconfiguration scheme that is safe in hardware operation and, thanks to its low switching complexity, also scalable. Second, we propose a machine learning-based run-time switch control that considers various battery-related factors, such as the state of charge, state of health, temperature, and current distributions. Specifically, by exploiting deep reinforcement learning (DRL), we learn the complex relationship among these battery factors and derive the best switch configuration at run time. We implemented a hardware prototype, validated its functionality, and evaluated the efficacy of the DRL-based control policy. The experimental results showed that the proposed scheme, together with the optimization method, improves the discharge efficiency of multicell batteries. In particular, the discharge efficiency gain is largest when the cells constituting the battery are unevenly distributed in terms of health and exposed temperature.
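To make the run-time control loop concrete, the sketch below shows a switch policy being learned from battery state. It is only a minimal illustration in the spirit of the abstract, not the paper's method: it substitutes plain tabular Q-learning for DRL, and the two-string model, its efficiency dynamics, and every name in it (ToyBattery, the three-action switch set) are hypothetical assumptions.

import numpy as np

# Hypothetical two-string reconfigurable battery. The dynamics below are
# illustrative assumptions, not the paper's model: the weaker string
# (lower state of health) delivers less of the drawn charge.
class ToyBattery:
    def reset(self):
        self.soc = np.array([1.0, 0.8])      # state of charge per string
        self.health = np.array([1.0, 0.7])   # static state-of-health proxy
        return self._obs()

    def _obs(self):
        # Discretize each string's SoC into 10 bins -> 100 tabular states.
        bins = (self.soc * 9.999).astype(int)
        return int(bins[0] * 10 + bins[1])

    def step(self, action):
        # Switch configurations: 0 = string 0 only, 1 = string 1 only,
        # 2 = both strings in parallel (load split by health).
        load = 0.05
        if action == 2:
            drain = load * self.health / self.health.sum()
        else:
            drain = np.zeros(2)
            drain[action] = load
        delivered = float(np.sum(drain * self.health))  # reward: useful charge
        self.soc = np.clip(self.soc - drain, 0.0, 1.0)
        done = bool(np.any(self.soc <= 0.0))            # a depleted string ends the episode
        return self._obs(), delivered, done

def train(episodes=2000, alpha=0.1, gamma=0.95, eps=0.1, seed=0):
    rng = np.random.default_rng(seed)
    env = ToyBattery()
    q = np.zeros((100, 3))  # Q-table: 100 SoC states x 3 switch actions
    for _ in range(episodes):
        s, done = env.reset(), False
        while not done:
            # Epsilon-greedy exploration over switch configurations.
            a = int(rng.integers(3)) if rng.random() < eps else int(np.argmax(q[s]))
            s2, r, done = env.step(a)
            target = r + (0.0 if done else gamma * np.max(q[s2]))
            q[s, a] += alpha * (target - q[s, a])
            s = s2
    return q

if __name__ == "__main__":
    q = train()
    env = ToyBattery()
    s = env.reset()
    print("Greedy switch action in the initial state:", int(np.argmax(q[s])))

In the actual system described above, the state would additionally carry temperature and the current distribution, the actions would enumerate the string-level switch configurations of the proposed hardware, and a deep network would replace the Q-table to cope with the continuous, high-dimensional state.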
Pages: 3893-3905 (13 pages)
Related Papers (50 in total)
  • [21] Park, Junyoung; Baek, Jiwoo; Song, Yujae. Optimizing smart city planning: A deep reinforcement learning framework. ICT EXPRESS, 2025, 11(01): 129-134.
  • [22] Xu, Guanghao; Lee, Hyunjung; Koo, Myoung-Wan; Seo, Jungyun. Optimizing Policy via Deep Reinforcement Learning for Dialogue Management. 2018 IEEE INTERNATIONAL CONFERENCE ON BIG DATA AND SMART COMPUTING (BIGCOMP), 2018: 582-589.
  • [23] Murakami, Kenki; Kubo, Wakana. Optimizing broadband metamaterial absorber using deep reinforcement learning. APPLIED PHYSICS EXPRESS, 2023, 16(08).
  • [24] Soykan, Bulent; Rabadia, Ghaith. Optimizing Humanitarian Logistics with Deep Reinforcement Learning and Digital Twins. 2024 ANNUAL MODELING AND SIMULATION CONFERENCE (ANNSIM 2024), 2024.
  • [25] Wu, Jing; Tao, Ran; Zhao, Pan; Martin, Nicolas F.; Hovakimyan, Naira. Optimizing Nitrogen Management with Deep Reinforcement Learning and Crop Simulations. 2022 IEEE/CVF CONFERENCE ON COMPUTER VISION AND PATTERN RECOGNITION WORKSHOPS (CVPRW 2022), 2022: 1711-1719.
  • [26] Yan, Qiyao; Song, Rui; Kim, Kap-Hwan; Wang, Yan; Feng, Xuehao. Optimizing container relocation operations by using deep reinforcement learning. MARITIME POLICY & MANAGEMENT, 2024.
  • [27] Xiong, Zheng; Luo, Biao; Wang, Bing-Chuan; Xu, Xiaodong; Huang, Tingwen. Multiobjective Battery Charging Strategy Based on Deep Reinforcement Learning. IEEE TRANSACTIONS ON TRANSPORTATION ELECTRIFICATION, 2024, 10(03): 6893-6903.
  • [28] Singh, Nitish; Akcay, Alp; Dang, Quang-Vinh; Martagan, Tugce; Adan, Ivo. Dispatching AGVs with battery constraints using deep reinforcement learning. COMPUTERS & INDUSTRIAL ENGINEERING, 2024, 187.
  • [29] Gschwind, M.; Kaldewey, T.; Tam, D. K. Optimizing the efficiency of deep learning through accelerator virtualization. IBM JOURNAL OF RESEARCH AND DEVELOPMENT, 2017, 61(4-5).
  • [30] Unagar, Ajaykumar; Tian, Yuan; Chao, Manuel Arias; Fink, Olga. Learning to Calibrate Battery Models in Real-Time with Deep Reinforcement Learning. ENERGIES, 2021, 14(05).