ENHANCING SAMPLE EFFICIENCY FOR TEMPERATURE CONTROL IN DED WITH REINFORCEMENT LEARNING AND MOOSE FRAMEWORK

Cited by: 0
Authors
Sousa, Joao [1 ,4 ,5 ]
Darabi, Roya [1 ,2 ]
Sousa, Armando [2 ,3 ]
Reis, Luis P. [2 ,4 ]
Brueckner, Frank [5 ]
Reis, Ana [1 ,2 ]
de Sa, Jose Cesar [1 ]
Affiliations
[1] Inst Sci & Innovat Mech & Ind Engn INEGI, Porto, Portugal
[2] Univ Porto, Fac Engn, Porto, Portugal
[3] INESC Technol & Sci INESC TEC, Porto, Portugal
[4] Artificial Intelligence & Comp Sci Lab LIACC, Porto, Portugal
[5] Fraunhofer IWS, Dresden, Germany
Source
PROCEEDINGS OF ASME 2023 INTERNATIONAL MECHANICAL ENGINEERING CONGRESS AND EXPOSITION, IMECE2023, VOL 3, 2023
Keywords
MOOSE; DED; Reinforcement Learning; Model-Based; Q-learning; Dyna-Q;
DOI
Not available
Chinese Library Classification
TP39 [Computer Applications]
Discipline Classification Codes
081203; 0835
Abstract
Directed Energy Deposition (DED) is a key additive manufacturing process for industries such as aerospace, automotive, and biomedical. Precise temperature control is essential because of the high-power lasers involved and the dynamically changing process environment. Reinforcement Learning (RL) can assist with temperature control, but challenges arise from a lack of standardization and from poor sample efficiency. In this study, a model-based Reinforcement Learning (MBRL) approach is used to train a controller on a DED model, improving both control performance and sample efficiency. Computational models evaluate the melt pool geometry and its temporal characteristics during the process. The study employs the Allen-Cahn phase field (AC-PF) model, solved with the Finite Element Method (FEM) in the Multiphysics Object-Oriented Simulation Environment (MOOSE). The MBRL algorithm, specifically Dyna-Q+, outperforms traditional Q-learning while requiring fewer samples. Insights from this research aid in advancing RL techniques for laser metal additive manufacturing.
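The Dyna-Q+ idea named in the abstract can be illustrated with a minimal tabular sketch. This is a hypothetical toy example, not the paper's implementation: the 1-D chain environment stands in for the DED simulation, and all parameter values are illustrative. Each real transition updates a Q-table and a learned model; extra planning steps then replay modeled transitions with an exploration bonus kappa*sqrt(tau) for long-untried state-action pairs, which is what lets Dyna-Q+ learn from fewer real samples than plain Q-learning.

```python
import math
import random

def dyna_q_plus(n_states=6, n_episodes=200, n_planning=10,
                alpha=0.1, gamma=0.95, epsilon=0.1, kappa=1e-3, seed=0):
    """Tabular Dyna-Q+ on a toy 1-D chain: actions move left/right,
    reward 1.0 at the rightmost state. Real steps update Q and a
    deterministic model; n_planning simulated steps replay modeled
    transitions with bonus kappa*sqrt(tau), where tau counts time
    steps since (s, a) was last tried in the real environment."""
    rng = random.Random(seed)
    actions = (-1, +1)
    Q = {(s, a): 0.0 for s in range(n_states) for a in actions}
    model = {}                   # (s, a) -> (reward, next_state)
    last_tried = {(s, a): 0 for s in range(n_states) for a in actions}
    t = 0

    def greedy(s):
        # greedy action with random tie-breaking
        best = max(Q[(s, a)] for a in actions)
        return rng.choice([a for a in actions if Q[(s, a)] == best])

    for _ in range(n_episodes):
        s = 0
        while s != n_states - 1:
            t += 1
            a = rng.choice(actions) if rng.random() < epsilon else greedy(s)
            s2 = min(max(s + a, 0), n_states - 1)
            r = 1.0 if s2 == n_states - 1 else 0.0
            # direct RL update from the real transition
            Q[(s, a)] += alpha * (r + gamma * max(Q[(s2, b)] for b in actions)
                                  - Q[(s, a)])
            model[(s, a)] = (r, s2)
            last_tried[(s, a)] = t
            # planning: replay simulated transitions from the learned model,
            # adding the Dyna-Q+ exploration bonus for stale pairs
            for _ in range(n_planning):
                (ps, pa), (pr, ps2) = rng.choice(list(model.items()))
                bonus = kappa * math.sqrt(t - last_tried[(ps, pa)])
                Q[(ps, pa)] += alpha * (pr + bonus
                                        + gamma * max(Q[(ps2, b)] for b in actions)
                                        - Q[(ps, pa)])
            s = s2
    return Q

Q = dyna_q_plus()
print(Q[(4, +1)], Q[(4, -1)])   # the action toward the goal should dominate
```

The planning loop is what distinguishes this from plain Q-learning: each real sample is reused many times through the model, and the staleness bonus keeps the agent revisiting transitions whose dynamics may have changed.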
Pages: 17
Related papers (50 total)
  • [21] Szita, I.; Lorincz, A. Kalman filter control embedded into the reinforcement learning framework. NEURAL COMPUTATION, 2004, 16(3): 491-499.
  • [22] Zhang, Yujia; Li, Lin; Wei, Wei; Lv, Yunpeng; Liang, Jiye. A unified framework to control estimation error in reinforcement learning. NEURAL NETWORKS, 2024, 178.
  • [23] Kain, Verena; Hirlander, Simon; Goddard, Brennan; Velotti, Francesco Maria; Porta, Giovanni Zevi Della; Bruchon, Niky; Valentino, Gianluca. Sample-efficient reinforcement learning for CERN accelerator control. PHYSICAL REVIEW ACCELERATORS AND BEAMS, 2020, 23(12).
  • [24] Wang, Xiaoyang; Ye, Xiufen. Consciousness-driven reinforcement learning: An online learning control framework. INTERNATIONAL JOURNAL OF INTELLIGENT SYSTEMS, 2022, 37(1): 770-798.
  • [25] Chen, Z. A Unified Lyapunov Framework for Finite-Sample Analysis of Reinforcement Learning Algorithms. Performance Evaluation Review, 2023, 50(3): 12-15.
  • [26] Ullah, Zahid; Yun, Nakyeong; Rossi, Ruggero; Son, Moon. Reinforcement Learning and Machine Learning Controllers for Enhancing Water Quality and Process Efficiency in Electrochemical Desalination. ACS ES&T WATER, 2024, 4(12): 5482-5491.
  • [27] Andersen, Per-Arne; Goodwin, Morten; Granmo, Ole-Christoffer. Increasing sample efficiency in deep reinforcement learning using generative environment modelling. EXPERT SYSTEMS, 2021, 38(7).
  • [28] Silvestri, Alberto; Coraci, Davide; Brandi, Silvio; Capozzoli, Alfonso; Borkowski, Esther; Kohler, Johannes; Wu, Duan; Zeilinger, Melanie N.; Schlueter, Arno. Real building implementation of a deep reinforcement learning controller to enhance energy efficiency and indoor temperature control. APPLIED ENERGY, 2024, 368.
  • [29] Yarats, Denis; Zhang, Amy; Kostrikov, Ilya; Amos, Brandon; Pineau, Joelle; Fergus, Rob. Improving Sample Efficiency in Model-Free Reinforcement Learning from Images. THIRTY-FIFTH AAAI CONFERENCE ON ARTIFICIAL INTELLIGENCE, THIRTY-THIRD CONFERENCE ON INNOVATIVE APPLICATIONS OF ARTIFICIAL INTELLIGENCE AND THE ELEVENTH SYMPOSIUM ON EDUCATIONAL ADVANCES IN ARTIFICIAL INTELLIGENCE, 2021, 35: 10674-10681.
  • [30] Luu, Tung M.; Thanh Nguyen; Thang Vu; Yoo, Chang D. Utilizing Skipped Frames in Action Repeats for Improving Sample Efficiency in Reinforcement Learning. IEEE ACCESS, 2022, 10: 64965-64975.