ENHANCING SAMPLE EFFICIENCY FOR TEMPERATURE CONTROL IN DED WITH REINFORCEMENT LEARNING AND MOOSE FRAMEWORK

Times Cited: 0
Authors
Sousa, Joao [1 ,4 ,5 ]
Darabi, Roya [1 ,2 ]
Sousa, Armando [2 ,3 ]
Reis, Luis P. [2 ,4 ]
Brueckner, Frank [5 ]
Reis, Ana [1 ,2 ]
de Sa, Jose Cesar [1 ]
Affiliations
[1] Inst Sci & Innovat Mech & Ind Engn INEGI, Porto, Portugal
[2] Univ Porto, Fac Engn, Porto, Portugal
[3] INESC Technol & Sci INESC TEC, Porto, Portugal
[4] Artificial Intelligence & Comp Sci Lab LIACC, Porto, Portugal
[5] Fraunhofer IWS, Dresden, Germany
Source
PROCEEDINGS OF ASME 2023 INTERNATIONAL MECHANICAL ENGINEERING CONGRESS AND EXPOSITION, IMECE2023, VOL 3 | 2023
Keywords
MOOSE; DED; Reinforcement Learning; Model-Based; Q-learning; Dyna-Q;
DOI
Not available
Chinese Library Classification (CLC)
TP39 [Computer Applications];
Subject Classification Codes
081203; 0835;
Abstract
Directed Energy Deposition (DED) is a key additive manufacturing process for industries such as aerospace, automotive, and biomedical. Precise temperature control is essential because of the high-power lasers involved and the dynamically changing process conditions. Reinforcement Learning (RL) can support temperature control, but challenges arise from the lack of standardization and from poor sample efficiency. In this study, a model-based Reinforcement Learning (MBRL) approach is used to train an agent on a DED process model, improving both control and sample efficiency. Computational models evaluate the melt pool geometry and its temporal characteristics during the process. The study employs an Allen-Cahn phase field (AC-PF) model solved with the Finite Element Method (FEM) in the Multiphysics Object-Oriented Simulation Environment (MOOSE). MBRL, specifically Dyna-Q+, outperforms traditional Q-learning, requiring fewer samples. Insights from this research aid in advancing RL techniques for laser metal additive manufacturing.
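Since the record's keywords single out Q-learning and Dyna-Q, a minimal tabular Dyna-Q+ sketch is given below as a reference point for the sample-efficiency comparison described in the abstract. It is illustrative only: the class name, hyperparameter values, and the idea of discretized melt-pool temperature states and laser-power actions are assumptions rather than details taken from the paper, and the planning model here is a simple table of last-seen transitions, not the MOOSE AC-PF simulation the authors use.

```python
import numpy as np
from collections import defaultdict

class DynaQPlus:
    """Tabular Dyna-Q+ (hypothetical sketch): one-step Q-learning from real
    experience, plus planning sweeps over a learned transition table with an
    exploration bonus kappa*sqrt(tau) for state-action pairs not tried for
    tau real steps (standard Dyna-Q+ formulation)."""

    def __init__(self, n_actions, alpha=0.1, gamma=0.95,
                 epsilon=0.1, kappa=1e-3, planning_steps=20):
        # All hyperparameter values below are illustrative assumptions.
        self.n_actions = n_actions
        self.alpha, self.gamma = alpha, gamma
        self.epsilon, self.kappa = epsilon, kappa
        self.planning_steps = planning_steps
        self.Q = defaultdict(lambda: np.zeros(n_actions))  # state -> action values
        self.model = {}       # (s, a) -> (r, s'): last observed transition
        self.last_visit = {}  # (s, a) -> real time step of last visit
        self.t = 0

    def act(self, state):
        # epsilon-greedy action selection over current value estimates
        if np.random.rand() < self.epsilon:
            return int(np.random.randint(self.n_actions))
        return int(np.argmax(self.Q[state]))

    def update(self, s, a, r, s_next):
        self.t += 1
        # (1) Direct RL: one-step Q-learning update from the real transition.
        target = r + self.gamma * np.max(self.Q[s_next])
        self.Q[s][a] += self.alpha * (target - self.Q[s][a])
        # (2) Model learning: remember the most recent outcome of (s, a).
        self.model[(s, a)] = (r, s_next)
        self.last_visit[(s, a)] = self.t
        # (3) Planning: replay simulated transitions; the sqrt(tau) bonus
        # encourages revisiting pairs the agent has not tried in a while.
        # (Simplification: planning is restricted to previously observed pairs.)
        pairs = list(self.model.keys())
        for _ in range(self.planning_steps):
            sp, ap = pairs[np.random.randint(len(pairs))]
            rp, sp_next = self.model[(sp, ap)]
            tau = self.t - self.last_visit[(sp, ap)]
            bonus = self.kappa * np.sqrt(tau)
            target = rp + bonus + self.gamma * np.max(self.Q[sp_next])
            self.Q[sp][ap] += self.alpha * (target - self.Q[sp][ap])
```

In the paper's setting, the real transitions would come from the DED process or its FEM surrogate, and a richer learned or simulated model could replace the last-seen-transition table; treating states as discretized melt-pool temperature readings and actions as discrete laser-power levels is likewise an assumed encoding used only for illustration.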
Pages: 17