Multi-Objective Optimization of Cascade Blade Profile Based on Reinforcement Learning

Cited: 22
Authors
Qin, Sheng [1 ]
Wang, Shuyue [1 ]
Wang, Liyue [1 ]
Wang, Cong [1 ]
Sun, Gang [1 ]
Zhong, Yongjian [2 ]
Affiliations
[1] Fudan Univ, Dept Aeronaut & Astronaut, Shanghai 200433, Peoples R China
[2] AECC Commercial Aircraft Engine Co Ltd, Shanghai 200241, Peoples R China
Source
APPLIED SCIENCES-BASEL | 2021, Vol. 11, No. 1
Keywords
reinforcement learning; multi-objective optimization; DDPG; cascade blade; turbomachinery; SHAPE OPTIMIZATION; GENETIC ALGORITHM; DESIGN OPTIMIZATION; NEURAL-NETWORK; COMPRESSOR; TURBULENCE; MODEL; TRANSITION; AIRFOILS;
DOI
10.3390/app11010106
CLC Number
O6 [Chemistry]
Subject Classification Code
0703
Abstract
Multi-objective optimization of compressor cascade rotor blades is important in aero engine design. Many conventional approaches have been proposed, but they lack a methodology for using existing design data and experience to guide the actual design. As a result, conventional methods consume large computational resources because they require large numbers of stochastic cases to determine the optimization direction in the design space of the problem. This paper proposes a reinforcement learning method as a new approach to compressor blade multi-objective optimization. Using Deep Deterministic Policy Gradient (DDPG), the approach modifies the blade profile as an intelligent designer acting according to a design policy: it learns cascade blade design experience as knowledge accumulated through interaction with a computation-based environment, and the design policy is updated accordingly. The accumulated computational data are thereby transformed into design experience and policies that are applied directly to cascade optimization, so that well-performing profiles can be approached. In a case study provided in this paper, the proposed approach is applied to a blade profile, which is optimized in terms of total pressure loss and laminar flow area. Compared with the initial profile, the total pressure loss coefficient is reduced by 3.59%, and the relative laminar flow area on the suction surface is improved by 25.4%.
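The paper itself does not include source code. As a rough illustration of the workflow the abstract describes, the sketch below wires a standard DDPG actor-critic loop (PyTorch) to a hypothetical BladeCascadeEnv whose state is a blade-profile parameterization, whose action perturbs the profile, and whose reward scalarizes the two objectives (total pressure loss and laminar flow area). The environment class, network sizes, reward weights, and hyperparameters are all assumptions for demonstration, not the authors' implementation.

```python
# Minimal DDPG sketch for multi-objective blade-profile optimization (illustrative only).
# BladeCascadeEnv, the reward weights, and all hyperparameters are assumed, not taken
# from the paper.
import copy
import random
from collections import deque

import numpy as np
import torch
import torch.nn as nn

STATE_DIM = 12     # assumed: blade-profile parameterization (e.g., control-point offsets)
ACTION_DIM = 12    # assumed: per-parameter modification applied to the profile
ACTION_SCALE = 0.01

class Actor(nn.Module):
    """Design policy: maps the current profile parameters to a profile modification."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(STATE_DIM, 128), nn.ReLU(),
            nn.Linear(128, 128), nn.ReLU(),
            nn.Linear(128, ACTION_DIM), nn.Tanh())
    def forward(self, s):
        return ACTION_SCALE * self.net(s)

class Critic(nn.Module):
    """Q-function: estimates long-term design reward of a (profile, modification) pair."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(STATE_DIM + ACTION_DIM, 128), nn.ReLU(),
            nn.Linear(128, 128), nn.ReLU(),
            nn.Linear(128, 1))
    def forward(self, s, a):
        return self.net(torch.cat([s, a], dim=-1))

def multi_objective_reward(loss_coeff, laminar_area, w_loss=0.7, w_lam=0.3):
    # Scalarized reward: lower total pressure loss and larger laminar flow area are better.
    return -w_loss * loss_coeff + w_lam * laminar_area

def soft_update(target, source, tau=0.005):
    for tp, sp in zip(target.parameters(), source.parameters()):
        tp.data.mul_(1.0 - tau).add_(tau * sp.data)

def train(env, episodes=200, steps=50, gamma=0.99, batch_size=64):
    actor, critic = Actor(), Critic()
    actor_t, critic_t = copy.deepcopy(actor), copy.deepcopy(critic)
    opt_a = torch.optim.Adam(actor.parameters(), lr=1e-4)
    opt_c = torch.optim.Adam(critic.parameters(), lr=1e-3)
    buffer = deque(maxlen=100_000)

    for _ep in range(episodes):
        s = env.reset()                                   # initial blade-profile parameters
        for _ in range(steps):
            with torch.no_grad():
                a = actor(torch.as_tensor(s, dtype=torch.float32)).numpy()
            a = a + 0.1 * ACTION_SCALE * np.random.randn(ACTION_DIM)   # exploration noise
            s2, (loss_coeff, lam_area), done = env.step(a)             # CFD evaluation (assumed API)
            r = multi_objective_reward(loss_coeff, lam_area)
            buffer.append((s, a, r, s2, float(done)))
            s = s2

            if len(buffer) >= batch_size:
                batch = random.sample(buffer, batch_size)
                S, A, R, S2, D = (torch.as_tensor(np.array(x), dtype=torch.float32)
                                  for x in zip(*batch))
                with torch.no_grad():
                    y = R.unsqueeze(1) + gamma * (1 - D.unsqueeze(1)) * critic_t(S2, actor_t(S2))
                critic_loss = nn.functional.mse_loss(critic(S, A), y)
                opt_c.zero_grad(); critic_loss.backward(); opt_c.step()

                actor_loss = -critic(S, actor(S)).mean()
                opt_a.zero_grad(); actor_loss.backward(); opt_a.step()

                soft_update(critic_t, critic); soft_update(actor_t, actor)
            if done:
                break
    return actor
```

In practice each env.step call would wrap a CFD evaluation of the modified cascade profile, which dominates the cost of every interaction; the replay buffer is what allows that computational data to be reused as accumulated design experience, in the spirit described in the abstract.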
Pages: 1-27
Number of Pages: 27
Related Papers
50 records in total
  • [31] Deep reinforcement learning for multi-objective combinatorial optimization: A case study on multi-objective traveling salesman problem
    Li, Shicheng
    Wang, Feng
    He, Qi
    Wang, Xujie
    [J]. SWARM AND EVOLUTIONARY COMPUTATION, 2023, 83
  • [32] Multi-objective optimization in fixed-outline floorplanning with reinforcement learning
    Jiang, Zhongjie
    Li, Zhiqiang
    Yao, Zhenjie
    [J]. COMPUTERS AND ELECTRICAL ENGINEERING, 2024, 120
  • [33] EMORL: Effective multi-objective reinforcement learning method for hyperparameter optimization
    Chen, SenPeng
    Wu, Jia
    Liu, XiYuan
    [J]. ENGINEERING APPLICATIONS OF ARTIFICIAL INTELLIGENCE, 2021, 104
  • [34] Multi-Objective Deep Reinforcement Learning for Crowd Route Guidance Optimization
    Nishida, Ryo
    Tanigaki, Yuki
    Onishi, Masaki
    Hashimoto, Koichi
    [J]. TRANSPORTATION RESEARCH RECORD, 2024, 2678 (05) : 617 - 633
  • [35] Optimization of Fiber Radiation Processes Using Multi-Objective Reinforcement Learning
    Choi, Hye Kyung
    Lee, Whan
    Sajadieh, Seyed Mohammad Mehdi
    Do Noh, Sang
    Sim, Seung Bum
    Jung, Wu chang
    Jeong, Jeong Ho
    [J]. INTERNATIONAL JOURNAL OF PRECISION ENGINEERING AND MANUFACTURING-GREEN TECHNOLOGY, 2024,
  • [36] Multi-objective optimization of incidence features for cascade
    [J]. 2017, Beijing University of Aeronautics and Astronautics (BUAA), (32)
  • [37] Multi-Objective Reinforcement Learning Based on Decomposition: A Taxonomy and Framework
    Felten, Florian
    Talbi, El-Ghazali
    Danoy, Gregoire
    [J]. JOURNAL OF ARTIFICIAL INTELLIGENCE RESEARCH, 2024, 79 : 679 - 723
  • [38] Virtual machine placement based on multi-objective reinforcement learning
    Yao Qin
    Hua Wang
    Shanwen Yi
    Xiaole Li
    Linbo Zhai
    [J]. Applied Intelligence, 2020, 50 : 2370 - 2383
  • [39] Multi-objective path planning based on deep reinforcement learning
    Xu, Jian
    Huang, Fei
    Cui, Yunfei
    Du, Xue
    [J]. 2022 41ST CHINESE CONTROL CONFERENCE (CCC), 2022, : 3273 - 3279
  • [40] An XCS-based Algorithm for Multi-Objective Reinforcement Learning
    Cheng, Xiu
    Chen, Gang
    Zhang, Mengjie
    [J]. 2016 IEEE CONGRESS ON EVOLUTIONARY COMPUTATION (CEC), 2016, : 4007 - 4014