Multi-Objective Deep Q-Network Control for Actively Lubricated Bearings

Cited by: 0
Authors
Shutin, Denis [1 ]
Kazakov, Yuri [1 ]
Affiliations
[1] Orel State Univ, Dept Mechatron Mech & Robot, Oryol 302015, Russia
Funding
Russian Science Foundation
Keywords
active bearings; rotor dynamics; active lubrication; multi-objective control; reinforcement learning; DQN; artificial intelligence; artificial neural networks; reducing friction; reducing power losses; PART I; FRICTION; PERFORMANCE; STABILITY; DESIGN; ROTOR;
DOI
10.3390/lubricants12070242
CLC classification number
TH [machinery and instrument industry];
Subject classification code
0802
Abstract
This paper aims to study and demonstrate the possibilities of using reinforcement learning to synthesize multi-objective controllers for radial actively lubricated hybrid fluid film bearings (ALHBs), which are considered complex multi-physical systems. In addition to the rotor displacement control problem typically solved for active bearings, the proposed approach also includes the power losses due to friction and lubricant pumping in ALHBs among the control objectives to be minimized by optimizing the lubrication modes. The multi-objective controller was synthesized using the deep Q-network (DQN) learning technique. An optimal control policy was determined by the DQN agent during its repeated interaction with the simulation model of the rotor system with ALHBs. The calculations were sped up by replacing the numerical model of an ALHB with a surrogate ANN-based counterpart and by predicting the shaft displacements in response to the operation of two independent control loops. The controller synthesized with the formulated reward function for the DQN agent is able to find a stable shaft position that reduces power losses by almost half compared to those observed in a passive system. It is also able to keep the fluid film thickness from falling below the established minimum limit, avoiding possible system damage, for example, when the rotor becomes unbalanced during operation. Analysis of the development process and of the results obtained allowed us to draw conclusions about the main advantages and disadvantages of the considered approach and to identify some important directions for further research.
Pages: 20
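
The abstract above describes a DQN agent whose reward jointly penalizes friction and pumping power losses while enforcing a hard limit on the minimum fluid film thickness. Below is a minimal, hypothetical sketch of that kind of reward shaping together with a standard DQN temporal-difference update; it is not the authors' implementation, and all names, network sizes, weights, and constants are assumptions made purely for illustration.

```python
# Illustrative sketch only: assumed reward shaping and a generic DQN update,
# not the controller described in the paper.
import torch
import torch.nn as nn

# Assumed constants (placeholders, not values from the paper).
FILM_THICKNESS_LIMIT = 10e-6   # assumed minimum allowable film thickness, m
LOSS_WEIGHT = 1.0              # assumed weight on friction + pumping losses
DISPLACEMENT_WEIGHT = 1.0      # assumed weight on rotor displacement
VIOLATION_PENALTY = -100.0     # assumed penalty for violating the film limit


def reward(displacement, friction_loss, pumping_loss, min_film_thickness):
    """Hypothetical multi-objective reward: penalize total power loss and rotor
    displacement, and strongly penalize any film-thickness limit violation."""
    if min_film_thickness < FILM_THICKNESS_LIMIT:
        return VIOLATION_PENALTY
    return -(LOSS_WEIGHT * (friction_loss + pumping_loss)
             + DISPLACEMENT_WEIGHT * displacement)


class QNet(nn.Module):
    """Small MLP approximating Q(s, a) over a discrete set of control actions."""
    def __init__(self, n_states=4, n_actions=9):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(n_states, 64), nn.ReLU(),
            nn.Linear(64, 64), nn.ReLU(),
            nn.Linear(64, n_actions),
        )

    def forward(self, s):
        return self.net(s)


def dqn_step(qnet, target_net, optimizer, batch, gamma=0.99):
    """One TD update on a replay-buffer minibatch (s, a, r, s_next):
    the standard DQN target, not the paper's exact training recipe."""
    s, a, r, s_next = batch
    q_sa = qnet(s).gather(1, a.unsqueeze(1)).squeeze(1)
    with torch.no_grad():
        target = r + gamma * target_net(s_next).max(dim=1).values
    loss = nn.functional.mse_loss(q_sa, target)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()


if __name__ == "__main__":
    qnet, tgt = QNet(), QNet()
    tgt.load_state_dict(qnet.state_dict())
    opt = torch.optim.Adam(qnet.parameters(), lr=1e-3)
    # Fake minibatch standing in for samples collected from a surrogate model.
    s = torch.randn(32, 4)
    a = torch.randint(0, 9, (32,))
    r = torch.randn(32)
    s_next = torch.randn(32, 4)
    print("TD loss:", dqn_step(qnet, tgt, opt, (s, a, r, s_next)))
```

In the paper's setting, the transition samples would come from the surrogate ANN-based bearing model rather than the fake tensors used above, and the action set would correspond to the two independent lubrication control loops.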