Proactive Maintenance Model Using Reinforcement Learning Algorithm in Rubber Industry

Cited: 8
Authors
Senthil, Chandran [1 ]
Sudhakara Pandian, Ranjitharamasamy [2 ]
Affiliations
[1] Anna Univ, SACS MAVMM Engn Coll, Dept Mech Engn, Madurai 625301, Tamil Nadu, India
[2] Vellore Inst Technol, Sch Mech Engn, Vellore 632014, Tamil Nadu, India
Keywords
reinforcement learning algorithm; preventive maintenance; overall equipment efficiency; availability
DOI
10.3390/pr10020371
CLC classification
TQ [Chemical Industry]
Subject classification code
0817
Abstract
This paper presents an investigation into enhancing the availability of a curing machine deployed in the rubber industry, located in Tamil Nadu, India. Machine maintenance is a major task in the rubber industry because of the demand for its products. Identifying the critical components of the curing machine is necessary to prevent rapid failures and the subsequent repairs that extend its downtime. The reward in the Reinforcement Learning Algorithm (RLA) prevents frequent downtime by improving the availability of the curing machine at times when unscheduled, long-duration maintenance caused by the unexpected failure of a critical component would interfere with operation. Over time, depreciation and degradation of machine components are unavoidable, as the present investigation shows through an intelligent assessment of component lifespans. So far, no effective methodology has been implemented in a real-time maintenance environment. The RLA appears more effective when it is based on an intelligent assessment that encompasses the failure and repair rates used to calculate availability in an automated environment. The RLA is trained to evaluate overall equipment efficiency (OEE) in terms of availability. The availability of a curing machine, expressed as state probabilities, is modeled with first-order differential-difference equations, and the RLA maximizes the availability of the machine. The preventive maintenance (PM) rate for four modules of 16 curing machines is represented in a transition diagram using transition rates, which capture the PM and unplanned maintenance rates that define the total availability of the four modules. OEE is expressed in terms of the availability of the curing machines, which is related to performance and quality. The results obtained by the RLA are promising: the short-term and long-term OEE values are 95.19% and 83.37%, respectively.
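
The record does not reproduce the paper's equations. As a minimal sketch, assuming the standard single-unit Markov availability formulation that the abstract's first-order differential-difference equations and OEE definition point to (the rates lambda and mu and the OEE factorization below are textbook forms, not values or notation taken from the paper):

```latex
% State probabilities of an up state P_0(t) and a down state P_1(t)
% for a unit with failure rate \lambda and repair rate \mu (illustrative form).
\begin{align}
  \frac{dP_0(t)}{dt} &= -\lambda\, P_0(t) + \mu\, P_1(t), \\
  \frac{dP_1(t)}{dt} &= \lambda\, P_0(t) - \mu\, P_1(t), \qquad P_0(t) + P_1(t) = 1.
\end{align}
% Steady-state availability and the usual OEE factorization:
\begin{align}
  A_\infty &= \lim_{t\to\infty} P_0(t) = \frac{\mu}{\lambda + \mu}, \\
  \mathrm{OEE} &= \text{Availability} \times \text{Performance} \times \text{Quality}.
\end{align}
```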
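The record likewise gives no implementation details of the RLA itself. Purely as an illustrative sketch of the kind of reward-driven maintenance policy the abstract describes, the tabular Q-learning toy below (the states, failure probabilities, and cost values are invented for illustration and are not the authors' model) learns when a short planned PM stop is worth taking to avoid long unplanned breakdown downtime:

```python
import random

# Hypothetical coarse degradation states and maintenance actions; the numbers
# below are illustrative stand-ins, not parameters from the paper.
ACTIONS = ("run", "preventive_maintenance")
FAIL_PROB = {"healthy": 0.05, "degraded": 0.30}   # breakdown chance per running step
UPTIME_REWARD = 1.0      # reward for a productive step (availability proxy)
PM_COST = 0.3            # short planned stoppage
BREAKDOWN_COST = 5.0     # long unplanned corrective repair


def step(state, action):
    """One decision epoch; a breakdown is repaired within the epoch at high cost."""
    if action == "preventive_maintenance":
        return "healthy", UPTIME_REWARD - PM_COST      # planned stop restores the module
    if random.random() < FAIL_PROB[state]:
        return "healthy", -BREAKDOWN_COST              # unplanned, long repair
    # a surviving running step may degrade the module further
    return ("degraded" if random.random() < 0.2 else state), UPTIME_REWARD


def train(episodes=2000, horizon=50, alpha=0.1, gamma=0.95, eps=0.1):
    """Tabular Q-learning over the two-state, two-action maintenance problem."""
    q = {(s, a): 0.0 for s in ("healthy", "degraded") for a in ACTIONS}
    for _ in range(episodes):
        state = "healthy"
        for _ in range(horizon):
            if random.random() < eps:                  # epsilon-greedy exploration
                action = random.choice(ACTIONS)
            else:
                action = max(ACTIONS, key=lambda a: q[(state, a)])
            nxt, reward = step(state, action)
            best_next = max(q[(nxt, a)] for a in ACTIONS)
            q[(state, action)] += alpha * (reward + gamma * best_next - q[(state, action)])
            state = nxt
    return q


if __name__ == "__main__":
    q = train()
    for s in ("healthy", "degraded"):
        best = max(ACTIONS, key=lambda a: q[(s, a)])
        print(f"state={s:9s} -> learned action: {best}")
```

With these toy numbers the learned policy typically keeps running while the module is healthy and schedules preventive maintenance once degradation is detected, mirroring the availability-maximizing reward described in the abstract.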
Pages: 18
Related papers (50 in total; entries [31]-[40] shown)
  • [31] RLNN: A force perception algorithm using reinforcement learning
    Zhao, Yangyang
    Zheng, Qingchun
    [J]. MULTIMEDIA TOOLS AND APPLICATIONS, 2024, 83 (21) : 60103 - 60115
  • [32] Quantum adiabatic algorithm design using reinforcement learning
    Lin, Jian
    Lai, Zhong Yuan
    Li, Xiaopeng
    [J]. PHYSICAL REVIEW A, 2020, 101 (05)
  • [33] Motor learning model using reinforcement learning with neural internal model
    Izawa, J
    Kondo, T
    Ito, K
    [J]. 2003 IEEE INTERNATIONAL CONFERENCE ON ROBOTICS AND AUTOMATION, VOLS 1-3, PROCEEDINGS, 2003, : 3146 - 3151
  • [34] ABRAHAM: Machine Learning Backed Proactive Handover Algorithm Using SDN
    Zeljkovic, Ensar
    Slamnik-Krijestorac, Nina
    Latre, Steven
    Marquez-Barja, Johann M.
    [J]. IEEE TRANSACTIONS ON NETWORK AND SERVICE MANAGEMENT, 2019, 16 (04): : 1522 - 1536
  • [35] Cooperative Proactive Eavesdropping Based on Deep Reinforcement Learning
    Yang, Yaxin
    Li, Baogang
    Zhang, Shue
    Zhao, Wei
    Zhang, Haijun
    [J]. IEEE WIRELESS COMMUNICATIONS LETTERS, 2021, 10 (09) : 1857 - 1861
  • [36] A Centralized Reinforcement Learning Approach for Proactive Scheduling in Manufacturing
    Qu, Shuhui
    Chu, Tianshu
    Wang, Jie
    Leckie, James
    Jian, Weiwen
    [J]. PROCEEDINGS OF 2015 IEEE 20TH CONFERENCE ON EMERGING TECHNOLOGIES & FACTORY AUTOMATION (ETFA), 2015,
  • [37] Optimal Dynamic Proactive Caching via Reinforcement Learning
    Sadeghi, Alireza
    Sheikholeslami, Fatemeh
    Giannakis, Georgios B.
    [J]. 2018 IEEE 19TH INTERNATIONAL WORKSHOP ON SIGNAL PROCESSING ADVANCES IN WIRELESS COMMUNICATIONS (SPAWC), 2018, : 66 - 70
  • [38] Proactive Handover Decision for UAVs with Deep Reinforcement Learning
    Jang, Younghoon
    Raza, Syed M.
    Kim, Moonseong
    Choo, Hyunseung
    [J]. SENSORS, 2022, 22 (03)
  • [39] GNOSIS: Proactive Image Placement Using Graph Neural Networks & Deep Reinforcement Learning
    Theodoropoulos, Theodoros
    Makris, Antonios
    Psomakelis, Evangelos
    Carlini, Emanuele
    Mordacchini, Matteo
    Dazzi, Patrizio
    Tserpes, Konstantinos
    [J]. 2023 IEEE 16TH INTERNATIONAL CONFERENCE ON CLOUD COMPUTING, CLOUD, 2023, : 120 - 128
  • [40] Autonomous order dispatching in the semiconductor industry using reinforcement learning
    Kuhnle, Andreas
    Roehrig, Nicole
    Lanza, Gisela
    [J]. 12TH CIRP CONFERENCE ON INTELLIGENT COMPUTATION IN MANUFACTURING ENGINEERING, 2019, 79 : 391 - 396