Application of the Nadaraya-Watson estimator based attention mechanism to the field of predictive maintenance

Cited by: 2
Authors
Siraskar, Rajesh [1 ,2 ]
Kumar, Satish [1 ,3 ]
Patil, Shruti [1 ,3 ]
Bongale, Arunkumar [1 ]
Kotecha, Ketan [1 ,3 ,4 ]
Affiliations
[1] Symbiosis Int Deemed Univ, Symbiosis Inst Technol, Pune 412115, Maharashtra, India
[2] Birlasoft Ltd, CTO Off, Pune 411057, Maharashtra, India
[3] Symbiosis Int Deemed Univ, Symbiosis Ctr Appl Artificial Intelligence, Pune 412115, Maharashtra, India
[4] Peoples Friendship Univ Russia, 6 Miklukho Maklaya Str, Moscow 117198, Russia
Keywords
Predictive maintenance; Attention mechanism; Nadaraya-Watson; Edge computing
DOI
10.1016/j.mex.2024.102754
Chinese Library Classification
O [Mathematical Sciences and Chemistry]; P [Astronomy and Earth Sciences]; Q [Biological Sciences]; N [General Natural Sciences]
Subject Classification Codes
07; 0710; 09
Abstract
The attention mechanism has recently gained immense importance in the natural language processing (NLP) world. This technique highlights the parts of the input text that an NLP task (such as translation) must pay "attention" to. Inspired by this, some researchers have recently applied deep-learning based attention mechanism techniques from the NLP domain to predictive maintenance. In contrast to deep-learning based solutions, Industry 4.0 predictive maintenance solutions often rely on edge computing and therefore demand lighter predictive models. With this objective, we have investigated the adaptation of a simpler, incredibly fast and compute-resource friendly "Nadaraya-Watson estimator based" attention method. We develop a method to predict tool wear of a milling machine using this attention mechanism and demonstrate, with the help of heat-maps, how the attention mechanism highlights regions that assist in predicting the onset of tool wear. We validate the effectiveness of this adaptation on the benchmark IEEE DataPort PHM Society dataset by comparing against other comparatively "lighter" machine learning techniques: Bayesian Ridge, Gradient Boosting Regressor, SGD Regressor and Support Vector Regressor. Our experiments indicate that the proposed Nadaraya-Watson attention mechanism performed best, with an MAE of 0.069, RMSE of 0.099 and R² of 83.40%, compared to the next best technique, Gradient Boosting Regressor, with figures of 0.100, 0.138 and 66.51% respectively. Additionally, it produced a lighter and faster model as well.
• We propose a Nadaraya-Watson estimator based "attention mechanism", applied to a predictive maintenance problem.
• Unlike the deep-learning based attention mechanisms from the NLP domain, our method creates fast, light and high-performance models suitable for edge computing devices, thereby supporting the Industry 4.0 initiative.
• Method validated on real tool-wear data from a milling machine.
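For readers unfamiliar with the technique the abstract names, the following is a minimal NumPy sketch of Nadaraya-Watson kernel regression recast as an attention mechanism, assuming scalar inputs and a Gaussian kernel; the function name, variable names and synthetic tool-wear data are illustrative assumptions, not taken from the paper.

```python
import numpy as np

def nadaraya_watson_attention(queries, keys, values, bandwidth=1.0):
    """Nadaraya-Watson kernel regression viewed as attention.

    Each query attends to every key through a Gaussian kernel; the
    prediction is the attention-weighted average of the values.
    """
    # Pairwise squared distances, shape (n_queries, n_keys)
    d2 = (queries[:, None] - keys[None, :]) ** 2
    # Negative scaled distances as attention scores; softmax over keys
    # reproduces the normalised Nadaraya-Watson kernel weights
    scores = -d2 / (2.0 * bandwidth ** 2)
    weights = np.exp(scores - scores.max(axis=1, keepdims=True))
    weights /= weights.sum(axis=1, keepdims=True)
    # Weighted average of values = the kernel regression estimate
    return weights @ values, weights

# Toy demonstration with synthetic "tool-wear" data (hypothetical):
rng = np.random.default_rng(0)
t_train = np.sort(rng.uniform(0, 10, 50))                   # machining time (keys)
wear_train = 0.05 * t_train**1.5 + rng.normal(0, 0.1, 50)   # observed wear (values)
t_test = np.linspace(0, 10, 200)                            # query points

pred, attn = nadaraya_watson_attention(t_test, t_train, wear_train, bandwidth=0.5)
```

The `attn` matrix (each row sums to 1) is the kind of quantity the paper visualises as heat-maps, and the bandwidth is the only parameter to tune, which is why such a model stays light enough for edge devices.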
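The baseline comparison described in the abstract can be reproduced in outline with scikit-learn. A minimal sketch, assuming scikit-learn's standard estimators and metrics; the synthetic features and train/test split are stand-ins for the IEEE DataPort PHM tool-wear data, which is not reproduced here.

```python
import numpy as np
from sklearn.linear_model import BayesianRidge, SGDRegressor
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.svm import SVR
from sklearn.metrics import mean_absolute_error, mean_squared_error, r2_score
from sklearn.model_selection import train_test_split

# Synthetic stand-in for the PHM tool-wear features/target (hypothetical)
rng = np.random.default_rng(1)
X = rng.normal(size=(500, 6))                           # e.g. force/vibration features
y = X @ rng.normal(size=6) + rng.normal(0, 0.1, 500)    # wear target
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

# The four "lighter" baselines named in the abstract
baselines = {
    "Bayesian Ridge": BayesianRidge(),
    "Gradient Boosting Regressor": GradientBoostingRegressor(random_state=0),
    "SGD Regressor": SGDRegressor(random_state=0),
    "Support Vector Regressor": SVR(),
}
for name, model in baselines.items():
    model.fit(X_tr, y_tr)
    pred = model.predict(X_te)
    mae = mean_absolute_error(y_te, pred)
    rmse = np.sqrt(mean_squared_error(y_te, pred))
    r2 = r2_score(y_te, pred)
    print(f"{name}: MAE={mae:.3f}  RMSE={rmse:.3f}  R2={r2:.2%}")
```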
Pages: 18