Collaborative Multi-Teacher Distillation for Multi-Task Fault Detection in Power Distribution Grid

Cited: 0
Authors
Huang, Bingzheng [1 ]
Ni, Chengxin [1 ]
Song, Junjie [1 ]
Yin, Yifan [1 ]
Chen, Ningjiang [1 ]
Institutions
[1] Guangxi Univ, Sch Comp & Elect Informat, Nanning, Peoples R China
Funding
National Natural Science Foundation of China
Keywords
knowledge distillation; multi-task collaboration; deep learning; fault detection;
DOI
10.1109/CSCWD61410.2024.10580632
Chinese Library Classification
TP39 [Computer Applications]
Subject Classification Codes
081203; 0835
Abstract
Against the background of increasingly complex fault detection scenarios and diversified data in the power distribution grid, deploying high-performance fault detection models for efficient multi-task collaborative processing on lightweight edge intelligence devices remains a significant challenge. We introduce a collaborative multi-teacher distillation method for multi-task fault detection in the power distribution grid (CMT-KD), which enhances multi-task fault detection by redefining the Residual Network (ResNet). We further achieve model compression and acceleration through weight quantization, while improving the performance of the multi-task fault detector by dynamically adapting the knowledge weights assigned to multiple teachers. The proposed method detects multiple power distribution grid faults with a single model, significantly reducing the number of model parameters, floating-point operations, and training time, while also improving accuracy and reducing prediction error.
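The abstract mentions dynamically adapting the knowledge weights from multiple teachers. The following is a minimal PyTorch sketch of one way such an adaptive multi-teacher distillation loss could be written; the function name, the agreement-based weighting scheme, and the hyperparameters temperature and alpha are illustrative assumptions, not the authors' actual implementation.

import torch
import torch.nn.functional as F


def multi_teacher_kd_loss(student_logits, teacher_logits_list, labels,
                          temperature=4.0, alpha=0.5):
    """Combine a hard-label loss with a soft-label distillation loss whose
    per-teacher weights are adapted from each teacher's fit to the labels."""
    # Hard-label (task) loss for the student.
    ce_loss = F.cross_entropy(student_logits, labels)

    # Dynamic teacher weights: teachers with lower loss on the current batch
    # receive larger weights (softmax over negative per-teacher losses).
    teacher_losses = torch.stack([
        F.cross_entropy(t_logits, labels) for t_logits in teacher_logits_list
    ])
    teacher_weights = F.softmax(-teacher_losses, dim=0)

    # Weighted soft-label loss: KL divergence at temperature T, scaled by T^2.
    log_p_student = F.log_softmax(student_logits / temperature, dim=1)
    kd_loss = 0.0
    for w, t_logits in zip(teacher_weights, teacher_logits_list):
        p_teacher = F.softmax(t_logits / temperature, dim=1)
        kd_loss = kd_loss + w * F.kl_div(
            log_p_student, p_teacher, reduction="batchmean") * temperature ** 2

    return alpha * ce_loss + (1.0 - alpha) * kd_loss

In use, the student's logits and each teacher's logits for the same batch would be passed in; the weighting is recomputed every batch, so the contribution of each teacher changes dynamically during training.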
Pages: 2638-2643
Page count: 6
Related Papers
50 records in total
  • [21] Multi-teacher knowledge distillation for debiasing recommendation with uniform data
    Yang, Xinxin
    Li, Xinwei
    Liu, Zhen
    Yuan, Yafan
    Wang, Yannan
    EXPERT SYSTEMS WITH APPLICATIONS, 2025, 273
  • [22] mKDNAD: A network flow anomaly detection method based on multi-teacher knowledge distillation
    Yang, Yang
    Liu, Dan
    2022 16TH IEEE INTERNATIONAL CONFERENCE ON SIGNAL PROCESSING (ICSP2022), VOL 1, 2022, : 314 - 319
  • [23] ATMKD: adaptive temperature guided multi-teacher knowledge distillation
    Lin, Yu-e
    Yin, Shuting
    Ding, Yifeng
    Liang, Xingzhu
    MULTIMEDIA SYSTEMS, 2024, 30 (05)
  • [24] Multi-Teacher Distillation With Single Model for Neural Machine Translation
    Liang, Xiaobo
    Wu, Lijun
    Li, Juntao
    Qin, Tao
    Zhang, Min
    Liu, Tie-Yan
    IEEE-ACM TRANSACTIONS ON AUDIO SPEECH AND LANGUAGE PROCESSING, 2022, 30 : 992 - 1002
  • [25] Reinforced Multi-teacher Knowledge Distillation for Unsupervised Sentence Representation
    Wang, Xintao
    Jin, Rize
    Qi, Shibo
    ARTIFICIAL NEURAL NETWORKS AND MACHINE LEARNING-ICANN 2024, PT VII, 2024, 15022 : 320 - 332
  • [26] Learning Lightweight Object Detectors via Multi-Teacher Progressive Distillation
    Cao, Shengcao
    Li, Mengtian
    Hays, James
    Ramanan, Deva
    Wang, Yu-Xiong
    Gui, Liang-Yan
    INTERNATIONAL CONFERENCE ON MACHINE LEARNING, VOL 202, 2023, 202
  • [27] MTKD: Multi-Teacher Knowledge Distillation for Image Super-Resolution
    Jiang, Yuxuan
    Feng, Chen
    Zhang, Fan
    Bull, David
    COMPUTER VISION - ECCV 2024, PT XXXIX, 2025, 15097 : 364 - 382
  • [28] Collaborative Multi-Teacher Knowledge Distillation for Learning Low Bit-width Deep Neural Networks
    Cuong Pham
    Tuan Hoang
    Thanh-Toan Do
    2023 IEEE/CVF WINTER CONFERENCE ON APPLICATIONS OF COMPUTER VISION (WACV), 2023, : 6424 - 6432
  • [29] Multi-task learning for collaborative filtering
    Lianjie Long
    Faliang Huang
    Yunfei Yin
    Youquan Xu
    International Journal of Machine Learning and Cybernetics, 2022, 13 : 1355 - 1368
  • [30] Neural multi-task collaborative filtering
    Wang, SuHua
    Cheng, MingJun
    Ma, ZhiQiang
    Sun, XiaoXin
    EVOLUTIONARY INTELLIGENCE, 2022, 15 (04) : 2385 - 2393