Facial landmark points detection using knowledge distillation-based neural networks

Cited by: 14
Authors
Fard, Ali Pourramezan [1 ]
Mahoor, Mohammad H. [1 ]
Affiliations
[1] Univ Denver, Dept Elect & Comp Engn, 2155 E Wesley Ave, Denver, CO 80208 USA
Keywords
Deep learning; Face alignment; Facial landmark points detection; Knowledge distillation
DOI
10.1016/j.cviu.2021.103316
Chinese Library Classification (CLC)
TP18 [Artificial Intelligence Theory]
Subject Classification Codes
081104; 0812; 0835; 1405
Abstract
Facial landmark detection is a vital step in numerous facial image analysis applications. Although some deep learning-based methods have achieved good performance on this task, they are often unsuitable for running on mobile devices. Such methods rely on networks with many parameters, which makes both training and inference time-consuming. Training lightweight neural networks such as MobileNets is often challenging, and the resulting models may have low accuracy. Inspired by knowledge distillation (KD), this paper presents a novel loss function to train a lightweight Student network (e.g., MobileNetV2) for facial landmark detection. We use two Teacher networks, a Tolerant-Teacher and a Tough-Teacher, in conjunction with the Student network. The Tolerant-Teacher is trained using Soft-landmarks created by active shape models, while the Tough-Teacher is trained using the ground-truth landmark points (aka Hard-landmarks). To utilize the facial landmark points predicted by the Teacher networks, we define an Assistive Loss (ALoss) for each Teacher network. Moreover, we define a loss function called KD-Loss that utilizes the facial landmark points predicted by the two pre-trained Teacher networks (EfficientNet-b3) to guide the lightweight Student network towards predicting the Hard-landmarks. Our experimental results on three challenging facial datasets show that the proposed architecture results in a better-trained Student network that can extract facial landmark points with high accuracy.
Pages: 12
Related Papers
50 items in total
  • [31] FedMEKT: Distillation-based embedding knowledge transfer for multimodal federated learning
    Le, Huy Q.
    Nguyen, Minh N. H.
    Thwal, Chu Myaet
    Qiao, Yu
    Zhang, Chaoning
    Hong, Choong Seon
    NEURAL NETWORKS, 2025, 183
  • [32] Minifying photometric stereo via knowledge distillation-based feature translation
    Han, Seungoh
    Park, Jinsun
    Cho, Donghyeon
    OPTICS EXPRESS, 2022, 30 (21) : 38284 - 38297
  • [33] Knowledge distillation-based domain generalization enabling invariant feature distributions for damage detection of rotating machines and structures
    Wang, Xiaoyou
    Jiao, Jinyang
    Zhou, Xiaoqing
    Xia, Yong
    RELIABILITY ENGINEERING & SYSTEM SAFETY, 2025, 257
  • [34] Facial Smile Detection Using Convolutional Neural Networks
    Dinh Viet Sang
    Le Tran Bao Cuong
    Do Phan Thuan
    2017 9TH INTERNATIONAL CONFERENCE ON KNOWLEDGE AND SYSTEMS ENGINEERING (KSE 2017), 2017, : 136 - 141
  • [35] Knowledge distillation-based deep learning classification network for peripheral blood leukocytes
    Leng, Bing
    Leng, Min
    Ge, Mingfeng
    Dong, Wenfei
    BIOMEDICAL SIGNAL PROCESSING AND CONTROL, 2022, 75
  • [36] Knowledge Distillation-Based Zero-Shot Learning for Process Fault Diagnosis
    Liu, Yi
    Huang, Jiajun
    Jia, Mingwei
ADVANCED INTELLIGENT SYSTEMS, 2024
  • [37] Correction to: Robust facial landmark extraction scheme using multiple convolutional neural networks
    Hyungjoon Kim
    Jisoo Park
    HyeonWoo Kim
    Eenjun Hwang
    Seungmin Rho
    Multimedia Tools and Applications, 2019, 78 : 3239 - 3239
  • [38] Knowledge distillation-based performance transferring for LSTM-RNN model acceleration
    Ma, Hongbin
    Yang, Shuyuan
    Wu, Ruowu
    Hao, Xiaojun
    Long, Huimin
    He, Guangjun
    SIGNAL IMAGE AND VIDEO PROCESSING, 2022, 16 (06) : 1541 - 1548
  • [39] Knowledge Distillation-Based Robust UAV Swarm Communication Under Malicious Attacks
    Wu, Qirui
    Zhang, Yirun
    Yang, Zhaohui
    Shikh-Bahaei, Mohammad
    2024 IEEE INTERNATIONAL CONFERENCE ON COMMUNICATIONS WORKSHOPS, ICC WORKSHOPS 2024, 2024, : 1023 - 1029