Facial landmark points detection using knowledge distillation-based neural networks

Cited by: 14
Authors
Fard, Ali Pourramezan [1 ]
Mahoor, Mohammad H. [1 ]
Affiliations
[1] Univ Denver, Dept Elect & Comp Engn, 2155 E Wesley Ave, Denver, CO 80208 USA
Keywords
Deep learning; Face alignment; Facial landmark points detection; Knowledge distillation
DOI
10.1016/j.cviu.2021.103316
Chinese Library Classification (CLC)
TP18 [Artificial Intelligence Theory]
Subject Classification Codes
081104; 0812; 0835; 1405
Abstract
Facial landmark detection is a vital step in numerous facial image analysis applications. Although some deep learning-based methods achieve good performance on this task, they are often unsuitable for running on mobile devices: they rely on networks with many parameters, which makes both training and inference time-consuming. Training lightweight neural networks such as MobileNets is often challenging, and the resulting models may have low accuracy. Inspired by knowledge distillation (KD), this paper presents a novel loss function to train a lightweight Student network (e.g., MobileNetV2) for facial landmark detection. We use two Teacher networks, a Tolerant-Teacher and a Tough-Teacher, in conjunction with the Student network. The Tolerant-Teacher is trained using Soft-landmarks created by active shape models, while the Tough-Teacher is trained using the ground-truth landmark points (aka Hard-landmarks). To utilize the facial landmark points predicted by the Teacher networks, we define an Assistive Loss (ALoss) for each Teacher network. Moreover, we define a loss function called KD-Loss that uses the facial landmark points predicted by the two pre-trained Teacher networks (EfficientNet-B3) to guide the lightweight Student network towards predicting the Hard-landmarks. Our experimental results on three challenging facial datasets show that the proposed architecture yields a better-trained Student network that can extract facial landmark points with high accuracy.
Pages: 12
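The abstract describes a two-Teacher distillation scheme but does not give the exact form of KD-Loss here; the snippet below is only a minimal PyTorch-style illustration of the idea, assuming a simple weighted sum of a ground-truth regression term and one Assistive Loss (ALoss) per frozen Teacher. The function name kd_loss, the use of L1 distance, and the weights alpha_tough / alpha_tolerant are hypothetical, not taken from the paper.

```python
import torch
import torch.nn.functional as F

def kd_loss(student_pts, hard_pts, tough_teacher_pts, tolerant_teacher_pts,
            alpha_tough=0.5, alpha_tolerant=0.5):
    """Hypothetical sketch of a two-Teacher landmark distillation loss.

    student_pts, tough_teacher_pts, tolerant_teacher_pts:
        (batch, 2 * n_landmarks) predicted x/y coordinates.
    hard_pts: ground-truth (Hard-landmark) coordinates.
    alpha_tough / alpha_tolerant: assumed weighting hyperparameters.
    """
    # Main regression term: Student predictions vs. the Hard-landmarks.
    main_loss = F.l1_loss(student_pts, hard_pts)

    # Assistive Loss (ALoss) terms: Student vs. each pre-trained Teacher.
    # The Teachers are frozen, so their outputs are detached and treated
    # as constant targets; gradients flow only through the Student.
    aloss_tough = F.l1_loss(student_pts, tough_teacher_pts.detach())
    aloss_tolerant = F.l1_loss(student_pts, tolerant_teacher_pts.detach())

    return main_loss + alpha_tough * aloss_tough + alpha_tolerant * aloss_tolerant
```

Detaching the Teacher outputs mirrors the usual KD setup in which the pre-trained Teachers stay frozen and only guide the Student; the actual term weighting and distance metric in the paper may differ.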