Fine-Tuning of Pre-Trained Deep Face Sketch Models Using Smart Switching Slime Mold Algorithm

Cited: 0
|
Authors
Alhashash, Khaled Mohammad [1 ]
Samma, Hussein [2 ]
Suandi, Shahrel Azmin [1 ]
Affiliations
[1] Univ Sains Malaysia, Sch Elect & Elect Engn, Intelligent Biometr Grp, USM Engn Campus, Nibong Tebal 14300, Penang, Malaysia
[2] King Fahd Univ Petr & Minerals, SDAIA KFUPM Joint Res Ctr Artificial Intelligence, Dhahran 31261, Saudi Arabia
Source
APPLIED SCIENCES-BASEL | 2023, Vol. 13, Issue 8
Keywords
deep face sketch recognition; slime mold algorithm; fine-tuning; RECOGNITION;
DOI
10.3390/app13085102
Chinese Library Classification
O6 [Chemistry];
Discipline Code
0703;
Abstract
Many pre-trained deep learning-based face recognition models have been developed in the literature, such as FaceNet, ArcFace, VGG-Face, and DeepFace. However, transfer learning of these models for face sketch recognition is not directly applicable because sketch datasets are severely limited (a single sketch per subject). One promising way to mitigate this issue is to use optimization algorithms that fine-tune and fit these models to the face sketch problem. Specifically, this research introduces an enhanced optimizer that evolves these models by automatically weighting/fine-tuning the generated feature vector, guided by the recognition accuracy on the training data. The key contributions of this work are as follows: (i) it introduces a novel Smart Switching Slime Mold Algorithm (S²SMA), improved by embedding several search operations and control rules; (ii) the proposed S²SMA fine-tunes pre-trained deep learning models to improve face sketch recognition accuracy; and (iii) the proposed S²SMA performs simultaneous fine-tuning of multiple pre-trained deep learning models to further improve recognition accuracy on the face sketch problem. The performance of S²SMA was evaluated on two face sketch databases, XM2VTS and CUFSF, and on the CEC 2010 large-scale benchmark. In addition, the outcomes were compared with several variants of the SMA and with related optimization techniques. The numerical results demonstrate that the improved optimizer achieves higher fitness values as well as better face sketch recognition accuracy. The statistical analysis shows that S²SMA significantly outperforms the other optimization techniques, with a rapid convergence curve.
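The abstract describes weighting the feature vectors produced by pre-trained face models, with recognition accuracy on the training data serving as the fitness function for a slime-mould-style optimizer. The sketch below is a minimal, hypothetical illustration of that idea only, not the paper's S²SMA: the function names, the simplified population update, and the toy data are assumptions introduced for illustration.

```python
import numpy as np

def recognition_accuracy(w, sketch_feats, photo_feats, labels):
    """Fitness: fraction of sketches matched to the correct photo after
    element-wise weighting of the pre-trained feature vectors by w."""
    ws, wp = sketch_feats * w, photo_feats * w
    ws = ws / (np.linalg.norm(ws, axis=1, keepdims=True) + 1e-12)
    wp = wp / (np.linalg.norm(wp, axis=1, keepdims=True) + 1e-12)
    matches = np.argmax(ws @ wp.T, axis=1)          # cosine nearest neighbour
    return float(np.mean(labels[matches] == labels))

def sma_like_search(fitness, dim, pop=20, iters=50, lb=0.0, ub=2.0, seed=0):
    """Very simplified slime-mould-style population search over weight vectors
    (illustrative stand-in for the paper's optimizer, not S²SMA itself)."""
    rng = np.random.default_rng(seed)
    X = rng.uniform(lb, ub, size=(pop, dim))
    best, best_fit = X[0].copy(), -np.inf
    for t in range(iters):
        fits = np.array([fitness(x) for x in X])
        if fits.max() > best_fit:
            best_fit, best = fits.max(), X[np.argmax(fits)].copy()
        a = 1.0 - t / iters                          # shrinking step size
        for i in range(pop):
            partner = X[rng.integers(pop)]           # random population member
            step = (a * rng.uniform(-1, 1, dim) * (best - X[i])
                    + a * rng.uniform(-1, 1, dim) * (partner - X[i]))
            X[i] = np.clip(X[i] + step, lb, ub)
    return best, best_fit

# Toy usage with synthetic 128-D features standing in for a pre-trained model's output.
rng = np.random.default_rng(1)
sketches = rng.normal(size=(50, 128))
photos = sketches + 0.3 * rng.normal(size=(50, 128))
labels = np.arange(50)                               # one sketch per subject
fit = lambda w: recognition_accuracy(w, sketches, photos, labels)
best_w, best_acc = sma_like_search(fit, dim=128)
print("best training accuracy:", best_acc)
```

In this toy setting the optimizer only searches for a per-dimension weighting of fixed features; the paper's S²SMA additionally embeds switching rules between search operations and can tune multiple pre-trained models at once.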
Pages: 36
Related Papers
50 records in total
  • [1] Span Fine-tuning for Pre-trained Language Models
    Bao, Rongzhou
    Zhang, Zhuosheng
    Zhao, Hai
    FINDINGS OF THE ASSOCIATION FOR COMPUTATIONAL LINGUISTICS, EMNLP 2021, 2021, : 1970 - 1979
  • [2] Fine-Tuning Pre-Trained CodeBERT for Code Search in Smart Contract
    JIN Huan
    LI Qinying
    Wuhan University Journal of Natural Sciences, 2023, 28 (03) : 237 - 245
  • [3] Debiasing Pre-Trained Language Models via Efficient Fine-Tuning
    Gira, Michael
    Zhang, Ruisu
    Lee, Kangwook
    PROCEEDINGS OF THE SECOND WORKSHOP ON LANGUAGE TECHNOLOGY FOR EQUALITY, DIVERSITY AND INCLUSION (LTEDI 2022), 2022, : 59 - 69
  • [4] Exploiting Syntactic Information to Boost the Fine-tuning of Pre-trained Models
    Liu, Chaoming
    Zhu, Wenhao
    Zhang, Xiaoyu
    Zhai, Qiuhong
    2022 IEEE 46TH ANNUAL COMPUTERS, SOFTWARE, AND APPLICATIONS CONFERENCE (COMPSAC 2022), 2022, : 575 - 582
  • [5] Pruning Pre-trained Language Models Without Fine-Tuning
    Jiang, Ting
    Wang, Deqing
    Zhuang, Fuzhen
    Xie, Ruobing
    Xia, Feng
    PROCEEDINGS OF THE 61ST ANNUAL MEETING OF THE ASSOCIATION FOR COMPUTATIONAL LINGUISTICS, ACL 2023, VOL 1, 2023, : 594 - 605
  • [6] Pathologies of Pre-trained Language Models in Few-shot Fine-tuning
    Chen, Hanjie
    Zheng, Guoqing
    Awadallah, Ahmed Hassan
    Ji, Yangfeng
    PROCEEDINGS OF THE THIRD WORKSHOP ON INSIGHTS FROM NEGATIVE RESULTS IN NLP (INSIGHTS 2022), 2022, : 144 - 153
  • [7] Fine-tuning the hyperparameters of pre-trained models for solving multiclass classification problems
    Kaibassova, D.
    Nurtay, M.
    Tau, A.
    Kissina, M.
    COMPUTER OPTICS, 2022, 46 (06) : 971 - 979
  • [8] Revisiting k-NN for Fine-Tuning Pre-trained Language Models
    Li, Lei
    Chen, Jing
    Tian, Botzhong
    Zhang, Ningyu
    CHINESE COMPUTATIONAL LINGUISTICS, CCL 2023, 2023, 14232 : 327 - 338
  • [9] Fine-Tuning Pre-Trained Language Models Effectively by Optimizing Subnetworks Adaptively
    Zhang, Haojie
    Li, Ge
    Li, Jia
    Zhang, Zhongjin
    Zhu, Yuqi
    Jin, Zhi
    ADVANCES IN NEURAL INFORMATION PROCESSING SYSTEMS 35, NEURIPS 2022, 2022,
  • [10] An Empirical Study on Hyperparameter Optimization for Fine-Tuning Pre-trained Language Models
    Liu, Xueqing
    Wang, Chi
    59TH ANNUAL MEETING OF THE ASSOCIATION FOR COMPUTATIONAL LINGUISTICS AND THE 11TH INTERNATIONAL JOINT CONFERENCE ON NATURAL LANGUAGE PROCESSING, VOL 1 (ACL-IJCNLP 2021), 2021, : 2286 - 2300