Device adaptation free-KDA based on multi-teacher knowledge distillation

Cited by: 0
Authors
Yang, Yafang [1]
Guo, Bin [1]
Liang, Yunji [1]
Zhao, Kaixing [1]
Yu, Zhiwen [1]
Affiliations
[1] School of Computer Science, Northwestern Polytechnical University, 1 Dongxiang Road, Chang'an District, Xi'an, Shaanxi, 710072, China
Keywords
Authentication
DOI
10.1007/s12652-024-04836-5
Abstract
The keyboard, a major means of interaction between humans and internet devices, should be handled properly to achieve good performance in the authentication task. To allow a legitimate user to interact with two or more devices, alternately or simultaneously, while preserving user privacy, it is essential to build a device-adaptation free-text keystroke dynamics authentication (free-KDA) model based on multi-teacher knowledge distillation. Existing multi-teacher knowledge distillation methods have proven effective for C-way classification tasks, but they are unsuitable for free-KDA, which is a one-class classification task. Instead of using soft labels to transfer useful knowledge from source devices to the target device, we propose a device-adaptation free-KDA model. When a user builds an authentication model for a target device with limited training samples, we introduce a novel optimization objective that reduces the distance discrepancy, measured in both Euclidean distance and cosine similarity, between the source and target devices. We then adopt an adaptive confidence gate strategy to handle the per-user differences in correlation between each source device and the target device. The model is evaluated on two keystroke datasets collected with different types of keyboards and compared against the dominant multi-teacher knowledge distillation methods. Extensive experimental results demonstrate that the AUC on the target device reaches up to 95.17%, which is 15.28% higher than state-of-the-art multi-teacher knowledge distillation methods. © The Author(s), under exclusive licence to Springer-Verlag GmbH Germany, part of Springer Nature 2024.
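The abstract describes the transfer objective only at a high level. Below is a minimal, hypothetical PyTorch sketch of how such an objective could be assembled: a discrepancy term combining Euclidean distance and cosine dissimilarity between source-device and target-device keystroke embeddings, weighted per source device by a softmax confidence gate. The names (`distance_discrepancy`, `gated_transfer_loss`, `source_embs`, `target_emb`) and the softmax form of the gate are assumptions for illustration, not the authors' implementation.

```python
import torch
import torch.nn.functional as F

def distance_discrepancy(source_emb: torch.Tensor, target_emb: torch.Tensor) -> torch.Tensor:
    """Discrepancy between source- and target-device embeddings, combining
    Euclidean distance and cosine dissimilarity (assumed equal weighting)."""
    euclidean = F.pairwise_distance(source_emb, target_emb).mean()
    cosine = 1.0 - F.cosine_similarity(source_emb, target_emb, dim=-1).mean()
    return euclidean + cosine

def gated_transfer_loss(source_embs, target_emb, temperature=1.0):
    """Adaptive confidence gate (assumed softmax form): source devices whose
    embeddings are already closer to the target device get larger weights."""
    discrepancies = torch.stack(
        [distance_discrepancy(s, target_emb) for s in source_embs])
    # Gate weights are detached so they act as fixed mixing coefficients
    # for this step; smaller discrepancy -> larger gate weight.
    gates = F.softmax(-discrepancies.detach() / temperature, dim=0)
    return (gates * discrepancies).sum()

# Toy usage: 32 keystroke-feature embeddings of dimension 64 from two
# source devices and one target device (random placeholders).
src_a, src_b = torch.randn(32, 64), torch.randn(32, 64)
tgt = torch.randn(32, 64)
loss = gated_transfer_loss([src_a, src_b], tgt)
```

The softmax gate is only one plausible realization of the "adaptive confidence gate" named in the abstract; the paper itself may weight source devices differently.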
Pages: 3603-3615
Page count: 12
Related papers (50 in total)
  • [31] Yu, Lifang; Li, Yunwei; Weng, Shaowei; Tian, Huawei; Liu, Jing. Adaptive multi-teacher softened relational knowledge distillation framework for payload mismatch in image steganalysis. Journal of Visual Communication and Image Representation, 2023, 95.
  • [32] Hu, Wei; Xu, Qiaozhi; Qi, Xuanhao; Yin, Yanjun; Zhi, Min; Lian, Zhe; Yang, Na; Duan, Wentao; Yu, Lei. Unsupervised Domain Adaptation in Medical Image Segmentation via Fourier Feature Decoupling and Multi-teacher Distillation. Advanced Intelligent Computing Technology and Applications, Pt VI, ICIC 2024, 2024, 14867: 98-110.
  • [33] Cao, Shengcao; Li, Mengtian; Hays, James; Ramanan, Deva; Wang, Yu-Xiong; Gui, Liang-Yan. Learning Lightweight Object Detectors via Multi-Teacher Progressive Distillation. International Conference on Machine Learning, Vol 202, 2023, 202.
  • [34] Li, Xin; Ni, Rongrong; Zhao, Yao; Ni, Yu; Li, Haoliang. Multi-teacher Universal Distillation Based on Information Hiding for Defense Against Facial Manipulation. International Journal of Computer Vision, 2024: 5293-5307.
  • [35] Cuong Pham; Tuan Hoang; Thanh-Toan Do. Collaborative Multi-Teacher Knowledge Distillation for Learning Low Bit-width Deep Neural Networks. 2023 IEEE/CVF Winter Conference on Applications of Computer Vision (WACV), 2023: 6424-6432.
  • [36] Yang, Ze; Shou, Linjun; Gong, Ming; Lin, Wutao; Jiang, Daxin. Model Compression with Two-stage Multi-teacher Knowledge Distillation for Web Question Answering System. Proceedings of the 13th International Conference on Web Search and Data Mining (WSDM '20), 2020: 690-698.
  • [37] Zhang, Tianchi; Liu, Yuxuan; Mase, Atsushi. MTUW-GAN: A Multi-Teacher Knowledge Distillation Generative Adversarial Network for Underwater Image Enhancement. Applied Sciences-Basel, 2024, 14 (02).
  • [38] Li, Wanli; Qian, Tieyun; Li, Xuhui; Zou, Lixin. Adversarial Multi-Teacher Distillation for Semi-Supervised Relation Extraction. IEEE Transactions on Neural Networks and Learning Systems, 2024, 35 (08): 11291-11301.
  • [39] Ma, Zhe; Dong, Jianfeng; Ji, Shouling; Liu, Zhenguang; Zhang, Xuhong; Wang, Zonghui; He, Sifeng; Qian, Feng; Zhang, Xiaobo; Yang, Lei. Let All Be Whitened: Multi-Teacher Distillation for Efficient Visual Retrieval. Thirty-Eighth AAAI Conference on Artificial Intelligence, Vol 38 No 5, 2024: 4126-4135.
  • [40] Shang, Jiayu; Peng, Cheng; Ji, Yongxin; Guan, Jiaojiao; Cai, Dehan; Tang, Xubo; Sun, Yanni. Accurate and efficient protein embedding using multi-teacher distillation learning. Bioinformatics, 2024, 40 (09).