Pattern unlocking guided multi-modal continuous authentication for smartphone with multi-branch context-aware representation learning and auto encoder

Cited: 0
Authors
Yao, Muyan [1 ]
Jin, Zuodong [1 ]
Gao, Ruipeng [2 ]
Qi, Peng [1 ]
Tao, Dan [1 ]
Affiliations
[1] Beijing Jiaotong Univ, Sch Elect & Informat Engn, Beijing, Peoples R China
[2] Beijing Jiaotong Univ, Sch Software Engn, Beijing, Peoples R China
Keywords
Authentication;
DOI
10.1002/ett.4908
Chinese Library Classification
TN [Electronic technology, communication technology];
Discipline Classification Code
0809;
Abstract
Widely accepted explicit authentication protocols are vulnerable to a series of attacks, for example, shoulder surfing and smudge attacks, leaving users with the constant burden of periodic password changes. We therefore propose a novel framework for continuous authentication on smartphones. The approach is guided by pattern unlocking, which is widely used and incurs no learning cost. After collecting multi-modal data that describe both behavioral and contextual information, we employ a multi-branch context-aware attention network as the representation learner to extract features; an auto encoder is then used for authentication. To overcome challenges such as cold-start and few-shot training, which are less discussed in other works, we incorporate transfer learning with a coarse-to-fine pre-training workflow. Additionally, we deploy a hierarchical approach to offload model-tuning overhead from smartphones. Extensive experiments on more than 68 000 real-world recordings validate the effectiveness of the proposed method, with an equal error rate (EER) of 2.472% under mixed contexts, consistently outperforming state-of-the-art approaches under both static and mixed contexts. The integration of a context-aware representation learner and a self-supervised auto encoder improves continuous authentication performance. Transfer-learning-driven coarse-to-fine training addresses the cold-start/few-shot problem and accelerates practical deployment. A test bed with more than 68k real-world samples shows our work achieves 2.472% EER under mixed contexts, outperforming the state of the art.
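The abstract reports performance as an equal error rate (EER): the operating point where the false-accept rate equals the false-reject rate. As a minimal sketch (not the authors' code), the snippet below computes EER from two score distributions, assuming lower scores indicate a genuine user, as would be the case if the auto encoder's reconstruction error were used as the authentication score:

```python
import numpy as np

def equal_error_rate(genuine_scores, impostor_scores):
    """Find the threshold where false-accept rate (FAR) and
    false-reject rate (FRR) are closest, and return their mean.
    Convention: lower score = more likely genuine."""
    genuine_scores = np.asarray(genuine_scores, dtype=float)
    impostor_scores = np.asarray(impostor_scores, dtype=float)
    thresholds = np.sort(np.concatenate([genuine_scores, impostor_scores]))
    best_gap, eer = np.inf, 1.0
    for t in thresholds:
        frr = np.mean(genuine_scores > t)    # genuine users rejected
        far = np.mean(impostor_scores <= t)  # impostors accepted
        if abs(far - frr) < best_gap:
            best_gap, eer = abs(far - frr), (far + frr) / 2
    return eer
```

With well-separated score distributions the EER approaches zero; overlapping distributions push it toward 0.5 (chance level for this sweep).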
Pages: 18