Pattern unlocking guided multi-modal continuous authentication for smartphone with multi-branch context-aware representation learning and auto encoder

Cited: 0
Authors
Yao, Muyan [1 ]
Jin, Zuodong [1 ]
Gao, Ruipeng [2 ]
Qi, Peng [1 ]
Tao, Dan [1 ]
Affiliations
[1] Beijing Jiaotong Univ, Sch Elect & Informat Engn, Beijing, Peoples R China
[2] Beijing Jiaotong Univ, Sch Software Engn, Beijing, Peoples R China
Keywords
Authentication;
DOI
10.1002/ett.4908
Chinese Library Classification
TN [Electronic Technology, Communication Technology];
Discipline Code
0809 ;
Abstract
Widely accepted explicit authentication protocols are vulnerable to a series of attacks, such as shoulder surfing and smudge attacks, leaving users with the constant burden of periodic password changes. We therefore propose a novel framework for continuous authentication on smartphones. The approach is guided by pattern unlocking, which is widely used and incurs no learning cost. After collecting multi-modal data that describe both behavioral and contextual information, we employ a multi-branch context-aware attention network as the representation learner to perform feature extraction; an auto encoder is then used for authentication. To overcome challenges such as cold-start and few-shot training, which are less discussed in other works, we incorporate transfer learning with a coarse-to-fine pre-training workflow. Additionally, we deploy a hierarchical approach to offload model-tuning overhead from smartphones. Extensive experiments on more than 68 000 real-world recordings validate the effectiveness of the proposed method, which achieves an equal error rate (EER) of 2.472% under mixed contexts and consistently outperforms state-of-the-art approaches under both static and mixed contexts. The integration of a context-aware representation learner and a self-supervised auto encoder improves continuous authentication performance. Transfer-learning-driven coarse-to-fine training addresses the cold-start/few-shot problem and accelerates practical deployment. A test bed with more than 68 000 real-world samples shows our work achieves 2.472% EER under mixed contexts, outperforming the state of the art.
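The EER reported in the abstract is the operating point where the authenticator's false accept rate (FAR) equals its false reject rate (FRR). A minimal sketch of how an EER can be computed from score lists (hypothetical toy scores, not the authors' code or data; higher score = more likely the enrolled user):

```python
def equal_error_rate(genuine, impostor):
    """Sweep every observed score as a threshold and return the EER,
    i.e. the mean of FAR and FRR at the point where they are closest."""
    best_gap, eer = float("inf"), 1.0
    for t in sorted(set(genuine) | set(impostor)):
        frr = sum(g < t for g in genuine) / len(genuine)     # genuine users rejected
        far = sum(i >= t for i in impostor) / len(impostor)  # impostors accepted
        if abs(far - frr) < best_gap:                        # closest FAR == FRR point
            best_gap, eer = abs(far - frr), (far + frr) / 2
    return eer

# Toy example: the single high-scoring impostor (0.75) forces a 25% EER.
print(equal_error_rate([0.9, 0.8, 0.7, 0.6], [0.1, 0.2, 0.3, 0.75]))  # → 0.25
```

A lower EER indicates a better trade-off between convenience (few false rejects) and security (few false accepts), which is why the paper reports it as its headline metric.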
Pages: 18
Related Papers
35 records in total
  • [1] CONTEXT-AWARE DEEP LEARNING FOR MULTI-MODAL DEPRESSION DETECTION
    Lam, Genevieve
    Huang Dongyan
    Lin, Weisi
    2019 IEEE INTERNATIONAL CONFERENCE ON ACOUSTICS, SPEECH AND SIGNAL PROCESSING (ICASSP), 2019, : 3946 - 3950
  • [2] Multi-Branch Convolutional Network for Context-Aware Recommendation
    Guo, Wei
    Zhang, Can
    Guo, Huifeng
    Tang, Ruiming
    He, Xiuqiang
    PROCEEDINGS OF THE 43RD INTERNATIONAL ACM SIGIR CONFERENCE ON RESEARCH AND DEVELOPMENT IN INFORMATION RETRIEVAL (SIGIR '20), 2020, : 1709 - 1712
  • [3] Things that see: Context-aware multi-modal interaction
    Crowley, James L.
    COGNITIVE VISION SYSTEMS: SAMPLING THE SPECTRUM OF APPROACHES, 2006, 3948 : 183 - 198
  • [4] MULTI-BRANCH CONTEXT-AWARE NETWORK FOR PERSON RE-IDENTIFICATION
    Zhu, Yingxin
    Guo, Xiaoqiang
    Liu, Jianlei
    Jiang, Zhuqing
    2019 IEEE INTERNATIONAL CONFERENCE ON MULTIMEDIA AND EXPO (ICME), 2019, : 712 - 717
  • [5] Multi-Modal Context-Aware reasoNer (CAN) at the Edge of IoT
    Rahman, Hasibur
    Rahmani, Rahim
    Kanter, Theo
    8TH INTERNATIONAL CONFERENCE ON AMBIENT SYSTEMS, NETWORKS AND TECHNOLOGIES (ANT-2017) AND THE 7TH INTERNATIONAL CONFERENCE ON SUSTAINABLE ENERGY INFORMATION TECHNOLOGY (SEIT 2017), 2017, 109 : 335 - 342
  • [6] SCATEAgent: Context-aware software agents for multi-modal travel
    Yin, M
    Griss, M
    APPLICATIONS OF AGENT TECHNOLOGY IN TRAFFIC AND TRANSPORTATION, 2005, : 69 - 84
  • [7] Adaptive Context-Aware Multi-Modal Network for Depth Completion
    Zhao, Shanshan
    Gong, Mingming
    Fu, Huan
    Tao, Dacheng
    IEEE TRANSACTIONS ON IMAGE PROCESSING, 2021, 30 : 5264 - 5276
  • [8] Experiments with multi-modal interfaces in a context-aware city guide
    Bornträger, C
    Cheverst, K
    Davies, N
    Dix, A
    Friday, A
    Seitz, J
    HUMAN-COMPUTER INTERACTION WITH MOBILE DEVICES AND SERVICES, 2003, 2795 : 116 - 130
  • [9] Context-aware Interactive Attention for Multi-modal Sentiment and Emotion Analysis
    Chauhan, Dushyant Singh
    Akhtar, Md Shad
    Ekbal, Asif
    Bhattacharyya, Pushpak
    2019 CONFERENCE ON EMPIRICAL METHODS IN NATURAL LANGUAGE PROCESSING AND THE 9TH INTERNATIONAL JOINT CONFERENCE ON NATURAL LANGUAGE PROCESSING (EMNLP-IJCNLP 2019): PROCEEDINGS OF THE CONFERENCE, 2019, : 5647 - 5657
  • [10] Hydra: A Personalized and Context-Aware Multi-Modal Transportation Recommendation System
    Liu, Hao
    Tong, Yongxin
    Zhang, Panpan
    Lu, Xinjiang
    Duan, Jianguo
    Xiong, Hui
    KDD'19: PROCEEDINGS OF THE 25TH ACM SIGKDD INTERNATIONAL CONFERENCE ON KNOWLEDGE DISCOVERY AND DATA MINING, 2019, : 2314 - 2324