Interpretable Probabilistic Password Strength Meters via Deep Learning

Cited by: 9
Authors
Pasquini, Dario [1,2,3]
Ateniese, Giuseppe [1]
Bernaschi, Massimo [3]
Affiliations
[1] Stevens Inst Technol, Hoboken, NJ 07030 USA
[2] Sapienza Univ Rome, Rome, Italy
[3] CNR, Inst Appl Comp, Rome, Italy
Source
COMPUTER SECURITY - ESORICS 2020, PT I | 2020, Vol. 12308
Keywords
Password security; Strength meters; Deep learning;
DOI
10.1007/978-3-030-58951-6_25
Chinese Library Classification (CLC): TP18 [Theory of Artificial Intelligence]
Discipline codes: 081104; 0812; 0835; 1405
Abstract
Probabilistic password strength meters have proven to be the most accurate tools for measuring password strength. Unfortunately, by construction, they produce only an opaque security estimate that fails to fully support the user during password composition. In this work, we take the first steps toward breaking the intelligibility barrier of this compelling class of meters. We show that probabilistic password meters inherently possess the capability to describe the latent relation between password strength and password structure. In our approach, the security contribution of each character of a password is disentangled and used to provide explicit, fine-grained feedback to the user. Furthermore, unlike existing heuristic constructions, our method is free from human bias, and, more importantly, its feedback has a clear probabilistic interpretation. Our contributions are: (1) we formulate the theoretical foundations of interpretable probabilistic password strength meters; (2) we describe how they can be implemented via an efficient and lightweight deep-learning framework suitable for client-side operation.
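The per-character feedback idea described in the abstract can be illustrated with a toy sketch. This is not the authors' deep-learning implementation: a smoothed character-bigram model stands in for their neural model, and the names `train_bigram` and `char_contributions` are illustrative. The per-character cost in bits, -log2 P(c_i | c_{i-1}), disentangles how much each character contributes to the overall (probabilistic) strength estimate:

```python
import math
from collections import defaultdict

def train_bigram(passwords):
    # Count character bigrams (with a "^" start symbol) over a leaked-password corpus.
    counts = defaultdict(lambda: defaultdict(int))
    for pw in passwords:
        prev = "^"
        for ch in pw:
            counts[prev][ch] += 1
            prev = ch
    return counts

def char_contributions(model, password, alphabet_size=95, alpha=1.0):
    # Per-character cost in bits: -log2 P(c_i | c_{i-1}), add-alpha smoothed
    # over the 95 printable ASCII characters. High-cost characters are the
    # ones contributing most to the password's estimated strength; low-cost
    # characters are predictable and could be flagged to the user.
    out = []
    prev = "^"
    for ch in password:
        ctx = model[prev]
        total = sum(ctx.values()) + alpha * alphabet_size
        p = (ctx[ch] + alpha) / total
        out.append((ch, -math.log2(p)))
        prev = ch
    return out
```

Under this sketch, a transition frequently seen in the training corpus costs few bits, while an unusual substitution costs many, which is the kind of structure-aware, probabilistically grounded feedback the paper advocates.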
Pages: 502-522 (21 pages)
Related papers (50 in total)
  • [41] Design high-entropy electrocatalyst via interpretable deep graph attention learning. Zhang, Jun; Wang, Chaohui; Huang, Shasha; Xiang, Xuepeng; Xiong, Yaoxu; Xu, Biao; Ma, Shihua; Fu, Haijun; Kai, Jijung; Kang, Xiongwu; Zhao, Shijun. JOULE, 2023, 7(08): 1832-1851
  • [42] Learning interpretable descriptors for the fatigue strength of steels. He, Ning; Ouyang, Runhai; Qian, Quan. AIP ADVANCES, 2021, 11(03)
  • [43] Building Trust in Deep Learning Models via a Self-Interpretable Visual Architecture. Zhao, Weimin; Mahmoud, Qusay H.; Alwidian, Sanaa. 2023 20TH ANNUAL INTERNATIONAL CONFERENCE ON PRIVACY, SECURITY AND TRUST, PST, 2023: 486-495
  • [44] A robust and interpretable deep learning framework for multi-modal registration via keypoints. Wang, Alan Q.; Yu, Evan M.; Dalca, Adrian V.; Sabuncu, Mert R. MEDICAL IMAGE ANALYSIS, 2023, 90
  • [45] Deep Staging: An Interpretable Deep Learning Framework for Disease Staging. Yao, Liuyi; Yao, Zijun; Hu, Jianying; Gao, Jing; Sun, Zhaonan. 2021 IEEE 9TH INTERNATIONAL CONFERENCE ON HEALTHCARE INFORMATICS (ICHI 2021), 2021: 130-137
  • [46] Interpretable Deep Learning for Marble Tiles Sorting. Ouzounis, Athanasios G.; Sidiropoulos, George K.; Papakostas, George A.; Sarafis, Ilias T.; Stamkos, Andreas; Solakis, George. PROCEEDINGS OF THE 2ND INTERNATIONAL CONFERENCE ON DEEP LEARNING THEORY AND APPLICATIONS (DELTA), 2021: 101-108
  • [47] Interpretable Deep Learning for Surgical Tool Management. Rodrigues, Mark; Mayo, Michael; Patros, Panos. INTERPRETABILITY OF MACHINE INTELLIGENCE IN MEDICAL IMAGE COMPUTING, AND TOPOLOGICAL DATA ANALYSIS AND ITS APPLICATIONS FOR MEDICAL DATA, 2021, 12929: 3-12
  • [48] Towards an interpretable deep learning model of cancer. Nilsson, Avlant; Meimetis, Nikolaos; Lauffenburger, Douglas A. NPJ PRECISION ONCOLOGY, 2025, 9(01)
  • [49] Interpretable Deep Learning for Monitoring Combustion Instability. Gangopadhyay, Tryambak; Tan, Sin Yong; LoCurto, Anthony; Michael, James B.; Sarkar, Soumik. IFAC PAPERSONLINE, 2020, 53(02): 832-837
  • [50] An interpretable ensemble method for deep representation learning. Jiang, Kai; Xiong, Zheli; Yang, Qichong; Chen, Jianpeng; Chen, Gang. ENGINEERING REPORTS, 2024, 6(03)