Machine learning interpretability meets TLS fingerprinting

Cited by: 0
Authors
Mahdi Jafari Siavoshani
Amirhossein Khajehpour
Amirmohammad Ziaei Bideh
Amirali Gatmiri
Ali Taheri
Affiliations
[1] Sharif University of Technology, Information, Network, and Learning Lab (INL), Computer Science and Engineering Department
Source
Soft Computing | 2023 / Vol. 27
Keywords
Web fingerprinting; Transport layer security (TLS); Information leakage; Deep learning; Model interpretation;
DOI
Not available
Abstract
Protecting users’ privacy on the Internet is of great importance; however, it becomes increasingly difficult to maintain as network protocols and components grow in complexity. Investigating and understanding how data leak from information transmission platforms and protocols can therefore lead us to a more secure environment. In this paper, we propose a framework to systematically find the most vulnerable information fields in a network protocol. To this end, focusing on the transport layer security (TLS) protocol, we perform different machine-learning-based fingerprinting attacks on data collected from more than 70 domains (websites) to understand how and where this information leakage occurs in the TLS protocol. Then, by employing interpretation techniques developed in the machine learning community and applying our framework, we identify the most vulnerable information fields in the TLS protocol. Our findings demonstrate that the TLS handshake (which is largely unencrypted), the TLS record length appearing in the TLS application-data header, and the IV field are, in that order, among the most critical sources of leakage in this protocol.
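The abstract's core idea (fingerprint an endpoint from TLS-derived features, then use model interpretation to rank which fields leak identity information) can be illustrated with a toy sketch. This is NOT the authors' pipeline: the data are synthetic, the feature names (`handshake_len`, `record_len`, `iv_byte`) are hypothetical summaries of the fields the paper discusses, and per-feature classification accuracy stands in for the paper's interpretation techniques.

```python
# Toy sketch (synthetic data, not the paper's actual method): fingerprint a
# "domain" from per-connection TLS-derived features, then rank each feature's
# leakage by classifying with that feature alone.
import random

random.seed(0)

FEATURES = ["handshake_len", "record_len", "iv_byte"]  # hypothetical field summaries

def sample(domain):
    """Synthetic feature vector for one connection to `domain` (0 or 1).
    The IV is drawn identically for both domains, so it carries no signal."""
    base = {0: (512, 1400, 7), 1: (640, 900, 7)}[domain]
    return [b + random.gauss(0, 20) for b in base]

def nn_accuracy(train, test, keep):
    """1-nearest-neighbour accuracy using only the feature indices in `keep`."""
    correct = 0
    for x, y in test:
        best = min(train, key=lambda t: sum((x[i] - t[0][i]) ** 2 for i in keep))
        correct += best[1] == y
    return correct / len(test)

train = [(sample(d), d) for d in (0, 1) for _ in range(30)]
test = [(sample(d), d) for d in (0, 1) for _ in range(30)]

# Crude "interpretation" step: per-feature accuracy as a leakage score.
single_acc = {name: nn_accuracy(train, test, [i])
              for i, name in enumerate(FEATURES)}
for name, acc in single_acc.items():
    print(f"{name}: 1-NN accuracy alone = {acc:.2f}")
```

In this toy setup the handshake- and record-length features separate the two domains almost perfectly, while the IV feature classifies at chance, mirroring the intuition that fields whose distributions differ per site leak fingerprints.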
Pages: 7191–7208
Page count: 17