Securing Networks Against Adversarial Domain Name System Tunneling Attacks Using Hybrid Neural Networks

Cited by: 0
Authors:
Ness, Stephanie [1 ]
Affiliations:
[1] University of Vienna, Diplomatic Academy of Vienna, Wien, 1010, Austria
DOI: 10.1109/ACCESS.2025.3550853
Abstract
Domain Name System (DNS) tunneling is an emerging threat that abuses the DNS protocol to transfer illicit data, and it typically evades conventional detection systems. This paper therefore proposes a dual-architecture deep learning system, built on Long Short-Term Memory (LSTM) and Deep Neural Network (DNN) components, to detect and classify adversarial DNS tunneling attacks. The proposed model overcomes limitations of current DNS traffic classification techniques through temporal sequence modelling and feature extraction, distinguishing clearly between normal, attack, and adversarial traffic. In experiments on a broad data set, the proposed hybrid model raised classification accuracy to 85.2%, higher than baseline machine learning algorithms. Moreover, an ablation analysis showed that individual components, such as the LSTM layer and the exact dropout rate, are critical to the model's robustness against adversarial perturbation. This work offers a solution for identifying intricate threats at scale and in real time, and is therefore broadly applicable in sensitive sectors such as finance, health care, and public administration. Future work includes extending the approach to other network-based threats and improving its effectiveness against evolving adversarial tactics. © 2013 IEEE.
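The abstract mentions feature extraction over DNS traffic as one pillar of the model. A minimal sketch of the kind of per-query lexical features commonly fed to DNS tunneling detectors is shown below; the feature set, function names, and example domains are illustrative assumptions, not taken from the paper. Tunnels encode payload data in subdomain labels, which tends to produce long, high-entropy query names.

```python
import math
from collections import Counter

def shannon_entropy(s: str) -> float:
    """Shannon entropy of a string's character distribution, in bits/char."""
    if not s:
        return 0.0
    n = len(s)
    return -sum((c / n) * math.log2(c / n) for c in Counter(s).values())

def dns_query_features(qname: str) -> dict:
    """Lexical features of a DNS query name often used to flag tunneling
    (hypothetical feature set for illustration)."""
    labels = qname.rstrip(".").split(".")
    # Treat everything left of the registered domain (last two labels)
    # as the attacker-controllable subdomain portion.
    subdomain = ".".join(labels[:-2]) if len(labels) > 2 else ""
    digits = sum(ch.isdigit() for ch in subdomain)
    return {
        "qname_len": len(qname),
        "label_count": len(labels),
        "max_label_len": max((len(l) for l in labels), default=0),
        "subdomain_entropy": shannon_entropy(subdomain),
        "digit_ratio": digits / len(subdomain) if subdomain else 0.0,
    }

# A benign lookup vs. a tunnel-like query carrying an encoded payload label.
benign = dns_query_features("www.example.com")
tunnel = dns_query_features("dGhpcyBpcyBleGZpbHRyYXRlZA.x9k2.evil-ns.net")
```

In a pipeline like the one the paper describes, per-query feature vectors of this kind would be fed to the DNN branch, while the raw character sequence of the query name would go to the LSTM branch for temporal modelling.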
Pages: 46697 - 46709
Related Papers
50 items
  • [1] SENTINEL: Securing Indoor Localization Against Adversarial Attacks With Capsule Neural Networks
    Gufran, Danish
    Anandathirtha, Pooja
    Pasricha, Sudeep
    IEEE TRANSACTIONS ON COMPUTER-AIDED DESIGN OF INTEGRATED CIRCUITS AND SYSTEMS, 2024, 43 (11) : 4021 - 4032
  • [2] Securing Deep Spiking Neural Networks against Adversarial Attacks through Inherent Structural Parameters
    El-Allami, Rida
    Marchisio, Alberto
    Shafique, Muhammad
    Alouani, Ihsen
    PROCEEDINGS OF THE 2021 DESIGN, AUTOMATION & TEST IN EUROPE CONFERENCE & EXHIBITION (DATE 2021), 2021, : 774 - 779
  • [3] Robustness Against Adversarial Attacks in Neural Networks Using Incremental Dissipativity
    Aquino, Bernardo
    Rahnama, Arash
    Seiler, Peter
    Lin, Lizhen
    Gupta, Vijay
    IEEE CONTROL SYSTEMS LETTERS, 2022, 6 : 2341 - 2346
  • [4] Defending Against Adversarial Attacks in Deep Neural Networks
    You, Suya
    Kuo, C-C Jay
    ARTIFICIAL INTELLIGENCE AND MACHINE LEARNING FOR MULTI-DOMAIN OPERATIONS APPLICATIONS, 2019, 11006
  • [5] A survey on the vulnerability of deep neural networks against adversarial attacks
    Michel, Andy
    Jha, Sumit Kumar
    Ewetz, Rickard
    PROGRESS IN ARTIFICIAL INTELLIGENCE, 2022, 11 : 131 - 141
  • [6] Adversarial Attacks and Defenses Against Deep Neural Networks: A Survey
    Ozdag, Mesut
    CYBER PHYSICAL SYSTEMS AND DEEP LEARNING, 2018, 140 : 152 - 161
  • [7] Relative Robustness of Quantized Neural Networks Against Adversarial Attacks
    Duncan, Kirsty
    Komendantskaya, Ekaterina
    Stewart, Robert
    Lones, Michael
    2020 INTERNATIONAL JOINT CONFERENCE ON NEURAL NETWORKS (IJCNN), 2020,
  • [8] GNNGUARD: Defending Graph Neural Networks against Adversarial Attacks
    Zhang, Xiang
    Zitnik, Marinka
    ADVANCES IN NEURAL INFORMATION PROCESSING SYSTEMS 33, NEURIPS 2020, 2020, 33
  • [9] Robust Heterogeneous Graph Neural Networks against Adversarial Attacks
    Zhang, Mengmei
    Wang, Xiao
    Zhu, Meiqi
    Shi, Chuan
    Zhang, Zhiqiang
    Zhou, Jun
    THIRTY-SIXTH AAAI CONFERENCE ON ARTIFICIAL INTELLIGENCE / THIRTY-FOURTH CONFERENCE ON INNOVATIVE APPLICATIONS OF ARTIFICIAL INTELLIGENCE / THE TWELFTH SYMPOSIUM ON EDUCATIONAL ADVANCES IN ARTIFICIAL INTELLIGENCE, 2022, : 4363 - 4370