SENTINEL: Securing Indoor Localization Against Adversarial Attacks With Capsule Neural Networks

Times Cited: 0
Authors
Gufran, Danish [1 ]
Anandathirtha, Pooja [1 ]
Pasricha, Sudeep [1 ]
Affiliations
[1] Colorado State Univ, Dept Elect & Comp Engn, Ft Collins, CO 80523 USA
Funding
U.S. National Science Foundation;
Keywords
Location awareness; Training; Fluctuations; Working environment noise; Neural networks; Fingerprint recognition; Real-time systems; Indoor environment; Wireless fidelity; Resilience; Adversarial attacks; adversarial training; capsule neural networks; device heterogeneity; evil twin attacks; man-in-the-middle attacks; rogue access points (APs); Wi-Fi received signal strength (RSS) fingerprinting; ALGORITHM;
DOI
10.1109/TCAD.2024.3446717
CLC Number
TP3 [Computing technology, computer technology];
Discipline Code
0812;
Abstract
With the increasing demand for edge device-powered location-based services in indoor environments, Wi-Fi received signal strength (RSS) fingerprinting has become popular, given the unavailability of GPS indoors. However, robust and efficient indoor localization faces several challenges: RSS fluctuations caused by dynamic changes in indoor environments and the heterogeneity of edge devices both diminish localization accuracy. While advances in machine learning (ML) have shown promise in mitigating these effects, the problem remains open. Additionally, emerging adversarial attacks on ML-enhanced indoor localization systems, especially those mounted through malicious or rogue access points (APs), can deceive ML models and further increase localization errors. To address these challenges, we present SENTINEL, a novel embedded ML framework that uses modified capsule neural networks to bolster the resilience of indoor localization solutions against adversarial attacks, device heterogeneity, and dynamic RSS fluctuations. We also introduce RSSRogueLoc, a novel dataset capturing the effects of rogue APs in several real-world indoor environments. Experimental evaluations demonstrate that SENTINEL achieves significant improvements, with up to 3.5× lower mean error and 3.4× lower worst-case error than state-of-the-art frameworks under simulated adversarial attacks. SENTINEL also achieves up to 2.8× lower mean error and 2.7× lower worst-case error than state-of-the-art frameworks when evaluated on the real-world RSSRogueLoc dataset.
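As a rough illustration of the Wi-Fi RSS fingerprinting setting the abstract describes (a generic nearest-neighbor baseline, not the SENTINEL capsule-network method; the fingerprint values and AP layout below are made up for illustration), the following sketch matches an observed RSS vector against a fingerprint database and shows how a rogue AP spoofing one access point can push the match to a wrong reference point:

```python
import math

# Hypothetical fingerprint database: each reference location (x, y) in
# meters maps to mean RSS readings (dBm) from three access points.
# Values are illustrative only, not taken from the RSSRogueLoc dataset.
FINGERPRINTS = {
    (0.0, 0.0): [-40, -70, -80],
    (5.0, 0.0): [-70, -40, -75],
    (0.0, 5.0): [-75, -72, -45],
    (5.0, 5.0): [-80, -60, -50],
}

def locate(rss):
    """Return the reference location whose stored fingerprint is
    closest to the observed RSS vector in Euclidean distance."""
    return min(FINGERPRINTS,
               key=lambda loc: math.dist(FINGERPRINTS[loc], rss))

# A clean observation taken near (0, 0) matches correctly:
print(locate([-42, -69, -78]))   # → (0.0, 0.0)

# A rogue AP suppressing/spoofing AP 0's signal perturbs one RSS entry
# and flips the estimate to a distant reference point:
print(locate([-85, -69, -78]))   # → (5.0, 5.0)
```

This fragility of plain fingerprint matching to single-AP perturbations is the kind of vulnerability that adversarial training and capsule-based models, as in the paper, aim to mitigate.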
Pages: 4021-4032
Page count: 12