SENTINEL: Securing Indoor Localization Against Adversarial Attacks With Capsule Neural Networks

Cited: 0
Authors
Gufran, Danish [1 ]
Anandathirtha, Pooja [1 ]
Pasricha, Sudeep [1 ]
Affiliations
[1] Colorado State Univ, Dept Elect & Comp Engn, Ft Collins, CO 80523 USA
Funding
U.S. National Science Foundation
Keywords
Location awareness; Training; Fluctuations; Working environment noise; Neural networks; Fingerprint recognition; Real-time systems; Indoor environment; Wireless fidelity; Resilience; Adversarial attacks; adversarial training; capsule neural networks; device heterogeneity; evil twin attacks; man-in-the-middle attacks; rogue access points (APs); Wi-Fi received signal strength (RSS) fingerprinting; ALGORITHM
DOI
10.1109/TCAD.2024.3446717
CLC Number
TP3 [Computing Technology, Computer Technology]
Subject Classification Code
0812
Abstract
With the increasing demand for edge device-powered location-based services in indoor environments, Wi-Fi received signal strength (RSS) fingerprinting has become popular, given the unavailability of GPS indoors. However, achieving robust and efficient indoor localization faces several challenges, due to RSS fluctuations from dynamic changes in indoor environments and the heterogeneity of edge devices, both of which diminish localization accuracy. While advances in machine learning (ML) have shown promise in mitigating these effects, the problem remains open. Additionally, emerging adversarial attacks on ML-enhanced indoor localization systems, especially those introduced by malicious or rogue access points (APs), can deceive ML models and further increase localization errors. To address these challenges, we present SENTINEL, a novel embedded ML framework that uses modified capsule neural networks to bolster the resilience of indoor localization solutions against adversarial attacks, device heterogeneity, and dynamic RSS fluctuations. We also introduce RSSRogueLoc, a novel dataset capturing the effects of rogue APs in several real-world indoor environments. Experimental evaluations demonstrate that SENTINEL achieves significant improvements, with up to 3.5x lower mean error and 3.4x lower worst-case error than state-of-the-art frameworks under simulated adversarial attacks. SENTINEL also achieves up to 2.8x lower mean error and 2.7x lower worst-case error than state-of-the-art frameworks when evaluated on the real-world RSSRogueLoc dataset.
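The abstract's core technical idea is classifying Wi-Fi RSS fingerprints with a capsule neural network, whose vector-valued capsules and routing-by-agreement are intended to be harder to fool with small adversarial RSS perturbations than plain feedforward classifiers. The following is a minimal sketch of that general technique, not the authors' SENTINEL implementation: the class RSSCapsNet, all layer sizes, the routing hyperparameters, and the toy data are illustrative assumptions, following the standard dynamic-routing formulation of Sabour et al. (2017).

# A minimal sketch (illustrative only, NOT the SENTINEL architecture):
# a capsule network mapping a Wi-Fi RSS fingerprint to one of several
# reference locations. All dimensions and data below are assumptions.
import torch
import torch.nn as nn
import torch.nn.functional as F

def squash(s, dim=-1, eps=1e-8):
    # Capsule nonlinearity: preserves direction, maps the norm into (0, 1).
    sq_norm = (s * s).sum(dim=dim, keepdim=True)
    return (sq_norm / (1.0 + sq_norm)) * s / torch.sqrt(sq_norm + eps)

class RSSCapsNet(nn.Module):
    def __init__(self, n_aps=100, n_locations=20,
                 n_primary=32, primary_dim=8, class_dim=16, routing_iters=3):
        super().__init__()
        self.n_primary, self.primary_dim = n_primary, primary_dim
        self.routing_iters = routing_iters
        # Encode the raw RSS vector into primary capsules.
        self.primary = nn.Linear(n_aps, n_primary * primary_dim)
        # One transformation matrix per (primary capsule, class capsule) pair.
        self.W = nn.Parameter(
            0.01 * torch.randn(n_primary, n_locations, class_dim, primary_dim))

    def forward(self, rss):
        # rss: (batch, n_aps), e.g. dBm values scaled to roughly [0, 1].
        u = squash(self.primary(rss).view(-1, self.n_primary, self.primary_dim))
        # Prediction vectors u_hat[b, i, j] = W[i, j] @ u[b, i].
        u_hat = torch.einsum('ijdp,bip->bijd', self.W, u)
        # Dynamic routing by agreement.
        b = torch.zeros(u_hat.shape[:3], device=rss.device)  # (batch, i, j)
        for _ in range(self.routing_iters):
            c = F.softmax(b, dim=2)                   # coupling coefficients
            s = (c.unsqueeze(-1) * u_hat).sum(dim=1)  # (batch, j, class_dim)
            v = squash(s)                             # class capsules
            b = b + (u_hat * v.unsqueeze(1)).sum(-1)  # agreement update
        # The length of each class capsule scores that candidate location.
        return v.norm(dim=-1)  # (batch, n_locations)

model = RSSCapsNet()
scores = model(torch.rand(4, 100))  # 4 fingerprints over 100 APs
print(scores.shape)                 # torch.Size([4, 20])

The length of each output capsule acts as the model's confidence in that location. Intuitively, a rogue AP perturbs only a few entries of the input RSS vector, and routing-by-agreement tends to down-weight primary capsules whose predictions disagree with the consensus, which is the robustness property the paper builds on; the paper's "modified" capsule design and its adversarial-training procedure are not reproduced here.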
Pages: 4021-4032
Number of pages: 12
Related Papers
50 records in total
  • [21] Evolving Hyperparameters for Training Deep Neural Networks against Adversarial Attacks. Liu, Jia; Jin, Yaochu. 2019 IEEE SYMPOSIUM SERIES ON COMPUTATIONAL INTELLIGENCE (IEEE SSCI 2019), 2019: 1778-1785
  • [22] Graph Structure Reshaping Against Adversarial Attacks on Graph Neural Networks. Wang, Haibo; Zhou, Chuan; Chen, Xin; Wu, Jia; Pan, Shirui; Li, Zhao; Wang, Jilong; Yu, Philip S. IEEE TRANSACTIONS ON KNOWLEDGE AND DATA ENGINEERING, 2024, 36 (11): 6344-6357
  • [23] Is Approximation Universally Defensive Against Adversarial Attacks in Deep Neural Networks? Siddique, Ayesha; Hoque, Khaza Anuarul. PROCEEDINGS OF THE 2022 DESIGN, AUTOMATION & TEST IN EUROPE CONFERENCE & EXHIBITION (DATE 2022), 2022: 364-369
  • [24] HeteroGuard: Defending Heterogeneous Graph Neural Networks against Adversarial Attacks. Kumarasinghe, Udesh; Nabeel, Mohamed; De Zoysa, Kasun; Gunawardana, Kasun; Elvitigala, Charitha. 2022 IEEE INTERNATIONAL CONFERENCE ON DATA MINING WORKSHOPS, ICDMW, 2022: 698-705
  • [25] MRobust: A Method for Robustness against Adversarial Attacks on Deep Neural Networks. Liu, Yi-Ling; Lomuscio, Alessio. 2020 INTERNATIONAL JOINT CONFERENCE ON NEURAL NETWORKS (IJCNN), 2020
  • [26] Efficacy of Defending Deep Neural Networks against Adversarial Attacks with Randomization. Zhou, Yan; Kantarcioglu, Murat; Xi, Bowei. ARTIFICIAL INTELLIGENCE AND MACHINE LEARNING FOR MULTI-DOMAIN OPERATIONS APPLICATIONS II, 2020, 11413
  • [27] Towards Imperceptible and Robust Adversarial Example Attacks against Neural Networks. Luo, Bo; Liu, Yannan; Wei, Lingxiao; Xu, Qiang. THIRTY-SECOND AAAI CONFERENCE ON ARTIFICIAL INTELLIGENCE / THIRTIETH INNOVATIVE APPLICATIONS OF ARTIFICIAL INTELLIGENCE CONFERENCE / EIGHTH AAAI SYMPOSIUM ON EDUCATIONAL ADVANCES IN ARTIFICIAL INTELLIGENCE, 2018: 1652-1659
  • [28] Detect Adversarial Attacks Against Deep Neural Networks With GPU Monitoring. Zoppi, Tommaso; Ceccarelli, Andrea. IEEE ACCESS, 2021, 9: 150579-150591
  • [29] Robust convolutional neural networks against adversarial attacks on medical images. Shi, Xiaoshuang; Peng, Yifan; Chen, Qingyu; Keenan, Tiarnan; Thavikulwat, Alisa T.; Lee, Sungwon; Tang, Yuxing; Chew, Emily Y.; Summers, Ronald M.; Lu, Zhiyong. PATTERN RECOGNITION, 2022, 132
  • [30] Centered-Ranking Learning Against Adversarial Attacks in Neural Networks. Appiah, Benjamin; Adu, Adolph S. Y.; Osei, Isaac; Assamah, Gabriel; Hammond, Ebenezer N. A. INTERNATIONAL JOURNAL OF NETWORK SECURITY, 2023, 25 (05): 814-820