Adversarial Attacks on Deep Temporal Point Process

Cited by: 0
Authors
Khorshidi, Samira [1 ]
Wang, Bao [2 ]
Mohler, George [3 ]
Affiliations
[1] Indiana Univ Purdue Univ, Comp & Informat Sci, Indianapolis, IN 46202 USA
[2] Univ Utah, Dept Math, Salt Lake City, UT 84112 USA
[3] Boston Coll, Dept Comp Sci, Boston, MA USA
Keywords
Point process; Adversarial attacks; Deep learning; Nonparametric modeling
DOI
10.1109/ICMLA55696.2022.10102767
CLC number
TP18 [Artificial Intelligence Theory]
Subject classification codes
081104; 0812; 0835; 1405
Abstract
Temporal point processes have many applications, from crime forecasting to modeling earthquake aftershock sequences. Owing to the flexibility and expressiveness of deep learning, neural network-based approaches have recently shown promise for modeling point process intensities. However, there is little research on the robustness of such models to adversarial attacks or to natural shocks to the underlying system. Specifically, while neural point processes may outperform simpler parametric models on in-sample tests, it remains unknown how these models perform when confronted with adversarial examples or sharp non-stationary trends. This work proposes several white-box and black-box adversarial attacks against temporal point processes modeled by deep neural networks. Extensive experiments confirm that both the predictive performance and the parametric modeling of neural point processes are vulnerable to adversarial attacks. Additionally, we evaluate the vulnerability and performance of these models in the presence of abrupt non-stationary changes, using a crime dataset from the COVID-19 pandemic as an example.
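To make the attack surface described in the abstract concrete, below is a minimal PyTorch sketch of a white-box, FGSM-style perturbation of event timestamps that increases a neural point process's negative log-likelihood, alongside a gradient-free random-jitter baseline standing in for a black-box attack. Everything here is an illustrative assumption, not the authors' method: the ToyNeuralTPP model, its piecewise-constant likelihood surrogate, the helper names whitebox_attack/blackbox_attack, and the budget eps are all hypothetical.

```python
import torch

class ToyNeuralTPP(torch.nn.Module):
    """Toy RNN-based temporal point process (an illustrative stand-in,
    not the paper's architecture): a GRU over inter-event gaps
    parameterizes a piecewise-constant conditional intensity."""
    def __init__(self, hidden=16):
        super().__init__()
        self.rnn = torch.nn.GRU(1, hidden, batch_first=True)
        self.head = torch.nn.Linear(hidden, 1)

    def neg_log_likelihood(self, times):
        # times: (1, n) increasing event timestamps.
        gaps = torch.diff(times, prepend=torch.zeros_like(times[:, :1]))
        h, _ = self.rnn(gaps.unsqueeze(-1))
        log_intensity = self.head(h).squeeze(-1)   # log lambda at each event
        compensator = log_intensity.exp() * gaps   # integral of lambda over each gap
        return -(log_intensity - compensator).sum()

def whitebox_attack(model, times, eps=0.05):
    """FGSM-style white-box attack: shift each timestamp in the gradient
    direction that increases the model's NLL, then re-sort to keep order."""
    times = times.clone().requires_grad_(True)
    model.neg_log_likelihood(times).backward()
    with torch.no_grad():
        adv, _ = torch.sort(times + eps * times.grad.sign())
    return adv.detach()

def blackbox_attack(times, eps=0.05):
    """Gradient-free baseline with the same budget: random timestamp jitter."""
    adv, _ = torch.sort(times + eps * torch.randn_like(times).sign())
    return adv

model = ToyNeuralTPP()
clean = torch.cumsum(torch.rand(1, 20), dim=1)   # synthetic event times
adv = whitebox_attack(model, clean)
print(model.neg_log_likelihood(clean).item(),
      model.neg_log_likelihood(adv).item())
```

Re-sorting after the perturbation is the one structural constraint specific to point processes: unlike image pixels, timestamps must remain ordered for the perturbed sequence to be a valid realization.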
Pages: 1807-1814
Page count: 8
Related papers
(50 records in total)
• [31] Ilahi, I.; Usama, M.; Qadir, J.; Janjua, M. U.; Al-Fuqaha, A.; Hoang, D. T.; Niyato, D. Challenges and Countermeasures for Adversarial Attacks on Deep Reinforcement Learning. IEEE Transactions on Artificial Intelligence, 2022, 3(2): 90-109.
• [32] Wang, Yulu; Wu, Kailun; Zhang, Changshui. Adversarial Attacks on Deep Unfolded Networks for Sparse Coding. 2020 IEEE International Conference on Acoustics, Speech, and Signal Processing (ICASSP), 2020: 5974-5978.
• [33] Fursov, Ivan; Morozov, Matvey; Kaploukhaya, Nina; Kovtun, Elizaveta; Rivera-Castro, Rodrigo; Gusev, Gleb; Babaev, Dmitry; Kireev, Ivan; Zaytsev, Alexey; Burnaev, Evgeny. Adversarial Attacks on Deep Models for Financial Transaction Records. KDD '21: Proceedings of the 27th ACM SIGKDD Conference on Knowledge Discovery & Data Mining, 2021: 2868-2878.
• [34] Pravin, Chandresh; Martino, Ivan; Nicosia, Giuseppe; Ojha, Varun. Adversarial Robustness in Deep Learning: Attacks on Fragile Neurons. Artificial Neural Networks and Machine Learning - ICANN 2021, Part I, 2021, 12891: 16-28.
• [35] Das, Nilaksh; Park, Haekyu; Wang, Zijie J.; Hohman, Fred; Firstman, Robert; Rogers, Emily; Chau, Duen Horng. MASSIF: Interactive Interpretation of Adversarial Attacks on Deep Learning. CHI '20: Extended Abstracts of the 2020 CHI Conference on Human Factors in Computing Systems, 2020.
• [36] Hao, Jingbo; Tao, Yang. Adversarial Attacks on Deep Learning Models in Smart Grids. Energy Reports, 2022, 8: 123-129.
• [37] Wang, Ling; Zhang, Cheng; Liu, Jie. Deep Learning Defense Method Against Adversarial Attacks. 2020 IEEE International Conference on Systems, Man, and Cybernetics (SMC), 2020: 3667-3671.
• [38] Mani, Nag; Moh, Melody; Moh, Teng-Sheng. Defending Deep Learning Models Against Adversarial Attacks. International Journal of Software Science and Computational Intelligence (IJSSCI), 2021, 13(1): 72-89.
• [39] Jia, Shuai; Ma, Chao; Song, Yibing; Yang, Xiaokang; Yang, Ming-Hsuan. Robust Deep Object Tracking against Adversarial Attacks. International Journal of Computer Vision, 2025, 133(3): 1238-1257.
• [40] Kaur, Navjot; Singh, Someet; Deore, Shailesh Shivaji; Vidhate, Deepak A.; Haridas, Divya; Kosuri, Gopala Varma; Kolhe, Mohini Ravindra. Robustness and Security in Deep Learning: Adversarial Attacks and Countermeasures. Journal of Electrical Systems, 2024, 20(3): 1250-1257.