Adversarial Attacks on Deep Temporal Point Process

Cited: 0
Authors
Khorshidi, Samira [1 ]
Wang, Bao [2 ]
Mohler, George [3 ]
Affiliations
[1] Indiana Univ Purdue Univ, Comp & Informat Sci, Indianapolis, IN 46202 USA
[2] Univ Utah, Dept Math, Salt Lake City, UT 84112 USA
[3] Boston Coll, Dept Comp Sci, Boston, MA USA
Keywords
Point process; Adversarial attacks; Deep learning; Nonparametric modeling;
DOI
10.1109/ICMLA55696.2022.10102767
Chinese Library Classification (CLC)
TP18 [Artificial intelligence theory];
Discipline codes
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
forecasting to modeling earthquake aftershock sequences. Due to the flexibility and expressiveness of deep learning, neural network-based approaches have recently shown promise for modeling point process intensities. However, there is a lack of research on the robustness of such models with respect to adversarial attacks and natural shocks to systems. Specifically, while neural point processes may outperform simpler parametric models on in-sample tests, how these models perform when encountering adversarial examples or sharp non-stationary trends remains unknown. The current work proposes several white-box and black-box adversarial attacks against temporal point processes modeled by deep neural networks. Extensive experiments confirm that both the predictive performance and the parametric modeling of neural point processes are vulnerable to adversarial attacks. Additionally, we evaluate the vulnerability and performance of these models in the presence of abrupt non-stationary changes, using a crimes dataset during the Covid-19 pandemic as an example.
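To illustrate the kind of white-box attack the abstract describes, the sketch below perturbs the event timestamps of a temporal point process in the direction that increases the model's negative log-likelihood (an FGSM-style step). This is a minimal toy, not the authors' method: a parametric exponential-kernel Hawkes process stands in for a neural intensity, the gradient is estimated by finite differences, and all function names and parameter values (`mu`, `alpha`, `beta`, `eps`) are illustrative assumptions.

```python
import numpy as np

def hawkes_nll(times, mu=0.5, alpha=0.8, beta=1.0, T=10.0):
    """Negative log-likelihood of an exponential-kernel Hawkes process:
    lambda(t) = mu + alpha * beta * sum_{t_i < t} exp(-beta * (t - t_i)).
    (Toy surrogate for a neural intensity model.)"""
    ll = 0.0
    for i, t in enumerate(times):
        lam = mu + alpha * beta * np.sum(np.exp(-beta * (t - times[:i])))
        ll += np.log(lam)
    # compensator: integral of the intensity over [0, T]
    ll -= mu * T + alpha * np.sum(1.0 - np.exp(-beta * (T - times)))
    return -ll

def fgsm_attack(times, eps=0.05, h=1e-5, T=10.0):
    """FGSM-style white-box attack on event times: move each timestamp by
    eps in the sign of the NLL gradient (finite-difference estimate),
    then clip to the observation window and re-sort."""
    grad = np.zeros_like(times)
    for j in range(len(times)):
        tp, tm = times.copy(), times.copy()
        tp[j] += h
        tm[j] -= h
        grad[j] = (hawkes_nll(tp, T=T) - hawkes_nll(tm, T=T)) / (2 * h)
    return np.sort(np.clip(times + eps * np.sign(grad), 0.0, T))

rng = np.random.default_rng(0)
times = np.sort(rng.uniform(0.0, 10.0, size=20))
adv = fgsm_attack(times)
clean, attacked = hawkes_nll(times), hawkes_nll(adv)
# the attacked NLL is typically higher than the clean NLL,
# i.e. the perturbed sequence fits the model worse
print(clean, attacked)
```

The same loop applies unchanged if `hawkes_nll` is replaced by a differentiable neural intensity model, in which case exact gradients via autodiff would replace the finite differences.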
Pages: 1807-1814
Page count: 8