Adversarial Attacks on Deep Temporal Point Process

Cited by: 0
Authors
Khorshidi, Samira [1 ]
Wang, Bao [2 ]
Mohler, George [3 ]
Affiliations
[1] Indiana Univ Purdue Univ, Comp & Informat Sci, Indianapolis, IN 46202 USA
[2] Univ Utah, Dept Math, Salt Lake City, UT 84112 USA
[3] Boston Coll, Dept Comp Sci, Boston, MA USA
Keywords
Point process; Adversarial attacks; Deep learning; Nonparametric modeling;
DOI
10.1109/ICMLA55696.2022.10102767
Chinese Library Classification (CLC)
TP18 [Artificial intelligence theory];
Discipline classification codes
081104; 0812; 0835; 1405;
Abstract
Temporal point processes have many applications, from crime forecasting to modeling earthquake aftershock sequences. Due to the flexibility and expressiveness of deep learning, neural network-based approaches have recently shown promise for modeling point process intensities. However, there is little research on the robustness of such models with respect to adversarial attacks and natural shocks to the underlying systems. In particular, while neural point processes may outperform simpler parametric models on in-sample tests, how these models perform when encountering adversarial examples or sharp non-stationary trends remains unknown. This work proposes several white-box and black-box adversarial attacks against temporal point processes modeled by deep neural networks. Extensive experiments confirm that both the predictive performance and the parametric modeling of neural point processes are vulnerable to adversarial attacks. Additionally, we evaluate the vulnerability and performance of these models in the presence of abrupt non-stationary changes, using a crimes dataset during the Covid-19 pandemic as an example.
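The white-box attacks described in the abstract perturb event data in the direction that degrades the model's likelihood. The following is a minimal sketch of that idea, not the paper's actual method: it uses a simple exponential-kernel Hawkes process as a stand-in for a neural intensity model, a finite-difference gradient in place of autograd, and FGSM-style sign perturbations of event timestamps. The parameters `mu`, `alpha`, `beta`, and the budget `eps` are illustrative assumptions.

```python
import numpy as np

def neg_log_likelihood(events, T, mu=0.5, alpha=0.8, beta=1.0):
    """NLL of a sorted event sequence on [0, T] under an exponential-kernel
    Hawkes process: -sum_i log(lambda(t_i)) + integral_0^T lambda(s) ds."""
    log_lam = 0.0
    for i, t in enumerate(events):
        past = events[:i]  # events strictly before t_i (assumes sorted input)
        lam = mu + alpha * beta * np.exp(-beta * (t - past)).sum()
        log_lam += np.log(lam)
    # Closed-form compensator for the exponential kernel
    compensator = mu * T + alpha * (1.0 - np.exp(-beta * (T - events))).sum()
    return compensator - log_lam

def fgsm_attack(events, T, eps=0.05):
    """FGSM-style white-box attack on event timestamps: shift each event by
    eps in the sign of the loss gradient. Finite differences stand in for
    backpropagation through a neural intensity model."""
    grad = np.zeros_like(events)
    h = 1e-5
    for i in range(len(events)):
        up, dn = events.copy(), events.copy()
        up[i] += h
        dn[i] -= h
        grad[i] = (neg_log_likelihood(up, T) - neg_log_likelihood(dn, T)) / (2 * h)
    adv = np.clip(events + eps * np.sign(grad), 0.0, T)
    return np.sort(adv)  # keep timestamps ordered after perturbation

events = np.sort(np.array([0.4, 1.1, 1.3, 2.7, 3.5]))
T = 5.0
adv = fgsm_attack(events, T)
```

For a small `eps`, the perturbed sequence `adv` yields a higher negative log-likelihood than the clean one, mimicking how an adversary degrades a fitted point-process model; the black-box variants in the paper would instead estimate this direction from queries alone.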
Pages: 1807-1814
Page count: 8
Related papers
50 records in total
  • [21] Transcend Adversarial Examples: Diversified Adversarial Attacks to Test Deep Learning Model
    Kong, Wei
    2023 IEEE 41ST INTERNATIONAL CONFERENCE ON COMPUTER DESIGN, ICCD, 2023, : 13 - 20
  • [22] Adversarial attacks and adversarial training for burn image segmentation based on deep learning
    Chen, Luying
    Liang, Jiakai
    Wang, Chao
    Yue, Keqiang
    Li, Wenjun
    Fu, Zhihui
    MEDICAL & BIOLOGICAL ENGINEERING & COMPUTING, 2024, 62 (09) : 2717 - 2735
  • [23] Defending Against Adversarial Attacks in Deep Neural Networks
    You, Suya
    Kuo, C-C Jay
    ARTIFICIAL INTELLIGENCE AND MACHINE LEARNING FOR MULTI-DOMAIN OPERATIONS APPLICATIONS, 2019, 11006
  • [24] A Survey on Adversarial Attacks and Defenses for Deep Reinforcement Learning
    Liu A.-S.
    Guo J.
    Li S.-M.
    Xiao Y.-S.
    Liu X.-L.
    Tao D.-C.
    Jisuanji Xuebao/Chinese Journal of Computers, 2023, 46 (08): : 1553 - 1576
  • [25] Threat of Adversarial Attacks within Deep Learning: Survey
    Ata-Us-samad
    Singh R.
    Recent Advances in Computer Science and Communications, 2023, 16 (07)
  • [26] On the Robustness of Deep Clustering Models: Adversarial Attacks and Defenses
    Chhabra, Anshuman
    Sekhari, Ashwin
    Mohapatra, Prasant
    ADVANCES IN NEURAL INFORMATION PROCESSING SYSTEMS 35 (NEURIPS 2022), 2022,
  • [27] Adversarial Attacks On a Regression Based Deep Registration Network
    Li, F.
    Cai, W.
    He, X.
    Moran, J.
    Cervino, L.
    Li, T.
    Li, X.
    MEDICAL PHYSICS, 2022, 49 (06) : E157 - E157
  • [28] Understanding adversarial attacks on observations in deep reinforcement learning
    You, Qiaoben
    Ying, Chengyang
    Zhou, Xinning
    Su, Hang
    Zhu, Jun
    Zhang, Bo
SCIENCE CHINA-INFORMATION SCIENCES, 2024, 67 (05): 69 - 83
  • [29] Mitigating the impact of adversarial attacks in very deep networks
    Hassanin, Mohammed
    Radwan, Ibrahim
    Moustafa, Nour
    Tahtali, Murat
    Kumar, Neeraj
    APPLIED SOFT COMPUTING, 2021, 105 (105)