Adversarial Attacks on Deep Temporal Point Process

Cited by: 0
Authors
Khorshidi, Samira [1 ]
Wang, Bao [2 ]
Mohler, George [3 ]
Affiliations
[1] Indiana Univ Purdue Univ, Comp & Informat Sci, Indianapolis, IN 46202 USA
[2] Univ Utah, Dept Math, Salt Lake City, UT 84112 USA
[3] Boston Coll, Dept Comp Sci, Boston, MA USA
Keywords
Point process; Adversarial attacks; Deep learning; Nonparametric modeling
DOI
10.1109/ICMLA55696.2022.10102767
CLC Classification Number
TP18 [Artificial Intelligence Theory]
Discipline Classification Codes
081104; 0812; 0835; 1405
Abstract
Temporal point processes have many applications, from crime forecasting to modeling earthquake aftershock sequences. Due to the flexibility and expressiveness of deep learning, neural network-based approaches have recently shown promise for modeling point process intensities. However, there is a lack of research on the robustness of such models with respect to adversarial attacks and natural shocks to the systems they model. Specifically, while neural point processes may outperform simpler parametric models on in-sample tests, how these models perform when encountering adversarial examples or sharp non-stationary trends remains unknown. This work proposes several white-box and black-box adversarial attacks against temporal point processes modeled by deep neural networks. Extensive experiments confirm that both the predictive performance and the parametric modeling of neural point processes are vulnerable to adversarial attacks. Additionally, we evaluate the vulnerability and performance of these models in the presence of non-stationary abrupt changes, using a crime dataset during the COVID-19 pandemic as an example.
Pages: 1807 - 1814
Page count: 8
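To make the attack setting described in the abstract concrete, below is a minimal, hypothetical sketch of a white-box, FGSM-style attack on a neural temporal point process: inter-event times are perturbed by one sign-gradient step in the direction that lowers the model's log-likelihood, under a small L-infinity budget. The SimpleNeuralTPP model, its simplified compensator approximation, and the fgsm_attack_on_times helper are illustrative assumptions, not the authors' implementation.

import torch
import torch.nn as nn


class SimpleNeuralTPP(nn.Module):
    # Toy RNN intensity model: lambda_i = softplus(w . h_i), a stand-in for the
    # deep point-process architectures attacked in the paper.
    def __init__(self, hidden_size: int = 32):
        super().__init__()
        self.rnn = nn.GRU(input_size=1, hidden_size=hidden_size, batch_first=True)
        self.intensity_head = nn.Linear(hidden_size, 1)

    def log_likelihood(self, inter_event_times: torch.Tensor) -> torch.Tensor:
        # inter_event_times: (batch, seq_len) positive gaps between events.
        h, _ = self.rnn(inter_event_times.unsqueeze(-1))
        lam = nn.functional.softplus(self.intensity_head(h)).squeeze(-1)
        # Compensator integral approximated as lambda * gap on each interval
        # (a simplifying assumption made for brevity).
        return (torch.log(lam + 1e-8) - lam * inter_event_times).sum(dim=-1).mean()


def fgsm_attack_on_times(model, inter_event_times, epsilon=0.05):
    # One sign-gradient step that pushes the log-likelihood down; perturbed
    # gaps are clamped so they stay strictly positive.
    times = inter_event_times.clone().detach().requires_grad_(True)
    model.log_likelihood(times).backward()
    perturbed = times - epsilon * times.grad.sign()
    return perturbed.clamp_min(1e-4).detach()


if __name__ == "__main__":
    torch.manual_seed(0)
    model = SimpleNeuralTPP()
    gaps = torch.rand(8, 50) + 0.1  # synthetic inter-event times
    adv_gaps = fgsm_attack_on_times(model, gaps)
    print("clean log-likelihood:      ", model.log_likelihood(gaps).item())
    print("adversarial log-likelihood:", model.log_likelihood(adv_gaps).item())

A black-box counterpart would estimate the perturbation direction without access to gradients, for example via a surrogate model or finite-difference queries; the specifics of the attacks proposed by the authors are given in the full paper.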