50 records in total
- [43] Detecting Adversarial Examples in Deep Neural Networks Using Normalizing Filters [C]. Proceedings of the 11th International Conference on Agents and Artificial Intelligence (ICAART), Vol 2, 2019: 164-173
- [44] Natural Scene Statistics for Detecting Adversarial Examples in Deep Neural Networks [C]. 2020 IEEE 22nd International Workshop on Multimedia Signal Processing (MMSP), 2020
- [45] Digital Watermark Perturbation for Adversarial Examples to Fool Deep Neural Networks [C]. 2021 International Joint Conference on Neural Networks (IJCNN), 2021
- [46] Towards Robust Detection of Adversarial Examples [C]. Advances in Neural Information Processing Systems 31 (NIPS 2018), 2018, 31
- [47] Towards the Development of Robust Deep Neural Networks in Adversarial Settings [C]. 2018 IEEE Military Communications Conference (MILCOM 2018), 2018: 419-424
- [48] Exploring Misclassifications of Robust Neural Networks to Enhance Adversarial Attacks [J]. Applied Intelligence, 2023, 53: 19843-19859
- [49] Fast Training of Deep Neural Networks Robust to Adversarial Perturbations [C]. 2020 IEEE High Performance Extreme Computing Conference (HPEC), 2020