50 items in total
- [1] Defensive Dropout for Hardening Deep Neural Networks under Adversarial Attacks [J]. 2018 IEEE/ACM International Conference on Computer-Aided Design (ICCAD) Digest of Technical Papers, 2018.
- [2] Defending Against Adversarial Attacks in Deep Neural Networks [J]. Artificial Intelligence and Machine Learning for Multi-Domain Operations Applications, 2019, 11006.
- [3] A Survey on the Vulnerability of Deep Neural Networks Against Adversarial Attacks [J]. Progress in Artificial Intelligence, 2022, 11: 131-141.
- [4] Adversarial Attacks and Defenses Against Deep Neural Networks: A Survey [J]. Cyber Physical Systems and Deep Learning, 2018, 140: 152-161.
- [6] Evolving Hyperparameters for Training Deep Neural Networks against Adversarial Attacks [J]. 2019 IEEE Symposium Series on Computational Intelligence (IEEE SSCI 2019), 2019: 1778-1785.
- [7] MRobust: A Method for Robustness against Adversarial Attacks on Deep Neural Networks [J]. 2020 International Joint Conference on Neural Networks (IJCNN), 2020.
- [8] Efficacy of Defending Deep Neural Networks against Adversarial Attacks with Randomization [J]. Artificial Intelligence and Machine Learning for Multi-Domain Operations Applications II, 2020, 11413.
- [9] Detect Adversarial Attacks Against Deep Neural Networks With GPU Monitoring [J]. IEEE Access, 2021, 9: 150579-150591.
- [10] Watermarking-based Defense against Adversarial Attacks on Deep Neural Networks [J]. 2021 International Joint Conference on Neural Networks (IJCNN), 2021.