Segmentation Fault: A Cheap Defense Against Adversarial Machine Learning

Cited: 1
Authors
Al Bared, Doha [1 ]
Nassar, Mohamed [2 ]
Affiliations
[1] Amer Univ Beirut AUB, Dept Comp Sci, Beirut, Lebanon
[2] Univ New Haven, Dept Comp Sci, West Haven, CT USA
Keywords
Machine Learning; Adversarial ML; Neural Networks; Computer Vision
DOI
10.1109/MENACOMM50742.2021.9678308
Chinese Library Classification (CLC)
TP [Automation Technology, Computer Technology]
Discipline Code
0812
Abstract
Recently published attacks against deep neural networks (DNNs) have stressed the importance of methodologies and tools for assessing the security risks of deploying this technology in critical systems. Efficient techniques for detecting adversarial machine learning help establish trust and boost the adoption of deep learning in sensitive and security-critical systems. In this paper, we propose a new technique for defending deep neural network classifiers, convolutional ones in particular. Our defense is cheap in the sense that it requires less computational power, at a small cost in detection accuracy. The work builds on a recently published technique called ML-LOO. We replace the costly pixel-by-pixel leave-one-out approach of ML-LOO with a coarse-grained leave-one-out over image segments. We evaluate and compare the efficiency of different segmentation algorithms for this task. Our results show that a large gain in efficiency is possible, at the price of only a marginal decrease in detection accuracy.
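The coarse-grained leave-one-out idea described in the abstract can be sketched as follows. This is a minimal, hypothetical illustration rather than the authors' implementation: a simple grid partition stands in for a real segmentation algorithm (the paper compares several), and a toy `predict` function stands in for a trained DNN. Each segment is masked in turn and the resulting drop in the model's score is recorded, yielding one attribution value per segment instead of one per pixel.

```python
import numpy as np

def grid_segments(h, w, cell=8):
    """Assign each pixel of an h-by-w image to a coarse grid cell.
    A stand-in for a real segmentation (e.g. superpixels); returns an
    (h, w) integer label map with one label per cell."""
    rows = np.arange(h)[:, None] // cell
    cols = np.arange(w)[None, :] // cell
    n_cols = -(-w // cell)  # cells per row (ceiling division)
    return rows * n_cols + cols

def segment_loo_scores(image, predict, labels):
    """Coarse-grained leave-one-out: mask one segment at a time
    (here, replace it with the global image mean) and record the drop
    in the model's score. One forward pass per segment replaces one
    per pixel, which is where the efficiency gain comes from."""
    base = predict(image)
    fill = image.mean()
    scores = {}
    for seg in np.unique(labels):
        masked = image.copy()
        masked[labels == seg] = fill
        scores[int(seg)] = base - predict(masked)
    return scores
```

For a 16x16 image split into 8x8 cells, this needs only 4 model evaluations instead of 256, at the granularity cost the abstract describes.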
Pages: 37-42 (6 pages)