Segmentation Fault: A Cheap Defense Against Adversarial Machine Learning

Cited by: 1
Authors
Al Bared, Doha [1 ]
Nassar, Mohamed [2 ]
Affiliations
[1] Amer Univ Beirut AUB, Dept Comp Sci, Beirut, Lebanon
[2] Univ New Haven, Dept Comp Sci, West Haven, CT USA
Keywords
Machine Learning; Adversarial ML; Neural Networks; Computer Vision
DOI
10.1109/MENACOMM50742.2021.9678308
CLC Number
TP [Automation Technology, Computer Technology]
Discipline Code
0812
Abstract
Recently published attacks against deep neural networks (DNNs) have stressed the importance of methodologies and tools to assess the security risks of using this technology in critical systems. Efficient techniques for detecting adversarial machine learning help establish trust and boost the adoption of deep learning in sensitive and security-critical systems. In this paper, we propose a new technique for defending deep neural network classifiers, and convolutional ones in particular. Our defense is cheap in the sense that it requires substantially less computation power, at a small cost in detection accuracy. The work builds on a recently published technique called ML-LOO. We replace the costly pixel-by-pixel leave-one-out approach of ML-LOO with a coarse-grained leave-one-out over image segments. We evaluate and compare the efficiency of different segmentation algorithms for this task. Our results show that a large gain in efficiency is possible, at the price of only a marginal decrease in detection accuracy.
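The coarse-grained leave-one-out idea described in the abstract can be sketched as follows. This is a minimal illustration, not the paper's implementation: the classifier is a toy stand-in, and a fixed grid partition stands in for the segmentation algorithms (e.g. superpixel methods) the paper actually compares. ML-LOO uses the dispersion of the per-region leave-one-out score drops as its detection feature; here each occluded region is a whole segment rather than a single pixel, which is where the computational saving comes from.

```python
import numpy as np

def grid_segments(h, w, grid=4):
    """Assign each pixel a coarse grid-cell label.

    A stand-in for a real segmentation algorithm (e.g. SLIC or
    Felzenszwalb superpixels, which the paper evaluates).
    """
    rows = np.arange(h) * grid // h
    cols = np.arange(w) * grid // w
    return rows[:, None] * grid + cols[None, :]

def segment_loo_scores(image, predict, segments):
    """Coarse-grained leave-one-out: occlude one segment at a time
    and record the drop in the top-class score."""
    base = predict(image)
    top = int(np.argmax(base))
    drops = []
    for s in np.unique(segments):
        masked = image.copy()
        masked[segments == s] = image.mean()  # occlude the whole segment at once
        drops.append(base[top] - predict(masked)[top])
    return np.array(drops)

def toy_predict(img):
    """Hypothetical 2-class classifier: softmax over mean intensity
    of the left vs. right image half (illustration only)."""
    half = img.shape[1] // 2
    logits = np.array([img[:, :half].mean(), img[:, half:].mean()])
    e = np.exp(logits - logits.max())
    return e / e.sum()

img = np.zeros((8, 8))
img[:, 4:] = 1.0                      # bright right half -> class 1
seg = grid_segments(8, 8, grid=4)     # 16 segments instead of 64 pixels
drops = segment_loo_scores(img, toy_predict, seg)
# ML-LOO-style detection feature: dispersion of the LOO score distribution
feature = drops.std()
```

With a 4x4 grid on an 8x8 image, the classifier is queried 16 times instead of once per pixel (64 times); on realistic image sizes the reduction in forward passes is what makes the defense cheap.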
Pages: 37-42 (6 pages)