Handling the adversarial attacks: A machine learning's perspective

Cited by: 4
Authors
Cao, Ning [1 ]
Li, Guofu [2 ]
Zhu, Pengjia [3 ]
Sun, Qian [4 ]
Wang, Yingying [1 ]
Li, Jing [5 ]
Yan, Maoling [6 ]
Zhao, Yongbin [7 ]
Affiliations
[1] Qingdao Binhai Univ, Coll Informat Engn, Qingdao, Shandong, Peoples R China
[2] Univ Shanghai Sci & Technol, Coll Commun & Art Design, Shanghai, Peoples R China
[3] Accenture AI Lab, Shanghai, Peoples R China
[4] Beijing Technol & Business Univ, Sch Comp & Informat Engn, Beijing, Peoples R China
[5] Beijing Union Univ, Coll Intellectualized City, Beijing, Peoples R China
[6] Shandong Agr Univ, Coll Informat Sci & Engn, Tai An, Shandong, Peoples R China
[7] Shijiazhuang Tiedao Univ, Sch Informat Sci & Technol, Shijiazhuang, Hebei, Peoples R China
Keywords
Security; Deep learning; Adversarial; Robustness; CLOUD; COMMUNICATION; ENCRYPTION; TAXONOMY; SYSTEMS;
DOI
10.1007/s12652-018-0714-6
CLC Classification
TP18 [Theory of Artificial Intelligence];
Subject Classification Codes
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
The i.i.d. assumption is the cornerstone of most conventional machine learning algorithms. However, reducing the bias and variance of a learning model on an i.i.d. dataset may not prevent its failure on adversarial samples, which are intentionally generated either by malicious users or by rival programs. This paper gives a brief introduction to machine learning and adversarial learning, discussing the research frontier of the adversarial issues noted in both the machine learning and network security fields. We argue that one key cause of the adversarial issue is that learning algorithms may not exploit the input feature set sufficiently, so that attackers can focus on a small subset of features to trick the model. To address this issue, we consider two important classes of classifiers. For random forests, we propose a variant called the Weighted Random Forest (WRF) that encourages the model to give even credit to the input features. This approach can be further improved by carefully selecting a subset of trees based on clustering analysis at run time. For neural networks, we propose introducing extra soft constraints, based on the weight variance, into the objective function, so that the model bases its classification decisions on a more evenly distributed feature impact. Empirical experiments show that these approaches effectively improve the robustness of the learnt models over their baseline systems.
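The neural-network idea in the abstract (a soft constraint on weight variance added to the objective) can be illustrated with a minimal sketch. The abstract does not give the exact formulation, so the penalty below is an assumption: it measures the per-input-feature weight magnitudes of the first layer and penalizes their variance, so the objective is lowest when no single feature dominates the decision. The function names and the scaling factor `lam` are hypothetical.

```python
import numpy as np

def weight_variance_penalty(W, lam=0.01):
    """Hypothetical soft constraint on first-layer weights.

    W   : (n_features, n_hidden) weight matrix; each row holds the
          outgoing weights of one input feature.
    lam : penalty strength (assumed hyperparameter).

    Computes one L2 magnitude per input feature and returns lam times
    the variance of those magnitudes, so the penalty vanishes when all
    features carry equal weight and grows when a few features dominate.
    """
    per_feature = np.linalg.norm(W, axis=1)  # one magnitude per feature
    return lam * np.var(per_feature)

def regularized_loss(task_loss, W, lam=0.01):
    """Total objective = task loss + soft weight-variance constraint."""
    return task_loss + weight_variance_penalty(W, lam)
```

Under this reading, an attacker who perturbs only a handful of features gains less, because the constraint pushes the classifier to spread its decision evidence across the whole input.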
Pages: 2929-2943 (15 pages)
Related Papers (50 in total)
  • [1] Handling the adversarial attacks: A machine learning's perspective
    Ning Cao
    Guofu Li
    Pengjia Zhu
    Qian Sun
    Yingying Wang
    Jing Li
    Maoling Yan
    Yongbin Zhao
    [J]. Journal of Ambient Intelligence and Humanized Computing, 2019, 10 : 2929 - 2943
  • [2] Adversarial attacks on medical machine learning
    Finlayson, Samuel G.
    Bowers, John D.
    Ito, Joichi
    Zittrain, Jonathan L.
    Beam, Andrew L.
    Kohane, Isaac S.
    [J]. SCIENCE, 2019, 363 (6433) : 1287 - 1289
  • [3] Enablers Of Adversarial Attacks in Machine Learning
    Izmailov, Rauf
    Sugrim, Shridatt
    Chadha, Ritu
    McDaniel, Patrick
    Swami, Ananthram
    [J]. 2018 IEEE MILITARY COMMUNICATIONS CONFERENCE (MILCOM 2018), 2018, : 425 - 430
  • [4] Detection of adversarial attacks on machine learning systems
    Judah, Matthew
    Sierchio, Jen
    Planer, Michael
    [J]. ARTIFICIAL INTELLIGENCE AND MACHINE LEARNING FOR MULTI-DOMAIN OPERATIONS APPLICATIONS V, 2023, 12538
  • [5] Safe Machine Learning and Defeating Adversarial Attacks
    Rouhani, Bita Darvish
    Samragh, Mohammad
    Javidi, Tara
    Koushanfar, Farinaz
    [J]. IEEE SECURITY & PRIVACY, 2019, 17 (02) : 31 - 38
  • [6] Adversarial Machine Learning Attacks in Internet of Things Systems
    Kone, Rachida
    Toutsop, Otily
    Thierry, Ketchiozo Wandji
    Kornegay, Kevin
    Falaye, Joy
    [J]. 2022 IEEE APPLIED IMAGERY PATTERN RECOGNITION WORKSHOP, AIPR, 2022,
  • [7] Adversarial attacks on machine learning-aided visualizations
    Fujiwara, Takanori
    Kucher, Kostiantyn
    Wang, Junpeng
    Martins, Rafael M.
    Kerren, Andreas
    Ynnerman, Anders
    [J]. JOURNAL OF VISUALIZATION, 2024,
  • [8] Robust in practice: Adversarial attacks on quantum machine learning
    Liao, Haoran
    Convy, Ian
    Huggins, William J.
    Whaley, K. Birgitta
    [J]. PHYSICAL REVIEW A, 2021, 103 (04)
  • [9] The Vulnerability of UAVs: An Adversarial Machine Learning Perspective
    Doyle, Michael
    Harguess, Joshua
    Manville, Keith
    Rodriguez, Mikel
    [J]. GEOSPATIAL INFORMATICS XI, 2021, 11733
  • [10] Trojan Attacks on Wireless Signal Classification with Adversarial Machine Learning
    Davaslioglu, Kemal
    Sagduyu, Yalin E.
    [J]. 2019 IEEE INTERNATIONAL SYMPOSIUM ON DYNAMIC SPECTRUM ACCESS NETWORKS (DYSPAN), 2019, : 515 - 520