Fine Tuning Lasso in an Adversarial Environment Against Gradient Attacks

Cited: 0
Authors
Ditzler, Gregory [1 ]
Prater, Ashley [2 ]
Institutions
[1] Univ Arizona, Dept Elect & Comp Engn, Tucson, AZ 85721 USA
[2] US Air Force, Res Lab, Informat Directorate, Rome, NY 13441 USA
Keywords
Feature Selection; Adversarial Machine Learning; Supervised Learning
DOI: not available
CLC Number: TP18 [Artificial Intelligence Theory]
Subject Classification Codes: 081104; 0812; 0835; 1405
Abstract
Machine learning and data mining algorithms typically assume that the training and testing data are sampled from the same fixed probability distribution; however, this assumption is often violated in practice. The field of domain adaptation addresses settings where the distributions of the two domains (training/source and testing/target) differ; however, the difference between the domains may not be known a priori. There has been a recent thrust in addressing the problem of learning in the presence of an adversary, which we formulate as a problem of domain adaptation to build a more robust classifier. This formulation is motivated by recent findings that call into question the overall security of classifiers and their preprocessing stages in adversarial learning settings. Adversarial training (and testing) data pose a serious threat in scenarios where an attacker has the opportunity to "poison" the training data or "evade" on the testing data set(s) to achieve an outcome that is not in the best interest of the classifier. Recent work has begun to show the impact of adversarial data on several classifiers; however, the impact of the adversary on the preprocessing of data (i.e., dimensionality reduction or feature selection) has been largely ignored in the recent resurgence of adversarial learning research. Furthermore, variable selection, a vital component of any data analysis, has been shown to be particularly susceptible to an attacker who has knowledge of the task. In this work, we explore avenues for learning resilient classification models in the adversarial learning setting by considering the effects of adversarial data and how to mitigate them through optimization. Our model forms a single convex optimization problem that uses the labeled training data from the source domain and known weaknesses of the model for an adversarial component. 
We benchmark the proposed approach on synthetic data and show the trade-off between classification accuracy and skew-insensitive statistics.
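The abstract describes augmenting Lasso with an adversarial component inside a single convex optimization problem. The paper's exact formulation is not reproduced in this record; one well-known connection in this direction is that robust least squares under bounded feature-wise perturbations reduces to Lasso with an inflated l1 penalty. The sketch below is only illustrative of that idea: `robust_lasso` and the perturbation budget `eps` are hypothetical names, and the enlarged-penalty Lasso is solved with plain ISTA (proximal gradient).

```python
import numpy as np

def soft_threshold(z, t):
    """Elementwise soft-thresholding: the proximal operator of t * ||.||_1."""
    return np.sign(z) * np.maximum(np.abs(z) - t, 0.0)

def robust_lasso(X, y, lam=1.0, eps=0.5, n_iter=500):
    """Illustrative robust Lasso via an inflated l1 penalty (lam + eps).

    eps is a hypothetical per-feature adversarial perturbation budget; under
    feature-wise disturbances, robust least squares is known to reduce to
    Lasso with an enlarged l1 weight.  Solved with ISTA (proximal gradient).
    """
    n, d = X.shape
    w = np.zeros(d)
    # Step size 1/L, where L = ||X||_2^2 is the Lipschitz constant of the
    # gradient of the smooth term 0.5 * ||X w - y||^2.
    step = 1.0 / np.linalg.norm(X, 2) ** 2
    for _ in range(n_iter):
        grad = X.T @ (X @ w - y)                       # gradient of smooth part
        w = soft_threshold(w - step * grad, step * (lam + eps))
    return w

# Toy usage: a sparse ground truth with 3 active features out of 20.
rng = np.random.default_rng(0)
X = rng.standard_normal((100, 20))
w_true = np.zeros(20)
w_true[:3] = [2.0, -1.5, 1.0]
y = X @ w_true + 0.01 * rng.standard_normal(100)
w_hat = robust_lasso(X, y, lam=1.0, eps=0.5)
```

The enlarged penalty `lam + eps` trades a small amount of estimation bias for stability against bounded perturbations of the features, which is the same accuracy/robustness trade-off the paper benchmarks.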
Pages: 1828 - 1834 (7 pages)
Related Papers
50 in total
  • [31] Diversified Adversarial Attacks based on Conjugate Gradient Method
    Yamamura, Keiichiro
    Sato, Haruiki
    Tateiwa, Nariaki
    Hata, Nozomi
    Mitsutake, Toru
    Oe, Issa
    Ishikura, Hiroki
    Fujisawa, Katsuki
    INTERNATIONAL CONFERENCE ON MACHINE LEARNING, VOL 162, 2022,
  • [32] Protecting JPEG Images Against Adversarial Attacks
    Prakash, Aaditya
    Moran, Nick
    Garber, Solomon
    DiLillo, Antonella
    Storer, James
    2018 DATA COMPRESSION CONFERENCE (DCC 2018), 2018, : 137 - 146
  • [33] On the robustness of skeleton detection against adversarial attacks
    Bai, Xiuxiu
    Yang, Ming
    Liu, Zhe
    NEURAL NETWORKS, 2020, 132 : 416 - 427
  • [34] ADVERSARIAL ATTACKS AGAINST AUDIO SURVEILLANCE SYSTEMS
    Ntalampiras, Stavros
2022 30TH EUROPEAN SIGNAL PROCESSING CONFERENCE (EUSIPCO 2022), 2022, : 284 - 288
  • [35] A Defense Method Against Facial Adversarial Attacks
    Sadu, Chiranjeevi
    Das, Pradip K.
    2021 IEEE REGION 10 CONFERENCE (TENCON 2021), 2021, : 459 - 463
  • [36] On the Defense of Spoofing Countermeasures Against Adversarial Attacks
    Nguyen-Vu, Long
    Doan, Thien-Phuc
    Bui, Mai
    Hong, Kihun
    Jung, Souhwan
    IEEE ACCESS, 2023, 11 : 94563 - 94574
  • [37] Adversarial Sampling Attacks Against Phishing Detection
    Shirazi, Hossein
    Bezawada, Bruhadeshwar
    Ray, Indrakshi
    Anderson, Charles
    DATA AND APPLICATIONS SECURITY AND PRIVACY XXXIII, 2019, 11559 : 83 - 101
  • [38] Stochastic Computing as a Defence Against Adversarial Attacks
    Neugebauer, Florian
    Vekariya, Vivek
    Polian, Ilia
    Hayes, John P.
    2023 53RD ANNUAL IEEE/IFIP INTERNATIONAL CONFERENCE ON DEPENDABLE SYSTEMS AND NETWORKS WORKSHOPS, DSN-W, 2023, : 191 - 194
  • [40] Defense against Adversarial Attacks with an Induced Class
    Xu, Zhi
    Wang, Jun
    Pu, Jian
    2021 INTERNATIONAL JOINT CONFERENCE ON NEURAL NETWORKS (IJCNN), 2021,