Fine Tuning Lasso in an Adversarial Environment Against Gradient Attacks

Cited by: 0
Authors
Ditzler, Gregory [1 ]
Prater, Ashley [2 ]
Affiliations
[1] Univ Arizona, Dept Elect & Comp Engn, Tucson, AZ 85721 USA
[2] US Air Force, Res Lab, Informat Directorate, Rome, NY 13441 USA
Keywords
Feature Selection; Adversarial Machine Learning; Supervised Learning
DOI
Not available
CLC Classification Number
TP18 [Theory of Artificial Intelligence]
Subject Classification Codes
081104; 0812; 0835; 1405
Abstract
Machine learning and data mining algorithms typically assume that the training and testing data are sampled from the same fixed probability distribution; however, this assumption is often violated in practice. The field of domain adaptation addresses settings where the assumption of a fixed distribution across the two domains is violated, yet the difference between the two domains (training/source and testing/target) may not be known a priori. There has been a recent thrust in addressing the problem of learning in the presence of an adversary, which we formulate as a problem of domain adaptation to build a more robust classifier. This is because the overall security of classifiers and their preprocessing stages has been called into question by recent findings on adversaries in learning settings. Adversarial training (and testing) data pose a serious threat in scenarios where an attacker has the opportunity to "poison" the training data or "evade" detection on the testing data in order to achieve an outcome that is not in the best interest of the classifier. Recent work has begun to show the impact of adversarial data on several classifiers; however, the adversary's impact on data preprocessing (e.g., dimensionality reduction or feature selection) has been widely ignored in the recent resurgence of adversarial learning research. Furthermore, variable selection, a vital component of any data analysis, has been shown to be particularly susceptible to an attacker with knowledge of the task. In this work, we explore avenues for learning resilient classification models in the adversarial setting by considering the effects of adversarial data and mitigating them through optimization. Our model forms a single convex optimization problem that uses the labeled training data from the source domain together with known weaknesses of the model as an adversarial component. We benchmark the proposed approach on synthetic data and show the trade-off between classification accuracy and skew-insensitive statistics.
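The abstract describes a single convex program that couples a Lasso-style objective with an adversarial component derived from known gradient-attack weaknesses of the model, but it does not spell out the formulation. The sketch below is therefore only an assumed, minimal construction consistent with that description, not the authors' method: for a linear classifier, an l-infinity-bounded gradient attacker with budget eps can shrink every margin by at most eps times the l1 norm of the weights, and charging that worst case inside the loss keeps the whole problem convex. The function name adversarial_lasso and the parameters lam and eps are illustrative choices, not taken from the paper.

    # Minimal sketch (assumed formulation, not the authors' exact objective):
    # L1-regularized logistic loss evaluated on worst-case perturbed margins.
    # Against a linear model w, an l_inf attacker with budget eps moves each
    # feature of x_i by -eps * y_i * sign(w_j), shrinking the margin
    # y_i * (w^T x_i + b) by exactly eps * ||w||_1, so the loss stays convex.
    import cvxpy as cp
    import numpy as np

    def adversarial_lasso(X, y, lam=0.1, eps=0.05):
        """X: (n, d) features; y: (n,) labels in {-1, +1};
        lam: l1 (sparsity) weight; eps: assumed attack budget."""
        n, d = X.shape
        w = cp.Variable(d)
        b = cp.Variable()
        # Clean margins, each reduced by the worst-case attack damage.
        margins = cp.multiply(y, X @ w + b) - eps * cp.norm1(w)
        loss = cp.sum(cp.logistic(-margins)) / n  # mean logistic loss
        prob = cp.Problem(cp.Minimize(loss + lam * cp.norm1(w)))
        prob.solve()
        return w.value, b.value

    # Toy usage on synthetic data, mirroring the paper's synthetic benchmarks.
    rng = np.random.default_rng(0)
    X = rng.normal(size=(200, 20))
    w_true = np.zeros(20)
    w_true[:3] = 1.0  # only the first 3 features carry signal
    y = np.sign(X @ w_true + 0.1 * rng.normal(size=200))
    w_hat, b_hat = adversarial_lasso(X, y, lam=0.05, eps=0.1)

A gradient attacker against the fitted model (an FGSM-style step x' = x - eps * y * sign(w_hat)) realizes exactly the worst case penalized above, which is one reading of using "known weaknesses of the model" as the adversarial component; setting eps = 0 recovers a plain L1-regularized logistic regression.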
Pages: 1828-1834 (7 pages)