Fine Tuning Lasso in an Adversarial Environment Against Gradient Attacks

Cited by: 0
Authors:
Ditzler, Gregory [1 ]
Prater, Ashley [2 ]
Affiliations:
[1] Univ Arizona, Dept Elect & Comp Engn, Tucson, AZ 85721 USA
[2] US Air Force, Res Lab, Informat Directorate, Rome, NY 13441 USA
Keywords:
Feature Selection; Adversarial Machine Learning; Supervised Learning
DOI:
Not available
Chinese Library Classification:
TP18 [Theory of Artificial Intelligence]
Subject Classification Codes:
081104; 0812; 0835; 1405
Abstract:
Machine learning and data mining algorithms typically assume that the training and testing data are sampled from the same fixed probability distribution; however, this assumption is often violated in practice. The field of domain adaptation addresses settings where the assumption of a single fixed distribution across the two domains fails, yet the difference between the two domains (training/source and testing/target) may not be known a priori. There has been a recent thrust toward learning in the presence of an adversary, which we formulate as a domain adaptation problem in order to build a more robust classifier. This framing is motivated by recent findings that call into question the security of classifiers and of their preprocessing stages in adversarial learning settings. Adversarial training and testing data pose a serious threat whenever an attacker has the opportunity to "poison" the training data or "evade" detection on the testing data in pursuit of outcomes that are not in the best interest of the classifier. Recent work has begun to quantify the impact of adversarial data on several classifiers; however, the adversary's impact on data preprocessing (i.e., dimensionality reduction or feature selection) has largely been ignored in the resurgence of adversarial learning research. Furthermore, variable selection, a vital component of any data analysis, has been shown to be particularly susceptible to an attacker with knowledge of the task. In this work, we explore avenues for learning resilient classification models in the adversarial setting by modeling the effects of adversarial data and mitigating them through optimization. Our model forms a single convex optimization problem that combines the labeled training data from the source domain with an adversarial component derived from known weaknesses of the model. We benchmark the proposed approach on synthetic data and show the trade-off between classification accuracy and skew-insensitive statistics.
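
The abstract does not spell out the optimization, so the following Python sketch only illustrates the general idea under stated assumptions: a squared-error Lasso objective, a fast-gradient-style (sign-of-gradient) evasion perturbation standing in for the "known weaknesses of the model," and robust training that augments the clean data with adversarially perturbed copies. Unlike the paper's single convex program, this is a simple alternating heuristic; the function names (fgsm_perturb, adversarial_lasso) and all parameter values are hypothetical.

import numpy as np

def soft_threshold(z, t):
    # Proximal operator of the L1 norm: shrink each coordinate toward zero.
    return np.sign(z) * np.maximum(np.abs(z) - t, 0.0)

def fgsm_perturb(X, y, w, eps):
    # Fast-gradient-style evasion on the inputs: move each sample a step of
    # size eps along the sign of d(loss)/dX, inflating the squared error.
    residual = (X @ w - y)[:, None]   # per-sample residual, shape (n, 1)
    grad_X = residual * w[None, :]    # d(0.5*(x.w - y)^2)/dx = (x.w - y) * w
    return X + eps * np.sign(grad_X)

def adversarial_lasso(X, y, lam=0.1, eps=0.05, lr=0.01, iters=500):
    # Proximal-gradient Lasso trained on clean data augmented with
    # adversarially perturbed copies (a robust-training heuristic, not the
    # paper's exact convex formulation).
    n, d = X.shape
    w = np.zeros(d)
    for _ in range(iters):
        X_adv = fgsm_perturb(X, y, w, eps)   # inner "attack" step
        X_aug = np.vstack([X, X_adv])        # clean + adversarial rows
        y_aug = np.concatenate([y, y])
        grad = X_aug.T @ (X_aug @ w - y_aug) / X_aug.shape[0]
        w = soft_threshold(w - lr * grad, lr * lam)  # gradient step + L1 prox
    return w

# Toy usage: sparse ground truth, check which features survive selection.
rng = np.random.default_rng(0)
X = rng.standard_normal((200, 20))
w_true = np.zeros(20)
w_true[:3] = [2.0, -1.5, 1.0]
y = X @ w_true + 0.1 * rng.standard_normal(200)
w_hat = adversarial_lasso(X, y)
print("selected features:", np.flatnonzero(np.abs(w_hat) > 1e-3))

In this sketch, eps controls how pessimistic the defense is: eps = 0 recovers the ordinary Lasso, while larger values trade clean-data accuracy for robustness, mirroring the accuracy versus skew-insensitive-statistic trade-off the abstract reports.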
Pages: 1828-1834 (7 pages)