Fine Tuning Lasso in an Adversarial Environment Against Gradient Attacks

Cited by: 0
Authors
Ditzler, Gregory [1 ]
Prater, Ashley [2 ]
Affiliations
[1] Univ Arizona, Dept Elect & Comp Engn, Tucson, AZ 85721 USA
[2] US Air Force, Res Lab, Informat Directorate, Rome, NY 13441 USA
Keywords
Feature Selection; Adversarial Machine Learning; Supervised Learning
DOI
Not available
CLC number
TP18 [Artificial Intelligence Theory]
Subject classification codes
081104; 0812; 0835; 1405
Abstract
Machine learning and data mining algorithms typically assume that the training and testing data are sampled from the same fixed probability distribution; however, this assumption is often violated in practice. The field of domain adaptation addresses situations where this assumption of a shared distribution between the two domains is violated, yet the difference between the two domains (training/source and testing/target) may not be known a priori. There has been a recent thrust toward addressing the problem of learning in the presence of an adversary, which we formulate as a domain adaptation problem in order to build a more robust classifier. This is because the overall security of classifiers and their preprocessing stages has been called into question by recent findings of adversaries in learning settings. Adversarial training (and testing) data pose a serious threat in scenarios where an attacker has the opportunity to "poison" the training data or "evade" detection on the testing data set(s) in order to achieve an outcome that is not in the best interest of the classifier. Recent work has begun to show the impact of adversarial data on several classifiers; however, the adversary's impact on the preprocessing of data (i.e., dimensionality reduction or feature selection) has been widely ignored in the recent resurgence of adversarial learning research. Furthermore, variable selection, a vital component of any data analysis, has been shown to be particularly susceptible to an attacker who has knowledge of the task. In this work, we explore avenues for learning resilient classification models in the adversarial learning setting by considering the effects of adversarial data and mitigating them through optimization. Our model forms a single convex optimization problem that uses the labeled training data from the source domain together with the known weaknesses of the model as an adversarial component. We benchmark the proposed approach on synthetic data and show the trade-off between classification accuracy and skew-insensitive statistics.
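The abstract describes a single convex problem that couples the Lasso objective with an adversarial component built from the model's known weaknesses. As a loose, minimal sketch only, and not the authors' formulation, the Python fragment below approximates the idea by simulating a gradient (FGSM-style) evasion attack against a fitted Lasso and refitting on the clean plus attacked data; the synthetic data, the perturbation budget eps, and the regularization weight alpha are all illustrative assumptions.

# A minimal sketch, NOT the paper's single convex formulation: the
# "adversarial component" is approximated here by augmenting the source
# data with FGSM-style gradient perturbations and refitting a standard
# Lasso. Synthetic data, eps, and alpha are illustrative assumptions.
import numpy as np
from sklearn.linear_model import Lasso

rng = np.random.default_rng(0)

# Synthetic source-domain data: 200 samples, 20 features, 5 informative.
n, d = 200, 20
w_true = np.zeros(d)
w_true[:5] = rng.normal(size=5)
X = rng.normal(size=(n, d))
y = X @ w_true + 0.1 * rng.normal(size=n)

def fgsm_perturb(X, y, w, eps):
    """Move each sample one eps-step along the sign of the gradient of
    the squared loss w.r.t. the inputs (a gradient evasion attack)."""
    residual = X @ w - y                     # shape (n,)
    grad_X = residual[:, None] * w[None, :]  # d/dX of 0.5 * (Xw - y)^2
    return X + eps * np.sign(grad_X)

# Fit a plain Lasso, attack it, then refit on clean + attacked data.
clean = Lasso(alpha=0.05).fit(X, y)
X_adv = fgsm_perturb(X, y, clean.coef_, eps=0.2)
robust = Lasso(alpha=0.05).fit(np.vstack([X, X_adv]),
                               np.concatenate([y, y]))

print("clean support: ", np.flatnonzero(clean.coef_))
print("robust support:", np.flatnonzero(robust.coef_))

Comparing the two printed supports shows how the selected features shift once gradient-perturbed samples enter the training set, which is the kind of selection instability under attack that the abstract is concerned with.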
Pages: 1828-1834 (7 pages)
Related papers (showing 10 of 50)
  • [1] Defense against Adversarial Attacks on Hybrid Speech Recognition using Joint Adversarial Fine-tuning with Denoiser
    Joshi, Sonal
    Kataria, Saurabh
    Shao, Yiwen
    Zelasko, Piotr
    Villalba, Jesus
    Khudanpur, Sanjeev
    Dehak, Najim
    INTERSPEECH 2022, 2022, : 5035 - 5039
  • [2] Transferable adversarial attacks against face recognition using surrogate model fine-tuning
    Khedr, Yasmeen M.
    Liu, Xin
    Lu, Haobo
    He, Kun
    APPLIED SOFT COMPUTING, 2025, 174
  • [3] Robustness Against Gradient based Attacks through Cost Effective Network Fine-Tuning
    Agarwal, Akshay
    Ratha, Nalini
    Singh, Richa
    Vatsa, Mayank
    2023 IEEE/CVF CONFERENCE ON COMPUTER VISION AND PATTERN RECOGNITION WORKSHOPS, CVPRW, 2023, : 28 - 37
  • [4] Gradient-based Adversarial Attacks against Text Transformers
    Guo, Chuan
    Sablayrolles, Alexandre
    Jegou, Herve
    Kiela, Douwe
    2021 CONFERENCE ON EMPIRICAL METHODS IN NATURAL LANGUAGE PROCESSING (EMNLP 2021), 2021, : 5747 - 5757
  • [5] A Survey of Attacks Against Twitter Spam Detectors in an Adversarial Environment
    Imam, Niddal H.
    Vassilakis, Vassilios G.
    ROBOTICS, 2019, 8 (03)
  • [6] Defending Against Local Adversarial Attacks through Empirical Gradient Optimization
    Sun, Boyang
    Ma, Xiaoxuan
    Wang, Hengyou
    TEHNICKI VJESNIK-TECHNICAL GAZETTE, 2023, 30 (06): : 1888 - 1898
  • [7] Gradient-Based Adversarial Attacks Against Malware Detection by Instruction Replacement
    Zhao, Jiapeng
    Liu, Zhongjin
    Zhang, Xiaoling
    Huang, Jintao
    Shi, Zhiqiang
    Lv, Shichao
    Li, Hong
    Sun, Limin
    WIRELESS ALGORITHMS, SYSTEMS, AND APPLICATIONS (WASA 2022), PT I, 2022, 13471 : 603 - 612
  • [8] Boosting adversarial attacks with transformed gradient
    He, Zhengyun
    Duan, Yexin
    Zhang, Wu
    Zou, Junhua
    He, Zhengfang
    Wang, Yunyun
    Pan, Zhisong
    COMPUTERS & SECURITY, 2022, 118
  • [9] Towards Understanding the Fragility of Multilingual LLMs against Fine-Tuning Attacks
    Meta, United States
    arXiv
  • [10] Text Adversarial Purification as Defense against Adversarial Attacks
    Li, Linyang
    Song, Demin
    Qiu, Xipeng
    PROCEEDINGS OF THE 61ST ANNUAL MEETING OF THE ASSOCIATION FOR COMPUTATIONAL LINGUISTICS, ACL 2023, VOL 1, 2023, : 338 - 350