Defending non-Bayesian learning against adversarial attacks

Cited by: 0
Authors: Lili Su, Nitin H. Vaidya
Affiliations: [1] Massachusetts Institute of Technology; [2] University of Illinois at Urbana-Champaign
Source: Distributed Computing | 2019, Vol. 32
Keywords: Distributed learning; Byzantine agreement; Fault-tolerance; Adversary attacks; Security
DOI: not available
Abstract
This paper addresses the problem of non-Bayesian learning over multi-agent networks, where agents repeatedly collect partially informative observations about an unknown state of the world and try to collaboratively learn the true state out of m alternatives. We focus on the impact of adversarial agents on the performance of consensus-based non-Bayesian learning, in which non-faulty agents combine local learning updates with consensus primitives. In particular, we consider the scenario where an unknown subset of agents suffer Byzantine faults; agents suffering Byzantine faults behave arbitrarily. We propose two learning rules. In both rules, each non-faulty agent keeps a local variable that is a stochastic vector over the m possible states. The entries of this stochastic vector can be viewed as the scores the agent assigns to the corresponding states. We say a non-faulty agent learns the underlying truth if, asymptotically, it assigns a score of one to the true state and zero to every wrong state.

In our first update rule, each agent updates its local score vector as (up to normalization) the product of (1) the likelihood of the cumulative private signals and (2) the weighted geometric average of the score vectors of its incoming neighbors and itself. Under reasonable assumptions on the underlying network structure and the global identifiability of the network, we show that all non-faulty agents asymptotically learn the true state almost surely.

We also propose a modified variant of our first learning rule whose complexity per iteration per agent is O(m^2 n log n), where n is the number of agents in the network. In addition, we show that this modified learning rule works under a less restrictive network identifiability condition.
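For intuition, below is a minimal Python sketch of the "likelihood times weighted geometric average" update described in the abstract. It makes simplifying assumptions not in the source: it processes one private signal per round rather than the cumulative likelihood used by the paper's first rule, and it omits the Byzantine filtering of neighbor vectors that the fault-tolerant rules require. All function and variable names are illustrative.

```python
import numpy as np

def update_scores(own_scores, neighbor_scores, weights, signal_likelihoods):
    """One illustrative round of consensus-based non-Bayesian learning.

    own_scores         : (m,) stochastic vector held by this agent
    neighbor_scores    : list of (m,) stochastic vectors from incoming neighbors
    weights            : convex weights over [self] + neighbors (nonnegative, sum to 1)
    signal_likelihoods : (m,) likelihood of this round's private signal under each state

    Score entries are assumed strictly positive so the geometric average
    (computed in log space) is well defined.
    """
    vectors = [own_scores] + list(neighbor_scores)
    # Entrywise weighted geometric average of the score vectors.
    log_geo_avg = sum(w * np.log(v) for w, v in zip(weights, vectors))
    # Multiply by the signal likelihood, then renormalize to a stochastic vector.
    unnormalized = signal_likelihoods * np.exp(log_geo_avg)
    return unnormalized / unnormalized.sum()

# Toy usage: m = 3 states, two neighbors, a signal mildly favoring state 0.
own = np.array([1 / 3, 1 / 3, 1 / 3])
nbrs = [np.array([0.5, 0.3, 0.2]), np.array([0.4, 0.4, 0.2])]
w = [0.4, 0.3, 0.3]
lik = np.array([0.6, 0.3, 0.1])
print(update_scores(own, nbrs, w, lik))
```

Repeating this update concentrates mass on states consistent with the observed signals; the paper's contribution is showing that, with suitable filtering and network conditions, this convergence to the true state survives Byzantine agents.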
Pages: 277–289 (12 pages)
Related papers (50 in total)
  • [1] Defending non-Bayesian learning against adversarial attacks
    Su, Lili
    Vaidya, Nitin H.
    [J]. DISTRIBUTED COMPUTING, 2019, 32 (04) : 277 - 289
  • [2] Defending Deep Learning Models Against Adversarial Attacks
    Mani, Nag
    Moh, Melody
    Moh, Teng-Sheng
    [J]. INTERNATIONAL JOURNAL OF SOFTWARE SCIENCE AND COMPUTATIONAL INTELLIGENCE-IJSSCI, 2021, 13 (01): : 72 - 89
  • [3] Defending against Adversarial Attacks in Federated Learning on Metric Learning Model
    Gu, Zhipin
    Shi, Jiangyong
    Yang, Yuexiang
    He, Liangzhong
    [J]. 2023 IEEE 22ND INTERNATIONAL CONFERENCE ON TRUST, SECURITY AND PRIVACY IN COMPUTING AND COMMUNICATIONS, TRUSTCOM, BIGDATASE, CSE, EUC, ISCI 2023, 2024, : 197 - 206
  • [4] Non-Bayesian Learning
    Epstein, Larry G.
    Noor, Jawwad
    Sandroni, Alvaro
    [J]. B.E. JOURNAL OF THEORETICAL ECONOMICS, 2010, 10 (01)
  • [5] Defending against adversarial attacks by randomized diversification
    Taran, Olga
    Rezaeifar, Shideh
    Holotyak, Taras
    Voloshynovskiy, Slava
    [J]. 2019 IEEE/CVF CONFERENCE ON COMPUTER VISION AND PATTERN RECOGNITION (CVPR 2019), 2019, : 11218 - 11225
  • [6] Defending Distributed Systems Against Adversarial Attacks
    Su, Lili
    [J]. Performance Evaluation Review, 2020, 47 (03): : 24 - 27
  • [7] Defending against Membership Inference Attacks in Federated learning via Adversarial Example
    Xie, Yuanyuan
    Chen, Bing
    Zhang, Jiale
    Wu, Di
    [J]. 2021 17TH INTERNATIONAL CONFERENCE ON MOBILITY, SENSING AND NETWORKING (MSN 2021), 2021, : 153 - 160
  • [8] Non-Bayesian social learning
    Jadbabaie, Ali
    Molavi, Pooya
    Sandroni, Alvaro
    Tahbaz-Salehi, Alireza
    [J]. GAMES AND ECONOMIC BEHAVIOR, 2012, 76 (01) : 210 - 225
  • [9] ShieldNets: Defending Against Adversarial Attacks Using Probabilistic Adversarial Robustness
    Theagarajan, Rajkumar
    Chen, Ming
    Bhanu, Bir
    Zhang, Jing
    [J]. 2019 IEEE/CVF CONFERENCE ON COMPUTER VISION AND PATTERN RECOGNITION (CVPR 2019), 2019, : 6981 - 6989
  • [10] Defending Against Adversarial Attacks in Deep Neural Networks
    You, Suya
    Kuo, C-C Jay
    [J]. ARTIFICIAL INTELLIGENCE AND MACHINE LEARNING FOR MULTI-DOMAIN OPERATIONS APPLICATIONS, 2019, 11006