Distributed Stochastic Gradient Descent with Cost-Sensitive and Strategic Agents

Cited by: 1
Authors
Akbay, Abdullah Basar [1 ]
Tepedelenlioglu, Cihan [1 ]
Affiliations
[1] Arizona State Univ, Sch Elect Comp & Energy Engn, Tempe, AZ 85281 USA
Keywords
DOI
10.1109/IEEECONF56349.2022.10051928
CLC Number
TP18 [Artificial Intelligence Theory]
Subject Classification Codes
081104; 0812; 0835; 1405
Abstract
This study considers a federated learning setup in which cost-sensitive and strategic agents train a learning model with a server. During each round, each agent samples a minibatch of training data and sends his gradient update. The agent incurs a cost, increasing in his chosen minibatch size, associated with data collection, gradient computation, and communication. The agents are free to choose their minibatch sizes and may even opt out of training. To reduce his cost, an agent may shrink his minibatch, which in turn raises the noise level of his gradient update. The server can offer rewards to compensate the agents for their costs and to incentivize their participation, but she cannot validate the agents' true minibatch sizes. To tackle this challenge, the proposed reward mechanism evaluates the quality of each agent's gradient according to its distance from a reference constructed from the gradients provided by the other agents. It is shown that the proposed reward mechanism admits a cooperative Nash equilibrium in which the agents set their minibatch sizes according to the server's requests.
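The reward mechanism sketched in the abstract can be illustrated in code: each agent's reported gradient is scored by its distance to a leave-one-out reference averaged from the other agents' gradients, with closer gradients earning larger rewards. A minimal Python sketch follows; the function name reward_scores, the exponential scoring rule, and the scale parameter are illustrative assumptions, not the paper's exact mechanism.

    import numpy as np

    def reward_scores(gradients, scale=1.0):
        """Score each agent by distance to a leave-one-out reference.

        gradients: (n_agents, dim) array of reported gradient updates.
        Returns one score per agent; shorter distance -> larger reward.
        """
        n = gradients.shape[0]
        total = gradients.sum(axis=0)
        scores = np.empty(n)
        for i in range(n):
            # Reference for agent i: mean of the other agents' gradients.
            reference = (total - gradients[i]) / (n - 1)
            distance = np.linalg.norm(gradients[i] - reference)
            # Assumed scoring rule: reward decays exponentially with distance.
            scores[i] = np.exp(-scale * distance)
        return scores

    # Example: 5 agents reporting 3-dimensional gradients.
    rng = np.random.default_rng(0)
    grads = rng.normal(size=(5, 3))
    print(reward_scores(grads))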
Pages: 1238-1242
Page count: 5
Related Papers
50 records in total
  • [21] Fast Convergence for Stochastic and Distributed Gradient Descent in the Interpolation Limit
    Mitra, Partha P.
2018 26TH EUROPEAN SIGNAL PROCESSING CONFERENCE (EUSIPCO), 2018: 1890-1894
  • [22] CONTROLLING STOCHASTIC GRADIENT DESCENT USING STOCHASTIC APPROXIMATION FOR ROBUST DISTRIBUTED OPTIMIZATION
    Jain, Adit
    Krishnamurthy, Vikram
NUMERICAL ALGEBRA, CONTROL AND OPTIMIZATION, 2025, 15 (01): 173-195
  • [23] Cost-Sensitive Boosting
    Masnadi-Shirazi, Hamed
    Vasconcelos, Nuno
IEEE TRANSACTIONS ON PATTERN ANALYSIS AND MACHINE INTELLIGENCE, 2011, 33 (02): 294-309
  • [24] Cost-Sensitive Learning
Zhou, Zhi-Hua
MODELING DECISIONS FOR ARTIFICIAL INTELLIGENCE, MDAI 2011, 2011, 6820: 17-18
  • [25] Distributed stochastic gradient descent for link prediction in signed social networks
    Zhang, Han
    Wu, Gang
    Ling, Qing
    EURASIP JOURNAL ON ADVANCES IN SIGNAL PROCESSING, 2019, 2019 (1)
  • [26] Distributed Stochastic Gradient Descent: Nonconvexity, Nonsmoothness, and Convergence to Local Minima
    Swenson, Brian
    Murray, Ryan
    Poor, H. Vincent
    Kar, Soummya
    JOURNAL OF MACHINE LEARNING RESEARCH, 2022, 23
  • [27] Privacy-Preserving Stochastic Gradient Descent with Multiple Distributed Trainers
    Le Trieu Phong
NETWORK AND SYSTEM SECURITY, 2017, 10394: 510-518
  • [28] Adaptive Distributed Stochastic Gradient Descent for Minimizing Delay in the Presence of Stragglers
    Hanna, Serge Kas
    Bitar, Rawad
    Parag, Parimal
    Dasari, Venkat
    El Rouayheb, Salim
2020 IEEE INTERNATIONAL CONFERENCE ON ACOUSTICS, SPEECH, AND SIGNAL PROCESSING, 2020: 4262-4266
  • [29] ColumnSGD: A Column-oriented Framework for Distributed Stochastic Gradient Descent
    Zhang, Zhipeng
    Wu, Wentao
    Jiang, Jiawei
    Yu, Lele
    Cui, Bin
    Zhang, Ce
2020 IEEE 36TH INTERNATIONAL CONFERENCE ON DATA ENGINEERING (ICDE 2020), 2020: 1513-1524
  • [30] A DAG Model of Synchronous Stochastic Gradient Descent in Distributed Deep Learning
    Shi, Shaohuai
    Wang, Qiang
    Chu, Xiaowen
    Li, Bo
2018 IEEE 24TH INTERNATIONAL CONFERENCE ON PARALLEL AND DISTRIBUTED SYSTEMS (ICPADS 2018), 2018: 425-432