Lower Bounds on the Rate of Learning in Social Networks

Cited by: 15
Authors:
Lobel, Ilan [1 ]
Acemoglu, Daron [1 ]
Dahleh, Munther [1 ]
Ozdaglar, Asuman [1 ]
Affiliation:
[1] MIT, Ctr Operat Res, Cambridge, MA 02139 USA
DOI: 10.1109/ACC.2009.5160660
Chinese Library Classification: TP [automation technology; computer technology]
Discipline code: 0812
Abstract
We study the rate of convergence of Bayesian learning in social networks. Each individual receives a signal about the underlying state of the world, observes a subset of past actions, and chooses one of two possible actions. Our previous work [1] established that when signals generate unbounded likelihood ratios, there will be asymptotic learning under mild conditions on the social network topology, in the sense that beliefs and decisions converge (in probability) to the correct beliefs and action. The question of the speed of learning has not been investigated, however. In this paper, we provide estimates of the speed of learning (the rate at which the probability of the incorrect action converges to zero). We focus on a special class of topologies in which individuals observe either a random action from the past or the most recent action. We show that convergence to the correct action is faster than a polynomial rate when individuals observe the most recent action and is at a logarithmic rate when they sample a random action from the past. This suggests that communication in social networks that leads to repeated sampling of the same individuals leads to slower aggregation of information.
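The contrast between the two observation topologies can be illustrated with a toy Monte Carlo simulation. This is a sketch under simplifying assumptions, not the paper's fully Bayesian model: agents here use a crude heuristic (a fixed log-odds bonus `trust` toward the one observed action) rather than an exact Bayesian update, and the Gaussian signals merely ensure the unbounded-likelihood-ratio property the paper's learning results require. The names `run_chain`, `error_rate`, and `trust` are illustrative inventions.

```python
import random

def run_chain(n_agents, topology, rng, trust=2.0):
    """One realization of a toy sequential-learning chain.

    The true state is fixed to 1. Agent n draws a private signal
    x_n ~ N(+1, 1) (vs. N(-1, 1) under the other state, so the
    log-likelihood ratio 2*x_n is unbounded) and observes ONE action:
      topology == 'recent' -> the immediately preceding agent's action,
      topology == 'random' -> a uniformly random earlier agent's action.
    Decision rule (heuristic, not the paper's Bayesian update): add a
    fixed log-odds bonus `trust` in the observed action's direction.
    """
    actions = []
    for n in range(n_agents):
        x = rng.gauss(1.0, 1.0)   # signal centered on the true state
        llr = 2.0 * x             # log-likelihood ratio of N(+1,1) vs N(-1,1)
        if actions:
            if topology == 'recent':
                observed = actions[-1]
            else:
                observed = actions[rng.randrange(len(actions))]
            llr += trust if observed == 1 else -trust
        actions.append(1 if llr > 0 else 0)
    return actions

def error_rate(n_agents, topology, trials=200, seed=0):
    """Monte Carlo estimate of P(last agent chooses the wrong action)."""
    rng = random.Random(seed)
    wrong = sum(run_chain(n_agents, topology, rng)[-1] == 0
                for _ in range(trials))
    return wrong / trials

for topo in ('recent', 'random'):
    print(topo, error_rate(500, topo))
```

Comparing `error_rate` across growing `n_agents` for the two topologies gives a rough numerical analogue of the paper's question; the heuristic rule does not reproduce the exact polynomial-vs-logarithmic rates, which require the full Bayesian analysis.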
Pages: 2825+ (2 pages)