Social Learning in Non-Stationary Environments

Cited by: 0
Authors
Boursier, Etienne [1 ,5 ]
Perchet, Vianney [2 ,3 ]
Scarsini, Marco [4 ]
Affiliations
[1] Ecole Polytech Fed Lausanne, TML, Lausanne, Switzerland
[2] ENSAE Paris, CREST, Palaiseau, France
[3] CRITEO AI Lab, Palaiseau, France
[4] LUISS Univ, Rome, Italy
[5] ENS Paris Saclay, Ctr Borelli, Gif Sur Yvette, France
Keywords
Social Learning; Bayesian Estimation; Non-Stationary Environment; Change-Point Model;
DOI
Not available
Chinese Library Classification (CLC)
TP18 [Artificial Intelligence Theory];
Subject Classification Codes
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
Potential buyers of a product or service, before making their decisions, tend to read reviews written by previous consumers. We consider Bayesian consumers with heterogeneous preferences, who sequentially decide whether to buy an item of unknown quality, based on previous buyers' reviews. The quality is multi-dimensional and may occasionally vary over time; the reviews are also multi-dimensional. In the simple uni-dimensional and static setting, beliefs about the quality are known to converge to its true value. Our paper extends this result in several ways: first, a multi-dimensional quality is considered; second, rates of convergence are provided; third, a dynamical Markovian model with varying quality is studied. In this dynamical setting, the cost of learning is shown to be small.
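The abstract describes sequential Bayesian learning of an occasionally changing quality from buyers' reviews. Below is a minimal simulation sketch of a much simplified version of that setting, assuming a uni-dimensional binary quality, uniformly distributed private outside options, a two-state Markov change point with switching probability eps, and Bernoulli reviews whose positivity rate (p_pos_hi, p_pos_lo) depends on the quality. All of these choices and parameter values are illustrative assumptions, not the paper's model.

```python
import numpy as np

# Toy sketch (not the paper's exact model): uni-dimensional binary quality
# with a Markovian change point, a public Bayesian belief, and heterogeneous
# consumer preferences. All parameter values are illustrative assumptions.
rng = np.random.default_rng(0)

T = 2000          # number of consumers arriving sequentially
eps = 0.005       # per-period probability that the quality switches state
p_pos_hi = 0.9    # P(positive review | purchase, high quality)
p_pos_lo = 0.2    # P(positive review | purchase, low quality)

quality = 1       # latent quality state: 1 = high, 0 = low
belief = 0.5      # public belief P(quality = high | past reviews)

abs_errors = []
for t in range(T):
    # The quality occasionally changes (two-state Markov chain).
    if rng.random() < eps:
        quality = 1 - quality

    # Consumer t has a private outside option c ~ Uniform(0, 1) and buys
    # iff the item's expected value under the public belief exceeds it.
    c = rng.random()
    if belief >= c:
        # A buyer leaves a noisy review whose law depends on the quality.
        positive = rng.random() < (p_pos_hi if quality == 1 else p_pos_lo)
        like_hi = p_pos_hi if positive else 1.0 - p_pos_hi
        like_lo = p_pos_lo if positive else 1.0 - p_pos_lo
        # Bayes correction step from the observed review.
        belief = belief * like_hi / (belief * like_hi + (1 - belief) * like_lo)
    # A non-purchase only reveals c > belief, which is independent of the
    # quality in this toy model, so it triggers no correction step.

    # Prediction step: account for a possible change point before the next
    # consumer arrives.
    belief = (1 - eps) * belief + eps * (1 - belief)

    abs_errors.append(abs(belief - quality))

# The public belief tracks the occasionally switching quality; the average
# gap is a crude proxy for the cost of learning in this non-stationary toy.
print(f"mean |belief - quality| over {T} periods: {np.mean(abs_errors):.3f}")
```

In this sketch a non-purchase carries no information about the quality because the purchase decision depends only on the public belief and the private preference; the paper's multi-dimensional quality and multi-dimensional reviews are richer than this binary toy.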
Pages: 2
Related Papers
50 items in total
  • [1] Learning User Preferences in Non-Stationary Environments
    Huleihel, Wasim
    Pal, Soumyabrata
    Shayevitz, Ofer
    24TH INTERNATIONAL CONFERENCE ON ARTIFICIAL INTELLIGENCE AND STATISTICS (AISTATS), 2021, 130
  • [2] Towards Reinforcement Learning for Non-stationary Environments
    Dal Toe, Sebastian Gregory
    Tiddeman, Bernard
    Mac Parthalain, Neil
    ADVANCES IN COMPUTATIONAL INTELLIGENCE SYSTEMS, UKCI 2023, 2024, 1453 : 41 - 52
  • [3] Reinforcement learning algorithm for non-stationary environments
    Padakandla, Sindhu
    Prabuchandran, K. J.
    Bhatnagar, Shalabh
    APPLIED INTELLIGENCE, 2020, 50 (11) : 3590 - 3606
  • [4] Learning to negotiate optimally in non-stationary environments
    Narayanan, Vidya
    Jennings, Nicholas R.
    COOPERATIVE INFORMATION AGENTS X, PROCEEDINGS, 2006, 4149 : 288 - 300
  • [5] A robust incremental learning method for non-stationary environments
    Martinez-Rego, David
    Perez-Sanchez, Beatriz
    Fontenla-Romero, Oscar
    Alonso-Betanzos, Amparo
    NEUROCOMPUTING, 2011, 74 (11) : 1800 - 1808
  • [6] Learning Optimal Behavior in Environments with Non-stationary Observations
    Boone, Ilio
    Rens, Gavin
    ICAART: PROCEEDINGS OF THE 14TH INTERNATIONAL CONFERENCE ON AGENTS AND ARTIFICIAL INTELLIGENCE - VOL 3, 2022, : 729 - 736
  • [7] A heterogeneous online learning ensemble for non-stationary environments
    Idrees, Mobin M.
    Minku, Leandro L.
    Stahl, Frederic
    Badii, Atta
    KNOWLEDGE-BASED SYSTEMS, 2020, 188
  • [8] Reinforcement learning in episodic non-stationary Markovian environments
    Choi, SPM
    Zhang, NL
    Yeung, DY
    IC-AI '04 & MLMTA'04 , VOL 1 AND 2, PROCEEDINGS, 2004, : 752 - 758
  • [9] Learning spectrum opportunities in non-stationary radio environments
    Oksanen, Jan
    Koivunen, Visa
    2017 IEEE INTERNATIONAL CONFERENCE ON ACOUSTICS, SPEECH AND SIGNAL PROCESSING (ICASSP), 2017, : 2447 - 2451