Learning to Collaborate in Markov Decision Processes

Cited: 0
Authors
Radanovic, Goran [1 ]
Devidze, Rati [2 ]
Parkes, David C. [1 ]
Singla, Adish [2 ]
Affiliations
[1] Harvard University, Cambridge, MA 02138, USA
[2] Max Planck Institute for Software Systems (MPI-SWS), Saarbrücken, Germany
Keywords
PARITY
DOI
None available
Chinese Library Classification (CLC)
TP18 [Artificial Intelligence Theory]
Discipline Codes
081104; 0812; 0835; 1405
Abstract
We consider a two-agent MDP framework in which agents repeatedly solve a task in a collaborative setting. We study the problem of designing a learning algorithm for the first agent (A1) that facilitates successful collaboration even when the second agent (A2) is adapting its policy in an unknown way. The key challenge in our setting is that the first agent faces non-stationarity in rewards and transitions because of the adaptive behavior of the second agent. We design novel online learning algorithms for agent A1 whose regret decays as O(T^{max{1 - (3/7)α, 3/4}}) over T learning episodes, provided that the magnitude of the change in agent A2's policy between any two consecutive episodes is upper bounded by O(T^{-α}). Here, the parameter α is assumed to be strictly greater than 0, and we show that this assumption is necessary provided that the learning parity with noise problem is computationally hard. We further show that sublinear regret of agent A1 implies near-optimality of the agents' joint return for MDPs that manifest the properties of a smooth game.
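For readability, the two quantitative claims in the abstract can be written out side by side. The following is a minimal formalization, assuming a standard notion of cumulative episodic regret R(T) and some norm on policy differences, and writing π²_t for A2's policy in episode t; none of these symbols are defined in this record.

\[
\text{drift assumption:}\quad \bigl\lVert \pi^{2}_{t+1} - \pi^{2}_{t} \bigr\rVert \;=\; O\!\left(T^{-\alpha}\right), \qquad \alpha > 0,
\]
\[
\text{regret bound:}\quad R(T) \;=\; O\!\left(T^{\max\left\{\,1 - \tfrac{3}{7}\alpha,\ \tfrac{3}{4}\right\}}\right).
\]

Setting 1 - (3/7)α = 3/4 gives a crossover at α = 7/12: for α ≥ 7/12 the bound is O(T^{3/4}), while for 0 < α < 7/12 the exponent 1 - (3/7)α applies. In either case the exponent is strictly below 1, which is the sublinear-regret property behind the smooth-game argument above.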
Pages: 10
Related Papers (showing items 21-30 of 50)
  • [21] Chang, Hyeong Soo; Fu, Michael C.; Hu, Jiaqiao; Marcus, Steven I. Recursive learning automata approach to Markov decision processes. IEEE Transactions on Automatic Control, 2007, 52(7): 1349-1355.
  • [22] Bacon, Pierre-Luc; Balle, Borja; Precup, Doina. Learning and Planning with Timing Information in Markov Decision Processes. Uncertainty in Artificial Intelligence, 2015: 111-120.
  • [23] Abounadi, J.; Bertsekas, D.; Borkar, V. S. Learning algorithms for Markov decision processes with average cost. SIAM Journal on Control and Optimization, 2001, 40(3): 681-698.
  • [24] Girard, Justin; Emami, M. Reza. Concurrent Markov decision processes for robot team learning. Engineering Applications of Artificial Intelligence, 2015, 39: 223-234.
  • [25] Hanawal, Manjesh Kumar; Liu, Hao; Zhu, Henghui; Paschalidis, Ioannis Ch. Learning Policies for Markov Decision Processes From Data. IEEE Transactions on Automatic Control, 2019, 64(6): 2298-2309.
  • [26] Jaulmes, R.; Pineau, J.; Precup, D. Active learning in partially observable Markov decision processes. Machine Learning: ECML 2005, Proceedings, 2005, 3720: 601-608.
  • [27] Cao, X. R. A sensitivity view of Markov decision processes and reinforcement learning. Modeling, Control and Optimization of Complex Systems: In Honor of Professor Yu-Chi Ho, 2003, 14: 261-283.
  • [28] Ribeiro, Richardson; Favarim, Fabio; Barbosa, Marco A. C.; Koerich, Alessandro L.; Enembreck, Fabricio. Combining Learning Algorithms: An Approach to Markov Decision Processes. Enterprise Information Systems, ICEIS 2012, 2013, 141: 172-188.
  • [29] Hong, Yi-Te; Lu, Chi-Jen. Online Learning in Markov Decision Processes with Continuous Actions. Algorithmic Learning Theory, ALT 2015, 2015, 9355: 302-316.
  • [30] White, D. J. Markov decision processes. Journal of the Operational Research Society, 1995, 46(6).