Learning to Collaborate in Markov Decision Processes

Cited by: 0
Authors
Radanovic, Goran [1 ]
Devidze, Rati [2 ]
Parkes, David C. [1 ]
Singla, Adish [2 ]
Affiliations
[1] Harvard Univ, Cambridge, MA 02138 USA
[2] Max Planck Inst Software Syst MPI SWS, Saarbrucken, Germany
Keywords
PARITY;
DOI
None available
Chinese Library Classification
TP18 [Artificial Intelligence Theory];
Discipline Codes
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
We consider a two-agent MDP framework where agents repeatedly solve a task in a collaborative setting. We study the problem of designing a learning algorithm for the first agent (A1) that facilitates successful collaboration even in cases when the second agent (A2) is adapting its policy in an unknown way. The key challenge in our setting is that the first agent faces non-stationarity in rewards and transitions because of the adaptive behavior of the second agent. We design novel online learning algorithms for agent A1 whose regret decays as O(T^{max{1 - 3/7 · α, 3/4}}) over T learning episodes, provided that the magnitude of the change in agent A2's policy between any two consecutive episodes is upper bounded by O(T^{-α}). Here, the parameter α is assumed to be strictly greater than 0, and we show that this assumption is necessary, assuming that the learning parity with noise problem is computationally hard. We show that sublinear regret of agent A1 further implies near-optimality of the agents' joint return for MDPs that manifest the properties of a smooth game.
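The drift condition in the abstract, that A2's policy changes by at most O(T^{-α}) between consecutive episodes, can be made concrete with a small simulation. The sketch below is illustrative only, not the authors' algorithm; the function and parameter names (`simulate_a2_policies`, `max_drift`, `n_states`, `n_actions`) are hypothetical.

```python
import numpy as np

# Illustrative sketch: generate a sequence of slowly drifting policies for
# agent A2 and verify that consecutive-episode changes respect the
# O(T^{-alpha}) bound assumed in the abstract.

def simulate_a2_policies(T, n_states=4, n_actions=3, alpha=0.5, seed=0):
    """Generate T per-episode policies (rows = states, columns = action
    probabilities) whose per-state L1 change between consecutive episodes
    is at most T**(-alpha)."""
    rng = np.random.default_rng(seed)
    step = T ** (-alpha)   # per-episode drift budget, O(T^{-alpha})
    eps = step / 2.0       # mixing weight: L1 shift of a mixture is <= 2*eps
    policy = rng.dirichlet(np.ones(n_actions), size=n_states)
    policies = [policy]
    for _ in range(T - 1):
        target = rng.dirichlet(np.ones(n_actions), size=n_states)
        # Convex combination stays on the probability simplex and moves
        # each row by at most 2*eps = step in L1 distance.
        policy = (1.0 - eps) * policy + eps * target
        policies.append(policy)
    return policies

def max_drift(policies):
    """Largest L1 change in any state's action distribution between
    consecutive episodes."""
    return max(
        np.abs(nxt - cur).sum(axis=1).max()
        for cur, nxt in zip(policies, policies[1:])
    )
```

For example, with T = 200 and α = 0.5 the per-episode budget is T^{-α} ≈ 0.071, and `max_drift` stays below it by construction, since mixing toward any simplex point with weight ε moves each row by at most 2ε in L1.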
Pages: 10