Learning to Collaborate in Markov Decision Processes

Cited by: 0
Authors
Radanovic, Goran [1 ]
Devidze, Rati [2 ]
Parkes, David C. [1 ]
Singla, Adish [2 ]
Affiliations
[1] Harvard Univ, Cambridge, MA 02138 USA
[2] Max Planck Inst Software Syst MPI SWS, Saarbrucken, Germany
Keywords
PARITY
DOI
None available
Chinese Library Classification (CLC) number
TP18 [Theory of artificial intelligence]
Discipline codes
081104; 0812; 0835; 1405
Abstract
We consider a two-agent MDP framework in which agents repeatedly solve a task in a collaborative setting. We study the problem of designing a learning algorithm for the first agent ($A^1$) that facilitates successful collaboration even when the second agent ($A^2$) is adapting its policy in an unknown way. The key challenge in our setting is that the first agent faces non-stationarity in rewards and transitions because of the adaptive behavior of the second agent. We design novel online learning algorithms for agent $A^1$ whose regret decays as $O\!\left(T^{\max\{1-\frac{3}{7}\alpha,\ \frac{1}{4}\}}\right)$ over $T$ learning episodes, provided that the magnitude of the change in agent $A^2$'s policy between any two consecutive episodes is upper bounded by $O(T^{-\alpha})$. Here, the parameter $\alpha$ is assumed to be strictly greater than 0, and we show that this assumption is necessary provided that the learning parity with noise problem is computationally hard. We show that sublinear regret of agent $A^1$ further implies near-optimality of the agents' joint return for MDPs that exhibit the properties of a smooth game.
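To make the stated rate concrete, the bound from the abstract can be written out as below. This is only a hedged reading of the abstract, not material from the paper's proofs; the per-episode policy-change norm $\|\pi^{2}_{t+1}-\pi^{2}_{t}\|$ is illustrative notation introduced here for "the magnitude of the change in agent $A^2$'s policy."

% Regret bound as stated in the abstract (notation for the policy change is assumed):
\[
  R_{A^1}(T) \;=\; O\!\left(T^{\max\left\{1-\frac{3}{7}\alpha,\;\frac{1}{4}\right\}}\right)
  \qquad\text{whenever}\qquad
  \bigl\|\pi^{2}_{t+1}-\pi^{2}_{t}\bigr\| \;=\; O\!\left(T^{-\alpha}\right),\quad \alpha > 0.
\]
% The two branches of the exponent balance at 1 - (3/7)alpha = 1/4, i.e. alpha = 7/4:
%  - for alpha >= 7/4 (slowly adapting A^2), the regret is O(T^{1/4});
%  - for 0 < alpha < 7/4, the regret is O(T^{1 - 3*alpha/7}).
% In both cases the exponent stays strictly below 1, so the regret is sublinear
% and the average per-episode regret vanishes as T grows.

The requirement $\alpha > 0$ is exactly what keeps the exponent strictly below 1; this is the assumption the abstract shows to be necessary, conditional on the hardness of learning parity with noise.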
Pages: 10