Q-LEARNING

Cited by: 100
Authors
WATKINS, CJCH [1 ]
DAYAN, P [1 ]
Affiliation
[1] UNIV EDINBURGH,CTR COGNIT SCI,EDINBURGH EH8 9EH,SCOTLAND
Keywords
Q-LEARNING; REINFORCEMENT LEARNING; TEMPORAL DIFFERENCES; ASYNCHRONOUS DYNAMIC PROGRAMMING;
DOI
10.1023/A:1022676722315
CLC Number
TP18 [Artificial Intelligence Theory];
Subject Classification Codes
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
Q-learning (Watkins, 1989) is a simple way for agents to learn how to act optimally in controlled Markovian domains. It amounts to an incremental method for dynamic programming which imposes limited computational demands. It works by successively improving its evaluations of the quality of particular actions at particular states. This paper presents and proves in detail a convergence theorem for Q-learning based on that outlined in Watkins (1989). We show that Q-learning converges to the optimum action-values with probability 1 so long as all actions are repeatedly sampled in all states and the action-values are represented discretely. We also sketch extensions to the cases of non-discounted, but absorbing, Markov environments, and where many Q values can be changed each iteration, rather than just one.
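The incremental update the abstract describes can be sketched in a few lines. The toy chain MDP, constants, and function names below are illustrative assumptions, not taken from the paper; the behaviour policy is purely random so that every state-action pair is sampled repeatedly, matching the convergence condition stated above.

```python
import random

# Illustrative 5-state chain MDP (an assumption, not from the paper):
# action 1 moves right, action 0 moves left; reaching the rightmost
# state yields reward 1 and ends the episode.
N_STATES, ACTIONS, GAMMA, ALPHA = 5, (0, 1), 0.9, 0.5

def step(s, a):
    """One transition of the toy MDP: returns (next_state, reward, done)."""
    s2 = min(s + 1, N_STATES - 1) if a == 1 else max(s - 1, 0)
    if s2 == N_STATES - 1:
        return s2, 1.0, True
    return s2, 0.0, False

def q_learning(episodes=2000, seed=0):
    """Tabular one-step Q-learning with a uniformly random behaviour policy."""
    rng = random.Random(seed)
    Q = {(s, a): 0.0 for s in range(N_STATES) for a in ACTIONS}
    for _ in range(episodes):
        s, done = 0, False
        while not done:
            a = rng.choice(ACTIONS)  # random exploration: every (s, a) is
                                     # visited repeatedly, as convergence requires
            s2, r, done = step(s, a)
            # One-step target: r + gamma * max_b Q(s2, b), truncated at absorption.
            target = r if done else r + GAMMA * max(Q[(s2, b)] for b in ACTIONS)
            Q[(s, a)] += ALPHA * (target - Q[(s, a)])
            s = s2
    return Q
```

In this deterministic chain the discretely represented action-values settle near their optima, e.g. Q(s, right) approaches gamma^(3 - s) for s = 0..3, and the greedy policy (always move right) is optimal.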
Pages: 279-292
Page count: 14
Related Papers
50 entries in total
  • [41] Is Q-learning Provably Efficient?
    Jin, Chi
    Allen-Zhu, Zeyuan
    Bubeck, Sebastien
    Jordan, Michael I.
    ADVANCES IN NEURAL INFORMATION PROCESSING SYSTEMS 31 (NIPS 2018), 2018, 31
  • [42] Selectively Decentralized Q-Learning
    Thanh Nguyen
    Mukhopadhyay, Snehasis
    2017 IEEE INTERNATIONAL CONFERENCE ON SYSTEMS, MAN, AND CYBERNETICS (SMC), 2017, : 328 - 333
  • [43] Generalized Speedy Q-Learning
    John, Indu
    Kamanchi, Chandramouli
    Bhatnagar, Shalabh
    IEEE CONTROL SYSTEMS LETTERS, 2020, 4 (03): : 524 - 529
  • [44] Route Optimization with Q-learning
    Demircan, Semiye
    Aydin, Musa
    Durduran, S. Savas
    PROCEEDINGS OF THE 8TH WSEAS INTERNATIONAL CONFERENCE ON APPLIED COMPUTER SCIENCE (ACS'08): RECENT ADVANCES ON APPLIED COMPUTER SCIENCE, 2008, : 416 - +
  • [45] Weighted Double Q-learning
    Zhang, Zongzhang
    Pan, Zhiyuan
    Kochenderfer, Mykel J.
    PROCEEDINGS OF THE TWENTY-SIXTH INTERNATIONAL JOINT CONFERENCE ON ARTIFICIAL INTELLIGENCE, 2017, : 3455 - 3461
  • [46] Multi Q-Table Q-Learning
    Kantasewi, Nitchakun
    Marukatat, Sanparith
    Thainimit, Somying
    Manabu, Okumura
    2019 10TH INTERNATIONAL CONFERENCE OF INFORMATION AND COMMUNICATION TECHNOLOGY FOR EMBEDDED SYSTEMS (IC-ICTES), 2019,
  • [47] Bias-Corrected Q-Learning to Control Max-Operator Bias in Q-Learning
    Lee, Donghun
    Defourny, Boris
    Powell, Warren B.
    PROCEEDINGS OF THE 2013 IEEE SYMPOSIUM ON ADAPTIVE DYNAMIC PROGRAMMING AND REINFORCEMENT LEARNING (ADPRL), 2013, : 93 - 99
  • [48] Fuzzy Q-Learning for generalization of reinforcement learning
    Berenji, HR
    FUZZ-IEEE '96 - PROCEEDINGS OF THE FIFTH IEEE INTERNATIONAL CONFERENCE ON FUZZY SYSTEMS, VOLS 1-3, 1996, : 2208 - 2214
  • [49] Learning mixed behaviours with parallel Q-Learning
    Laurent, GJ
    Piat, E
    2002 IEEE/RSJ INTERNATIONAL CONFERENCE ON INTELLIGENT ROBOTS AND SYSTEMS, VOLS 1-3, PROCEEDINGS, 2002, : 1002 - 1007
  • [50] Cooperative Q-Learning Based on Learning Automata
    Yang, Mao
    Tian, Yantao
    Qi, Xinyue
    2009 IEEE INTERNATIONAL CONFERENCE ON AUTOMATION AND LOGISTICS ( ICAL 2009), VOLS 1-3, 2009, : 1972 - 1977