Verifiable Reinforcement Learning via Policy Extraction

Cited by: 0
Authors
Bastani, Osbert [1 ]
Pu, Yewen [1 ]
Solar-Lezama, Armando [1 ]
Institutions
[1] MIT, Cambridge, MA 02139 USA
DOI
Not available
CLC number
TP18 [Artificial Intelligence Theory];
Subject classification codes
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
While deep reinforcement learning has successfully solved many challenging control tasks, its real-world applicability has been limited by the inability to ensure the safety of learned policies. We propose an approach to verifiable reinforcement learning by training decision tree policies, which can represent complex policies (since they are nonparametric), yet can be efficiently verified using existing techniques (since they are highly structured). The challenge is that decision tree policies are difficult to train. We propose VIPER, an algorithm that combines ideas from model compression and imitation learning to learn decision tree policies guided by a DNN policy (called the oracle) and its Q-function, and show that it substantially outperforms two baselines. We use VIPER to (i) learn a provably robust decision tree policy for a variant of Atari Pong with a symbolic state space, (ii) learn a decision tree policy for a toy game based on Pong that provably never loses, and (iii) learn a provably stable decision tree policy for cart-pole. In each case, the decision tree policy achieves performance equal to that of the original DNN policy.
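The key idea the abstract describes — using the oracle's Q-function to guide which imitation examples the decision tree learns from — can be sketched as a resampling step. The following is a minimal illustration with a made-up helper name and toy data, not the paper's implementation: states are resampled with weight proportional to how much the action choice matters under the oracle, i.e. V*(s) − min_a Q*(s, a) with V*(s) = max_a Q*(s, a).

```python
import random

def viper_resample(dataset, q_values, seed=0):
    """Resample (state, action) pairs with probability proportional to
    l(s) = max_a Q*(s, a) - min_a Q*(s, a), so that states where the
    oracle's action choice is most critical dominate the training set."""
    weights = [max(qs) - min(qs) for qs in q_values]
    rng = random.Random(seed)
    return rng.choices(dataset, weights=weights, k=len(dataset))

# Toy rollout: three states; only in s1 does the action choice matter.
dataset = [("s0", 0), ("s1", 1), ("s2", 0)]
q_values = [[1.0, 1.0], [0.0, 5.0], [2.0, 2.0]]
resampled = viper_resample(dataset, q_values)
```

The resampled dataset would then be fed to an ordinary decision-tree learner (e.g. CART); the tree's rigid structure is what makes the resulting policy amenable to the verification tasks listed above.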
Pages: 11
Related Papers
50 items in total
  • [1] Interpretable, Verifiable, and Robust Reinforcement Learning via Program Synthesis
    Bastani, Osbert
    Inala, Jeevana Priya
    Solar-Lezama, Armando
    [J]. XXAI - BEYOND EXPLAINABLE AI: International Workshop, Held in Conjunction with ICML 2020, July 18, 2020, Vienna, Austria, Revised and Extended Papers, 2022, 13200 : 207 - 228
  • [2] Quantum reinforcement learning via policy iteration
    Cherrat, El Amine
    Kerenidis, Iordanis
    Prakash, Anupam
    [J]. QUANTUM MACHINE INTELLIGENCE, 2023, 5 (02)
  • [4] An Inductive Synthesis Framework for Verifiable Reinforcement Learning
    Zhu, He
    Xiong, Zikang
    Magill, Stephen
    Jagannathan, Suresh
    [J]. PROCEEDINGS OF THE 40TH ACM SIGPLAN CONFERENCE ON PROGRAMMING LANGUAGE DESIGN AND IMPLEMENTATION (PLDI '19), 2019, : 686 - 701
  • [5] Efficient relation extraction via quantum reinforcement learning
    Zhu, Xianchao
    Mu, Yashuang
    Wang, Xuetao
    Zhu, William
    [J]. COMPLEX & INTELLIGENT SYSTEMS, 2024, 10 (03) : 4009 - 4018
  • [6] Semantic Extraction for Sentence Representation via Reinforcement Learning
    Yu, Fengying
    Tao, Dewei
    Wang, Jianzong
    Hui, Yanfei
    Cheng, Ning
    Xiao, Jing
    [J]. 2021 INTERNATIONAL JOINT CONFERENCE ON NEURAL NETWORKS (IJCNN), 2021,
  • [7] Verifiable and Interpretable Reinforcement Learning through Program Synthesis
    Verma, Abhinav
    [J]. THIRTY-THIRD AAAI CONFERENCE ON ARTIFICIAL INTELLIGENCE / THIRTY-FIRST INNOVATIVE APPLICATIONS OF ARTIFICIAL INTELLIGENCE CONFERENCE / NINTH AAAI SYMPOSIUM ON EDUCATIONAL ADVANCES IN ARTIFICIAL INTELLIGENCE, 2019, : 9902 - 9903
  • [8] Efficient multiple biomedical events extraction via reinforcement learning
    Zhao, Weizhong
    Zhao, Yao
    Jiang, Xingpeng
    He, Tingting
    Liu, Fan
    Li, Ning
    [J]. BIOINFORMATICS, 2021, 37 (13) : 1891 - 1899
  • [9] Policy Teaching in Reinforcement Learning via Environment Poisoning Attacks
    Rakhsha, Amin
    Radanovic, Goran
    Devidze, Rati
    Zhu, Xiaojin
    Singla, Adish
    [J]. JOURNAL OF MACHINE LEARNING RESEARCH, 2021, 22
  • [10] Jamming Policy Generation via Heuristic Programming Reinforcement Learning
    Zhang, Yujie
    Huo, Weibo
    Huang, Yulin
    Zhang, Cui
    Pei, Jifang
    Zhang, Yin
    Yang, Jianyu
    [J]. IEEE TRANSACTIONS ON AEROSPACE AND ELECTRONIC SYSTEMS, 2023, 59 (06) : 8782 - 8799