Artificial Conversational Agent using Robust Adversarial Reinforcement Learning

Cited by: 0
Authors
Wadekar, Isha [1 ]
Affiliations
[1] Vidyavardhinis Coll Engn & Technol, BE Comp Engn, Mumbai, Maharashtra, India
Keywords
Reinforcement Learning; Conversational Agent; Long Short-Term Memory (LSTM); Seq2Seq Model
DOI
10.1109/ICCCI50826.2021.9402336
Chinese Library Classification
TP3 [Computing Technology; Computer Technology]
Discipline Classification Code
0812
Abstract
Reinforcement learning (RL) is an effective and practical means of solving problems in which the agent has no prior information or knowledge about its environment. The agent learns through two components: trial-and-error interaction and reward signals. An RL agent arrives at an effective policy by interacting directly with the environment and gathering information about its conditions. However, many modern RL-based approaches fail to account for the large gap between simulation and the physical world, so policies learned in simulation often fail to transfer to the real world. Even when policy learning is carried out in the physical world directly, the scarcity of experience prevents the learned policies from generalizing to test conditions. Robust adversarial reinforcement learning (RARL) addresses this by training an agent in the presence of a destabilizing opponent (an adversary agent) that applies disturbances to the system. The jointly trained adversary is reinforced so that the main agent, the protagonist, is trained rigorously.
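
The abstract describes the RARL formulation only in prose, so a compact sketch of the underlying two-player training loop may help. The toy 1-D stabilisation environment, linear policies, disturbance bound, and random-search update below are illustrative assumptions, not the setup used in this paper (which targets a Seq2Seq/LSTM conversational agent); the sketch only shows the zero-sum coupling in which the adversary is rewarded for the protagonist's failures, so improving against it pushes the protagonist toward robustness to worst-case disturbances.

# Minimal RARL-style sketch (assumptions throughout): the protagonist keeps a
# scalar state near zero while the adversary injects a bounded disturbance and
# receives the negated reward. Protagonist and adversary are updated in
# alternating phases, each against the other's current fixed policy.
import numpy as np

rng = np.random.default_rng(0)

def rollout(theta_pro, theta_adv, horizon=50):
    # Run one episode and return the protagonist's return
    # (the adversary's return is its negation).
    x = rng.normal()                                                # initial state
    ret = 0.0
    for _ in range(horizon):
        a_pro = np.clip(theta_pro[0] * x + theta_pro[1], -1.0, 1.0)  # protagonist action
        a_adv = np.clip(theta_adv[0] * x + theta_adv[1], -0.3, 0.3)  # bounded disturbance
        x = 0.9 * x + a_pro + a_adv + 0.05 * rng.normal()            # toy dynamics + noise
        ret += -x ** 2                                               # protagonist reward
    return ret

def hill_climb(theta, objective, iters=30, sigma=0.1):
    # Simple random-search update: keep a perturbed parameter vector
    # whenever it improves the (noisy) objective estimate.
    best = objective(theta)
    for _ in range(iters):
        cand = theta + sigma * rng.normal(size=theta.shape)
        val = objective(cand)
        if val > best:
            theta, best = cand, val
    return theta

theta_pro = np.zeros(2)   # protagonist: linear feedback parameters
theta_adv = np.zeros(2)   # adversary: linear feedback parameters

for phase in range(20):
    # Phase 1: improve the protagonist against the current, fixed adversary.
    theta_pro = hill_climb(theta_pro, lambda t: np.mean([rollout(t, theta_adv) for _ in range(5)]))
    # Phase 2: improve the adversary against the current, fixed protagonist
    # by maximising the negative of the protagonist's return (zero-sum).
    theta_adv = hill_climb(theta_adv, lambda t: -np.mean([rollout(theta_pro, t) for _ in range(5)]))

print("protagonist params:", theta_pro, "adversary params:", theta_adv)
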
Pages: 7