Learning Tasks in Intelligent Environments via Inverse Reinforcement Learning

Cited by: 3
Authors
Shah, Syed Ihtesham Hussain [1 ]
Coronato, Antonio [2 ]
Affiliations
[1] Univ Napoli Parthenope, Dept ICT & Engn, Naples, Italy
[2] Natl Res Council Italy CNR, Inst High Performance Comp & Networking ICAR, Naples, Italy
Keywords
Intelligent Environment (IE); Inverse Reinforcement Learning (IRL); SYSTEM;
DOI
10.1109/IE51775.2021.9486594
CLC classification
TP18 [Artificial Intelligence Theory]
Discipline codes
081104; 0812; 0835; 1405
Abstract
One of the most common objectives for an Intelligent Environment (IE), especially in the case of Ambient Assisted Living applications, is to support a user in everyday tasks. A complex task can be effectively represented as a workflow, i.e., a composition of simpler activities. This research activity aims at developing methodologies and tools that enable a robotic system to learn a task from demonstrations in an IE. The large number of accessible sensors in the intelligent environment, as well as the availability of advanced services such as those for activity recognition and situation awareness, can facilitate both the recognition of the action to be executed at the i-th step of the workflow and the verification of the correct execution of the task. In this paper, we present a hybrid approach based on Inverse Reinforcement Learning (IRL) to learn from observations of an expert's behavior, and on forward Reinforcement Learning to correct and improve the learned behavior. We also present the high-level architecture of the IE that supports the learning process.
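The IRL-then-forward-RL loop the abstract describes can be illustrated on a toy problem. The sketch below is NOT the authors' method: it assumes a 5-state chain MDP, one-hot state features, a projection-style feature-matching IRL update (in the spirit of apprenticeship learning), and value iteration as the forward-RL step; all names (`step`, `value_iteration`, `feature_expectations`) are hypothetical illustration code.

```python
import numpy as np

# Toy 1-D chain MDP: 5 states, 2 actions (0 = left, 1 = right).
# The "expert" demonstration always moves right, toward state 4.
N_S, N_A, GAMMA = 5, 2, 0.9

def step(s, a):
    """Deterministic transition on the chain."""
    return max(0, s - 1) if a == 0 else min(N_S - 1, s + 1)

def value_iteration(reward, iters=200):
    """Forward RL: optimal deterministic policy for a state-reward vector."""
    V = np.zeros(N_S)
    for _ in range(iters):
        Q = np.array([[reward[s] + GAMMA * V[step(s, a)] for a in range(N_A)]
                      for s in range(N_S)])
        V = Q.max(axis=1)
    return Q.argmax(axis=1)

def feature_expectations(policy, s0=0, horizon=30):
    """Discounted visitation counts of one-hot state features under a policy."""
    mu, s = np.zeros(N_S), s0
    for t in range(horizon):
        mu[s] += GAMMA ** t
        s = step(s, int(policy[s]))
    return mu

expert_policy = np.ones(N_S, dtype=int)          # demonstrated behavior
mu_expert = feature_expectations(expert_policy)  # what IRL observes

# IRL loop: push reward weights toward the expert's feature expectations,
# re-solving the forward problem after each reward update.
w = np.zeros(N_S)
policy = value_iteration(w)
for _ in range(20):
    mu = feature_expectations(policy)
    w = mu_expert - mu                  # move reward toward matching the expert
    policy = value_iteration(w)
    if np.array_equal(policy, expert_policy):
        break

print(policy.tolist())
```

On this toy chain the loop recovers the demonstrated "always move right" policy after one reward update; the point is only to show the alternation between inferring a reward from demonstrations and running forward RL on it, which is the structure of the hybrid approach the paper proposes.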
Pages: 4