Belief revision with reinforcement learning for interactive object recognition

Cited by: 2
Authors:
Leopold, Thomas [1 ]
Kern-Isberner, Gabriele [1 ]
Peters, Gabriele
Institutions:
[1] Univ Technol Dortmund, Dortmund, Germany
Source:
ECAI 2008, PROCEEDINGS, 2008, Vol. 178
DOI: 10.3233/978-1-58603-891-5-65
Chinese Library Classification: TP18 [Artificial Intelligence Theory]
Subject Classification Codes: 081104; 0812; 0835; 1405
Abstract:
From a conceptual point of view, belief revision and learning are quite similar. Both methods change the belief state of an intelligent agent by processing incoming information. However, for learning, the focus is on the exploitation of data to extract and assimilate useful knowledge, whereas belief revision is more concerned with the adaptation of prior beliefs to new information for the purpose of reasoning. In this paper, we propose a hybrid learning method called SPHINX that combines low-level, non-cognitive reinforcement learning with high-level epistemic belief revision, similar to human learning. The former represents knowledge in a sub-symbolic, numerical way, while the latter is based on symbolic, non-monotonic logics and allows reasoning. Beyond the theoretical appeal of linking methods of very different disciplines of artificial intelligence, we will illustrate the usefulness of our approach by employing SPHINX in the area of computer vision for object recognition tasks. The SPHINX agent interacts with its environment by rotating objects, depending on past experiences and newly acquired generic knowledge, to choose those views which are most advantageous for recognition.
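The abstract's two-level idea can be illustrated with a toy sketch: a tabular Q-learner (the sub-symbolic level) selects rotation actions to reach views that are advantageous for recognition, while a set of symbolic beliefs (the epistemic level) can veto actions that contradict current generic knowledge. Everything below, including the 8-view environment, the reward, and the `beliefs` veto mechanism, is an illustrative assumption, not the paper's SPHINX implementation.

```python
import random

VIEWS = list(range(8))    # 8 discrete viewpoints around an object (toy assumption)
ACTIONS = [-1, +1]        # rotate one step left or right
GOOD_VIEW = 3             # view that makes recognition easiest (toy reward signal)

def step(view, action):
    """Rotate the object and reward the agent for reaching the good view."""
    new_view = (view + action) % len(VIEWS)
    reward = 1.0 if new_view == GOOD_VIEW else 0.0
    return new_view, reward

def train(episodes=500, alpha=0.5, gamma=0.9, eps=0.2, beliefs=None, seed=0):
    """Tabular epsilon-greedy Q-learning over view rotations.

    `beliefs` optionally maps a view to a forbidden action, standing in
    for symbolic knowledge that vetoes actions at the epistemic level.
    """
    rng = random.Random(seed)
    q = {(v, a): 0.0 for v in VIEWS for a in ACTIONS}
    for _ in range(episodes):
        view = rng.choice(VIEWS)
        for _ in range(10):
            # Epistemic filter: drop actions the current beliefs forbid.
            allowed = [a for a in ACTIONS
                       if not (beliefs and beliefs.get(view) == a)]
            if rng.random() < eps:
                action = rng.choice(allowed)            # explore
            else:
                action = max(allowed, key=lambda a: q[(view, a)])  # exploit
            new_view, reward = step(view, action)
            best_next = max(q[(new_view, a)] for a in ACTIONS)
            # Standard Q-learning update.
            q[(view, action)] += alpha * (reward + gamma * best_next
                                          - q[(view, action)])
            view = new_view
    return q

q = train()
# After training, the greedy policy rotates toward the good view from
# either neighbor: +1 from view 2, -1 from view 4.
```

The veto mechanism is the interesting design choice: rather than encoding symbolic knowledge as reward shaping, it constrains the action set directly, so revised beliefs take effect immediately without retraining the Q-table.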
Pages: 65+
Page count: 2
Related Papers (50 total):
  • [31] Object Exchangeability in Reinforcement Learning
    Mern, John; Sadigh, Dorsa; Kochenderfer, Mykel
    AAMAS '19: PROCEEDINGS OF THE 18TH INTERNATIONAL CONFERENCE ON AUTONOMOUS AGENTS AND MULTIAGENT SYSTEMS, 2019: 2126-2128
  • [32] Learning Without Reinforcement - Critical Revision
    Croake, J. W.
    INTERAMERICAN JOURNAL OF PSYCHOLOGY, 1973, 7 (1-2): 17-32
  • [33] Reinforcement Learning Using Approximate Belief States
    Rodríguez, A.; Parr, R.; Koller, D.
    ADVANCES IN NEURAL INFORMATION PROCESSING SYSTEMS 12, 2000, 12: 1036-1042
  • [34] Interactive Learning of a Multiple-Attribute Hash Table Classifier for Fast Object Recognition
    Grewe, L.; Kak, A. C.
    COMPUTER VISION AND IMAGE UNDERSTANDING, 1995, 61 (03): 387-416
  • [35] Structured World Belief for Reinforcement Learning in POMDP
    Singh, Gautam; Peri, Skand; Kim, Junghyun; Kim, Hyunseok; Ahn, Sungjin
    INTERNATIONAL CONFERENCE ON MACHINE LEARNING, VOL 139, 2021, 139
  • [36] An Interactive Open-Ended Learning Approach for 3D Object Recognition
    Kasaei, S. Hamidreza; Oliveira, Miguel; Lim, Gi Hyun; Lopes, Luis Seabra; Tome, Ana Maria
    2014 IEEE INTERNATIONAL CONFERENCE ON AUTONOMOUS ROBOT SYSTEMS AND COMPETITIONS (ICARSC), 2014: 47-52
  • [37] Online Learning of Color Transformation for Interactive Object Recognition under Various Lighting Conditions
    Makihara, Y.; Shirai, Y.; Shimada, N.
    PROCEEDINGS OF THE 17TH INTERNATIONAL CONFERENCE ON PATTERN RECOGNITION, VOL 3, 2004: 161-164
  • [38] Seeing by Haptic Glance: Reinforcement Learning Based 3D Object Recognition
    Riou, Kevin; Ling, Suiyi; Gallot, Guillaume; Le Callet, Patrick
    2021 IEEE INTERNATIONAL CONFERENCE ON IMAGE PROCESSING (ICIP), 2021: 3637-3641
  • [39] Interactive Reinforcement Learning with Inaccurate Feedback
    Faulkner, Thylor A. Kessler; Short, Elaine Schaertl; Thomaz, Andrea L.
    2020 IEEE INTERNATIONAL CONFERENCE ON ROBOTICS AND AUTOMATION (ICRA), 2020: 7498-7504
  • [40] An Experimental Study on Interactive Reinforcement Learning
    Nakashima, Tomoharu; Nakamura, Yosuke; Uenishi, Takesuke; Narimoto, Yosuke
    PROCEEDINGS OF THE SIXTEENTH INTERNATIONAL SYMPOSIUM ON ARTIFICIAL LIFE AND ROBOTICS (AROB 16TH '11), 2011: 735-740