An acquisition of the relation between vision and action using self-organizing map and reinforcement learning

Cited by: 0
Authors: Terada, K [1]; Takeda, H [1]; Nishida, T [1]
Affiliation: [1] Nara Inst Sci & Technol, Grad Sch Informat Sci, Nara 63001, Japan
Keywords: none listed
DOI: not available
CLC number: TP18 [Artificial intelligence theory]
Discipline codes: 081104; 0812; 0835; 1405
Abstract
An agent must acquire an internal representation appropriate for its task, environment, and sensors. Reinforcement learning is often used as the learning algorithm for acquiring the relation between sensory input and action. However, a learning agent that operates in the real world with visual sensors faces the critical problem of how to build a state space that is necessary and sufficient for executing the task. In this paper, we propose acquiring the relation between vision and action using a Visual State-Action Map (VSAM). The VSAM is an application of the Self-Organizing Map (SOM): input image data are mapped onto a node of the learned VSAM, and the VSAM then outputs the appropriate action for that state. We applied the VSAM to a real robot. The experimental results show that the robot avoids walls while moving around its environment.
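The abstract describes the mechanism only at a high level: camera images are mapped onto SOM nodes, and each node is associated with an action learned by reinforcement. The sketch below illustrates that general idea under stated assumptions; the class name VSAMSketch, the flat map without a neighborhood function, the epsilon-greedy Q-learning update, and all parameter values are illustrative choices, not details taken from the paper.

```python
import numpy as np


class VSAMSketch:
    """Minimal sketch of a Visual State-Action Map: a SOM whose nodes each
    store action values. All names and parameters are illustrative assumptions,
    not taken from the paper."""

    def __init__(self, n_nodes=25, input_dim=64, n_actions=3,
                 som_lr=0.2, q_lr=0.1, gamma=0.9, seed=0):
        rng = np.random.default_rng(seed)
        self.weights = rng.random((n_nodes, input_dim))  # SOM codebook vectors
        self.q = np.zeros((n_nodes, n_actions))          # per-node action values
        self.som_lr, self.q_lr, self.gamma = som_lr, q_lr, gamma

    def bmu(self, image_vec):
        """Map an input image vector to its best-matching node (the 'state')."""
        dists = np.linalg.norm(self.weights - image_vec, axis=1)
        return int(np.argmin(dists))

    def update_som(self, image_vec):
        """Move the winning node's codebook vector toward the input.
        (A full SOM would also update neighboring nodes; omitted for brevity.)"""
        node = self.bmu(image_vec)
        self.weights[node] += self.som_lr * (image_vec - self.weights[node])
        return node

    def select_action(self, node, epsilon=0.1):
        """Epsilon-greedy action selection from the node's action values."""
        if np.random.random() < epsilon:
            return int(np.random.randint(self.q.shape[1]))
        return int(np.argmax(self.q[node]))

    def update_q(self, node, action, reward, next_node):
        """One-step Q-learning update on the node-level state space."""
        target = reward + self.gamma * np.max(self.q[next_node])
        self.q[node, action] += self.q_lr * (target - self.q[node, action])


# Illustrative use: map a hypothetical preprocessed 8x8 image to a state node.
vsam = VSAMSketch(input_dim=64)
obs = np.random.default_rng(1).random(64)  # stand-in for a camera image vector
state = vsam.update_som(obs)
action = vsam.select_action(state)
```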
Pages: 429-434 (6 pages)
Related papers (50 total)
  • [1] Acquisition of the relation between vision and action using Self-Organizing Map and reinforcement learning
    Terada, Kazunori
    Takeda, Hideaki
    Nishida, Toyoaki
    International Conference on Knowledge-Based Intelligent Electronic Systems, Proceedings, KES, 1998, 1 : 429 - 434
  • [2] A teaching method using a self-organizing map for reinforcement learning
    Takeshi Tateyama
    Seiichi Kawata
    Toshiki Oguchi
    Artificial Life and Robotics, 2004, 7 (4) : 193 - 197
  • [3] A teaching method by using Self-Organizing Map for reinforcement learning
    Tateyama, Takeshi
    Kawata, Seiichi
    Oguchi, Toshiki
    Nippon Kikai Gakkai Ronbunshu, C Hen/Transactions of the Japan Society of Mechanical Engineers, Part C, 2004, 70 (06): : 1722 - 1729
  • [4] Continuous state/action reinforcement learning: A growing self-organizing map approach
    Montazeri, Hesam
    Moradi, Sajjad
    Safabakhsh, Reza
    NEUROCOMPUTING, 2011, 74 (07) : 1069 - 1082
  • [5] Word Learning Using a Self-Organizing Map
    Li, Lishu
    Chen, Qinghua
    Cui, Jiaxin
    Fang, Fukang
    2008 INTERNATIONAL SYMPOSIUM ON INTELLIGENT INFORMATION TECHNOLOGY APPLICATION, VOL II, PROCEEDINGS, 2008, : 336 - +
  • [6] Knowledge Acquisition of Self-Organizing Systems With Deep Multiagent Reinforcement Learning
    Ji, Hao
    Jin, Yan
    JOURNAL OF COMPUTING AND INFORMATION SCIENCE IN ENGINEERING, 2022, 22 (02)
  • [7] SELF-ORGANIZING SYNCHRONICITY AND DESYNCHRONICITY USING REINFORCEMENT LEARNING
    Mihaylov, Mihail
    Le Borgne, Yann-Ael
    Nowe, Ann
    Tuyls, Karl
    ICAART 2011: PROCEEDINGS OF THE 3RD INTERNATIONAL CONFERENCE ON AGENTS AND ARTIFICIAL INTELLIGENCE, VOL 2, 2011, : 94 - 103
  • [8] Self-organizing map models of language acquisition
    Li, Ping
    Zhao, Xiaowei
    FRONTIERS IN PSYCHOLOGY, 2013, 4
  • [9] Self-Organizing Reinforcement Learning Model
    Uang, Chang-Hsian
    Liou, Jiun-Wei
    Liou, Cheng-Yuan
    INTELLIGENT INFORMATION AND DATABASE SYSTEMS (ACIIDS 2012), PT I, 2012, 7196 : 218 - 227
  • [10] Comparative Study of Self-Organizing Map and Deep Self-Organizing Map using MATLAB
    Kumar, Indra D.
    Kounte, Manjunath R.
    2016 INTERNATIONAL CONFERENCE ON COMMUNICATION AND SIGNAL PROCESSING (ICCSP), VOL. 1, 2016, : 1020 - 1023