A computational model of attentional learning in a cognitive agent

Cited by: 11
Authors
Faghihi, Usef [1 ]
McCall, Ryan [1 ]
Franklin, Stan [1 ]
Institutions
[1] Univ Memphis, Dept Comp Sci, Memphis, TN 38152 USA
Keywords
Attention; Attentional learning; Cognitive agents; LIDA cognitive architecture; Global Workspace Theory; Alarms; VISUAL-ATTENTION; TOP-DOWN; CONSCIOUSNESS; EMOTION; NEUROSCIENCE; BEHAVIOR; SYSTEM;
DOI
10.1016/j.bica.2012.07.003
CLC Classification Number
TP18 [Theory of Artificial Intelligence];
Subject Classification Codes
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
Biologically inspired cognitive architectures should faithfully model the high-level modules and processes of cognitive neuroscience. They are also expected to contribute to the BICA "challenge of creating a real-life computational equivalent of the human mind". One important component of the mind is attention and attentional learning. In this paper, we describe a conceptual and computational model of attention and attentional learning for intelligent software agents in the context of the broad-based biologically inspired cognitive architecture, LIDA. In LIDA, attention is defined as the process of bringing content to consciousness. Implementing Global Workspace Theory, the mechanism of consciousness consists of a continuing sequence of broadcasts of the most salient current contents to all of cognition. We argue that the term attention describes the selection of conscious contents and should be distinguished from the mechanism of consciousness itself. Attentional learning, the learning of what to attend to, has been relatively little studied by memory researchers. Here we describe a mechanism for attentional learning using the LIDA architecture. A basic implementation of such an attentional learning mechanism in a LIDA-based agent is presented. The agent performs a psychological attention experiment and produces results comparable to those of human subjects. The agent's contribution to determining internal parameters for the LIDA architecture is also described. Our model of attentional learning distinguishes different aspects of selectionist and instructionalist learning. Attentional learning has not received its deserved attention in cognitive architecture research. This work represents a first step toward implementing the full range of cognitive faculties associated with attention and attentional learning in the LIDA cognitive architecture. (C) 2012 Elsevier B.V. All rights reserved.
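The broadcast-and-selection cycle the abstract describes (attention codelets compete to bring content to consciousness, the most salient coalition is broadcast globally, and successful codelets are reinforced) can be sketched roughly as follows. This is a minimal illustrative sketch, not code from the LIDA implementation; the class names, the salience formula, and the reinforcement rule are all assumptions introduced here.

```python
class AttentionCodelet:
    """Illustrative stand-in for a LIDA attention codelet: it watches the
    workspace for content matching its concern and bids with a salience score."""

    def __init__(self, concern, base_activation=0.5):
        self.concern = concern
        self.base_activation = base_activation  # adjusted by attentional learning

    def salience(self, workspace_content):
        # Assumed salience rule: learned activation scaled by how strongly
        # the workspace currently exhibits this codelet's concern.
        return self.base_activation * workspace_content.get(self.concern, 0.0)


def broadcast_cycle(codelets, workspace_content, reinforcement=0.1):
    """One Global-Workspace-style cycle: the most salient coalition wins the
    competition and its content is broadcast; the winning codelet is then
    strengthened (a selectionist form of attentional learning)."""
    winner = max(codelets, key=lambda c: c.salience(workspace_content))
    # Global broadcast: the conscious content goes out to all of cognition
    # (here simply returned to the caller).
    conscious_content = {winner.concern: workspace_content.get(winner.concern, 0.0)}
    # Attentional learning: reinforce the codelet whose coalition reached
    # consciousness, capping activation at 1.0.
    winner.base_activation = min(1.0, winner.base_activation + reinforcement)
    return conscious_content, winner
```

Over repeated cycles, codelets whose concerns regularly reach consciousness accumulate activation and win future competitions more easily, which is the intuition behind learning "what to attend to".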
Pages: 25 - 36
Page count: 12
Related Papers
50 records total
  • [1] Striatal dopamine in attentional learning: A computational model
    Salum, C
    da Silva, AR
    Pickering, A
    [J]. NEUROCOMPUTING, 1999, 26-7 : 845 - 854
  • [2] Striatal dopamine in attentional learning: A computational model
    Salum, C
    da Silva, AR
    Pickering, A
    [J]. COMPUTATIONAL NEUROSCIENCE: TRENDS IN RESEARCH 1999, 1999, : 845 - 854
  • [3] A computational model for causal learning in cognitive agents
    Faghihi, Usef
    Fournier-Viger, Philippe
    Nkambou, Roger
    [J]. KNOWLEDGE-BASED SYSTEMS, 2012, 30 : 48 - 56
  • [4] A computational model of attentional networks
    Wang, H
    Fan, J
    Liang, HL
    [J]. SECOND JOINT EMBS-BMES CONFERENCE 2002, VOLS 1-3, CONFERENCE PROCEEDINGS: BIOENGINEERING - INTEGRATIVE METHODOLOGIES, NEW TECHNOLOGIES, 2002, : 1988 - 1989
  • [5] A Computational Agent Model for Hebbian Learning of Social Interaction
    Treur, Jan
    [J]. NEURAL INFORMATION PROCESSING, PT I, 2011, 7062 : 9 - +
  • [6] Learning Emotion Regulation Strategies: a Cognitive Agent Model
    Bosse, Tibor
    Gerritsen, Charlotte
    de Man, Jeroen
    Treur, Jan
    [J]. 2013 IEEE/WIC/ACM INTERNATIONAL CONFERENCE ON INTELLIGENT AGENT TECHNOLOGY (IAT 2013), 2013, : 245 - 252
  • [7] Sentence-based attentional mechanisms in word learning: evidence from a computational model
    Alishahi, Afra
    Fazly, Afsaneh
    Koehne, Judith
    Crocker, Matthew W.
    [J]. FRONTIERS IN PSYCHOLOGY, 2012, 3
  • [8] ACM: Learning Dynamic Multi-agent Cooperation via Attentional Communication Model
    Han, Xue
    Yan, Hongping
    Zhang, Junge
    Wang, Lingfeng
    [J]. ARTIFICIAL NEURAL NETWORKS AND MACHINE LEARNING - ICANN 2018, PT II, 2018, 11140 : 219 - 229
  • [9] Learning Attentional Communication for Multi-Agent Cooperation
    Jiang, Jiechuan
    Lu, Zongqing
    [J]. ADVANCES IN NEURAL INFORMATION PROCESSING SYSTEMS 31 (NIPS 2018), 2018, 31
  • [10] Attentional Factorized Q-Learning for Many-Agent Learning
    Wang, Xiaoqiang
    Ke, Liangjun
    Fu, Qiang
    [J]. IEEE ACCESS, 2022, 10 : 108775 - 108784