Grounding humanoid visually guided walking: From action-independent to action-oriented knowledge

Cited: 5
Authors
Chame, Hendry Ferreira [1]
Chevallereau, Christine [1]
Affiliations
[1] CNRS, Ecole Cent Nantes, IRCCyN, Nantes, France
Keywords
Cognitive robotics; Embodied cognition; Humanoid robotics; Ego-localization; Top-down visual attention; Robot vision; EMBODIED COGNITION; MODEL
DOI
10.1016/j.ins.2016.02.053
CLC number
TP [automation technology, computer technology]
Discipline code
0812
Abstract
In the context of humanoid and service robotics, it is essential that the agent can position itself with respect to objects of interest in the environment. By relying mostly on the cognitivist conception of artificial intelligence, research on visually guided walking has tended to overlook the characteristics of the context in which behavior occurs. Consequently, considerable effort has been directed at defining action-independent explicit models of the solution, often resulting in high computational requirements. In this study, inspired by embodied cognition research, our interest focuses on the analysis of the sensory-motor coupling, notably on the relation between embodiment, information, and action-oriented representation. Hence, by mimicking human walking, a behavior scheme is proposed that endows the agent with the skill of approaching stimuli. A significant contribution to object discrimination was obtained by proposing an efficient visual attention mechanism that exploits the redundancies and the statistical regularities induced by the sensory-motor coordination: the information flow is anticipated from the fusion of visual and proprioceptive features in a Bayesian network. The solution was implemented on the humanoid platform Nao, where the task was accomplished in an unstructured scenario. (C) 2016 Elsevier Inc. All rights reserved.
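The Bayesian fusion of visual and proprioceptive features mentioned in the abstract could be sketched, in a much simplified form, as a naive-Bayes posterior over candidate image regions. All variable names, the two-cue structure, and the numbers below are illustrative assumptions, not the paper's actual model:

```python
# Minimal sketch (assumption, not the paper's implementation): fuse a visual
# cue and a proprioceptive cue to score candidate image regions for
# top-down attention, assuming the cues are conditionally independent
# given the target location (naive Bayes).

def fuse(prior, p_vis_given_target, p_prop_given_target):
    """Return the normalized posterior over candidate regions."""
    unnorm = [p * v * q
              for p, v, q in zip(prior, p_vis_given_target, p_prop_given_target)]
    z = sum(unnorm)
    return [u / z for u in unnorm]

# Three hypothetical candidate regions with a uniform prior.
prior = [1 / 3, 1 / 3, 1 / 3]
p_vis = [0.7, 0.2, 0.1]   # likelihood of the observed visual features per region
p_prop = [0.6, 0.3, 0.1]  # likelihood given the robot's head/joint configuration

posterior = fuse(prior, p_vis, p_prop)
best = max(range(3), key=lambda i: posterior[i])
print(best, [round(p, 3) for p in posterior])
```

Attention is then directed to the region with the highest posterior; because both cues agree here, region 0 dominates the distribution.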
Pages: 79-97
Page count: 19
References: 38