Denoising Auto-encoders for Learning of Objects and Tools Affordances in Continuous Space

Cited by: 0
Authors
Dehban, Atabak [1 ]
Jamone, Lorenzo [1 ]
Kampff, Adam R. [2 ,3 ]
Santos-Victor, Jose [1 ]
Affiliations
[1] Universidade de Lisboa, Instituto Superior Técnico, Institute for Systems and Robotics, Lisbon, Portugal
[2] Champalimaud Centre for the Unknown, Champalimaud Neuroscience Programme, Lisbon, Portugal
[3] Sainsbury Wellcome Centre for Neural Circuits and Behaviour, London, England
Keywords
ROBOT
DOI
Not available
Chinese Library Classification (CLC)
TP [Automation Technology, Computer Technology]
Discipline Code
0812
Abstract
The concept of affordances facilitates the encoding of relations between actions and effects in an agent-centered representation of the environment. Such an interpretation has important implications for several cognitive capabilities and manifestations of intelligence, such as prediction and planning. In this paper, a new framework based on denoising auto-encoders (dA) is proposed which allows an agent to explore its environment and actively learn the affordances of objects and tools by observing the consequences of acting on them. The dA serves as a unified framework to fuse multi-modal data and to retrieve an entire missing modality, or a feature within a modality, given information about the other modalities. This work makes two major contributions. First, since the dA is trained in continuous space, there is no need to discretize the dataset, and higher inference accuracy can be achieved compared with approaches that require data discretization (e.g., Bayesian networks). Second, by fixing the structure of the dA, knowledge can be added incrementally, making the architecture particularly useful in online learning scenarios. Evaluation scores of real and simulated robotic experiments show improvements over previous approaches, while the new model can be applied in a wider range of domains.
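As a rough illustration of the fusion-and-retrieval idea summarized in the abstract, the sketch below trains a denoising auto-encoder on a single concatenated vector of object, action and effect features and recovers a zeroed-out modality from the reconstruction. The modality names, feature dimensions, network sizes and the PyTorch framework are illustrative assumptions, not the authors' implementation.

# Minimal sketch (not the authors' code): a denoising auto-encoder over a
# concatenated multi-modal feature vector. At inference time, a missing
# modality is "corrupted" (zeroed) and read back from the reconstruction.
import torch
import torch.nn as nn

# Hypothetical modality sizes: object, action and effect features.
MODALITY_DIMS = {"object": 10, "action": 4, "effect": 6}
INPUT_DIM = sum(MODALITY_DIMS.values())

class DenoisingAutoencoder(nn.Module):
    def __init__(self, input_dim: int, hidden_dim: int = 32):
        super().__init__()
        self.encoder = nn.Sequential(nn.Linear(input_dim, hidden_dim), nn.ReLU())
        self.decoder = nn.Linear(hidden_dim, input_dim)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.decoder(self.encoder(x))

def corrupt(x: torch.Tensor, drop_prob: float = 0.3) -> torch.Tensor:
    # Masking noise: randomly zero input dimensions (standard dA corruption).
    mask = (torch.rand_like(x) > drop_prob).float()
    return x * mask

def train_step(model, optimizer, batch):
    # One reconstruction step: denoise a corrupted multi-modal vector.
    optimizer.zero_grad()
    recon = model(corrupt(batch))
    loss = nn.functional.mse_loss(recon, batch)
    loss.backward()
    optimizer.step()
    return loss.item()

def infer_missing_modality(model, partial: torch.Tensor, missing: str) -> torch.Tensor:
    # Zero out the missing modality and read its reconstruction back.
    start = 0
    for name, dim in MODALITY_DIMS.items():
        if name == missing:
            break
        start += dim
    x = partial.clone()
    x[..., start:start + MODALITY_DIMS[missing]] = 0.0
    with torch.no_grad():
        recon = model(x)
    return recon[..., start:start + MODALITY_DIMS[missing]]

# Usage with random stand-in data (real experiments would use robot sensory features).
model = DenoisingAutoencoder(INPUT_DIM)
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
data = torch.randn(256, INPUT_DIM)
for epoch in range(10):
    train_step(model, optimizer, data)
predicted_effect = infer_missing_modality(model, data[:1], "effect")

Because the masking corruption already drops whole coordinates during training, querying the network with a zeroed modality matches the conditions it was trained under, which is what makes the retrieval step plausible.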
Pages: 4866 - 4871
Number of pages: 6
Related Papers
50 in total
  • [21] Zhao Qilu, Li Zongmin, Dong Junyu. Unsupervised representation learning with Laplacian pyramid auto-encoders. Applied Soft Computing, 2019, 85.
  • [22] Zhu, Yi; Li, Lei; Wu, Xindong. Stacked Convolutional Sparse Auto-Encoders for Representation Learning. ACM Transactions on Knowledge Discovery from Data, 2021, 15(02).
  • [23] Goyal, Prasoon; Hu, Zhiting; Liang, Xiaodan; Wang, Chenyu; Xing, Eric P. Nonparametric Variational Auto-encoders for Hierarchical Representation Learning. 2017 IEEE International Conference on Computer Vision (ICCV), 2017: 5104 - 5112.
  • [24] Sun, Yanan; Mao, Hua; Sang, Yongsheng; Yi, Zhang. Explicit guiding auto-encoders for learning meaningful representation. Neural Computing & Applications, 2017, 28(03): 429 - 436.
  • [25] Stromfelt, Harald; Dickens, Luke; Garcez, Artur d'Avila; Russo, Alessandra. Coherent and Consistent Relational Transfer Learning with Auto-encoders. NeSy 2021: Neural-Symbolic Learning and Reasoning, 2021, 2986: 176 - 192.
  • [26] Zhu, Yi; Wu, Xindong; Li, Peipei; Zhang, Yuhong; Hu, Xuegang. Transfer learning with deep manifold regularized auto-encoders. Neurocomputing, 2019, 369: 145 - 154.
  • [27] Liu Y., Gong S., Wang F., Ma Z. Process operating performance assessment based on stacked supervised denoising auto-encoders. Yi Qi Yi Biao Xue Bao/Chinese Journal of Scientific Instrument, 2022, 43(04): 271 - 281.
  • [28] Romeu, Pablo; Zamora-Martinez, Francisco; Botella-Rocamora, Paloma; Pardo, Juan. Stacked Denoising Auto-Encoders for Short-Term Time Series Forecasting. Artificial Neural Networks, 2015: 463 - 486.
  • [29] Zhou, Zhiwen; Huang, Gaoming; Chen, Haiyang; Gao, Jun. Automatic Radar Waveform Recognition Based on Deep Convolutional Denoising Auto-encoders. Circuits, Systems, and Signal Processing, 2018, 37(09): 4034 - 4048.
  • [30] Wang, Xiu; Liu, Xuejun; Xu, Xinyan. A Recommendation Algorithm for Collaborative Denoising Auto-Encoders Based on User Preference Diffusion. Proceedings of 2017 8th IEEE International Conference on Software Engineering and Service Science (ICSESS 2017), 2017: 447 - 450.