Coherent and Consistent Relational Transfer Learning with Auto-encoders

Cited: 0
Authors
Stromfelt, Harald [1 ,3 ]
Dickens, Luke [2 ]
Garcez, Artur d'Avila [3 ]
Russo, Alessandra [1 ]
Affiliations
[1] Imperial College London, Exhibition Rd, London SW7 2BX, England
[2] UCL, Gower St, London WC1E 6BT, England
[3] City University of London, Northampton Sq, London EC1V 0HB, England
Keywords
Representation Learning; Relation Learning; Variational AutoEncoders; Concept Learning;
DOI
Not available
CLC number
TP18 [Artificial Intelligence Theory];
Discipline codes
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
Human-defined concepts are inherently transferable, but it is not clear under what conditions they can be modelled effectively by non-symbolic artificial learners. This paper argues that for a transferable concept to be learned, the system of relations that defines it must be coherent across domains and properties. That is, the relations should be consistent with respect to relational constraints, and this consistency must extend beyond the representations encountered in the source domain. Further, where relations are modelled by differentiable functions, their gradients must conform: the functions must at times move together to preserve consistency. We propose a Partial Relation Transfer (PRT) task which exposes how well relation-decoders model these properties, and exemplify this with an ordinality prediction transfer task, including a new data set for the transfer domain. We evaluate existing relation-decoder models on this task, as well as a novel model designed around the principles of consistency and gradient conformity. Results show that consistency across broad regions of input space indicates good transfer performance, and that good gradient conformity facilitates consistency.
Pages: 176-192
Page count: 17
Related papers
50 records in total
  • [1] Transfer learning with deep manifold regularized auto-encoders
    Zhu, Yi
    Wu, Xindong
    Li, Peipei
    Zhang, Yuhong
    Hu, Xuegang
    [J]. NEUROCOMPUTING, 2019, 369 : 145 - 154
  • [2] UNDERSTANDING LINEAR STYLE TRANSFER AUTO-ENCODERS
    Pradhan, Ian
    Lyu, Siwei
    [J]. 2021 IEEE 31ST INTERNATIONAL WORKSHOP ON MACHINE LEARNING FOR SIGNAL PROCESSING (MLSP), 2021,
  • [3] A hybrid learning model based on auto-encoders
    Zhou, Ju
    Ju, Li
    Zhang, Xiaolong
    [J]. PROCEEDINGS OF THE 2017 12TH IEEE CONFERENCE ON INDUSTRIAL ELECTRONICS AND APPLICATIONS (ICIEA), 2017, : 522 - 528
  • [4] Fisher Auto-Encoders
    Elkhalil, Khalil
    Hasan, Ali
    Ding, Jie
    Farsiu, Sina
    Tarokh, Vahid
    [J]. 24TH INTERNATIONAL CONFERENCE ON ARTIFICIAL INTELLIGENCE AND STATISTICS (AISTATS), 2021, 130 : 352 - 360
  • [5] Ornstein Auto-Encoders
    Choi, Youngwon
    Won, Joong-Ho
    [J]. PROCEEDINGS OF THE TWENTY-EIGHTH INTERNATIONAL JOINT CONFERENCE ON ARTIFICIAL INTELLIGENCE, 2019, : 2172 - 2178
  • [6] Semi-Supervised Representation Learning: Transfer Learning with Manifold Regularized Auto-encoders
    Zhu, Yi
    Hu, Xuegang
    Zhang, Yuhong
    Li, Peipei
    [J]. 2018 9TH IEEE INTERNATIONAL CONFERENCE ON BIG KNOWLEDGE (ICBK), 2018, : 83 - 90
  • [7] Transfer Learning Fusion and Stacked Auto-encoders for Viral Lung Disease Classification
    Ketfi, Meryem
    Belahcene, Mebarka
    Bourennane, Salah
    [J]. NEW GENERATION COMPUTING, 2024, 42 (04) : 651 - 684
  • [8] Heterogeneous transfer learning based on Stack Sparse Auto-Encoders for fault diagnosis
    Wang Chunfeng
    Lv Zheng
    Zhao Jun
    Wang Wei
    [J]. 2018 CHINESE AUTOMATION CONGRESS (CAC), 2018, : 4277 - 4281
  • [9] Transforming Auto-Encoders
    Hinton, Geoffrey E.
    Krizhevsky, Alex
    Wang, Sida D.
    [J]. ARTIFICIAL NEURAL NETWORKS AND MACHINE LEARNING - ICANN 2011, PT I, 2011, 6791 : 44 - 51
  • [10] An Enquiry on similarities between Renormalization Group and Auto-Encoders using Transfer Learning
    Shukla, Mohak
    Thakur, Ajay D.
    [J]. PHYSICA A-STATISTICAL MECHANICS AND ITS APPLICATIONS, 2022, 608