UNDERSTANDING LINEAR STYLE TRANSFER AUTO-ENCODERS

Cited: 0
Authors
Pradhan, Ian [1 ]
Lyu, Siwei [1 ]
Affiliations
[1] University at Buffalo, State University of New York, Computer Science & Engineering, Buffalo, NY 14260 USA
Funding
U.S. National Science Foundation
Keywords
style transfer; autoencoder; optimization; SUM;
DOI
10.1109/MLSP52302.2021.9596412
CLC number
TP18 [Artificial Intelligence Theory]
Discipline codes
081104; 0812; 0835; 1405
Abstract
Style transfer auto-encoders (STAEs) have recently been shown to be highly effective in synthesizing images whose style is transferred from another image, yet the mechanism behind their effectiveness is not well understood. In this work, we aim to provide an answer to this question by studying a simpler variant of STAE, the linear style transfer auto-encoder (LinSTAE), in which the encoder and decoders are all linear models. We show that the objective function of LinSTAE, under the l(2) loss, admits a simple form, and that its optimal solutions reveal how the encoder captures the joint characteristics of the input and target domains while the decoders restore their idiosyncrasies. We further show that, at least in the linear case, the cycle reconstruction loss is not necessary: the vanilla LinSTAE objective function is already effective. We use numerical experiments on synthetic data and the MNIST dataset to showcase our findings.
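The architecture described in the abstract — a shared linear encoder that captures joint characteristics of both domains, with per-domain linear decoders that restore each domain's idiosyncrasies — can be sketched numerically. The code below is an illustrative assumption based only on that description, not the paper's exact formulation or algorithm: it fits the shared encoder via an SVD of the pooled data (the optimal l2 linear auto-encoder spans the top principal subspace) and fits each decoder by least squares.

```python
import numpy as np

rng = np.random.default_rng(0)
d, k, n = 8, 3, 500          # ambient dim, latent dim, samples per domain

# Two synthetic domains sharing a latent source Z but with different
# "styles" (domain-specific mixing matrices), echoing the paper's
# synthetic-data setting.
Z = rng.normal(size=(k, n))
A_x, A_y = rng.normal(size=(d, k)), rng.normal(size=(d, k))
X, Y = A_x @ Z, A_y @ Z

def fit_shared_encoder(data, k):
    """Rank-k linear encoder via SVD: under the l2 loss the optimal
    linear auto-encoder projects onto the top-k left singular subspace
    (Eckart-Young)."""
    U, _, _ = np.linalg.svd(data, full_matrices=False)
    return U[:, :k].T        # (k, d) projection onto top-k subspace

# Shared encoder fit on the pooled domains captures their joint
# characteristics; per-domain decoders are least-squares fits that
# restore each domain's idiosyncrasies.
enc = fit_shared_encoder(np.hstack([X, Y]), k)
code_x, code_y = enc @ X, enc @ Y
dec_x = X @ np.linalg.pinv(code_x)   # (d, k) decoder for domain X
dec_y = Y @ np.linalg.pinv(code_y)   # (d, k) decoder for domain Y

# Style transfer: encode an X sample, decode with the Y decoder.
x_as_y = dec_y @ (enc @ X[:, :1])
```

Note that no cycle reconstruction term appears in this fit, consistent with the abstract's claim that the vanilla l2 objective already suffices in the linear case.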
Pages: 5