A Multi-Implicit Neural Representation for Fonts

Cited by: 0
Authors:
Reddy, Pradyumna [1 ]
Zhang, Zhifei [2 ]
Wang, Zhaowen [2 ]
Fisher, Matthew [2 ]
Jin, Hailin [2 ]
Mitra, Niloy J. [2 ]
Affiliations:
[1] UCL, London, England
[2] Adobe Res, San Jose, CA USA
Keywords: (none listed)
DOI: (not available)
Chinese Library Classification: TP18 [Artificial Intelligence Theory]
Discipline codes: 081104; 0812; 0835; 1405
Abstract
Fonts are ubiquitous across documents and come in a variety of styles. They are represented either in a native vector format or rasterized to produce fixed-resolution images. In the first case, the non-standard representation prevents benefiting from the latest network architectures for neural representations; in the latter case, the rasterized representation, when encoded via neural networks, loses data fidelity, as font-specific discontinuities such as edges and corners are difficult for neural networks to represent. Based on the observation that complex fonts can be represented by a superposition of a set of simpler occupancy functions, we introduce multi-implicits, which represent fonts as a permutation-invariant set of learned implicit functions without losing features (e.g., edges and corners). However, while multi-implicits locally preserve font features, obtaining supervision in the form of ground-truth multi-channel signals is a problem in itself. Instead, we propose how to train such a representation with only local supervision, while the proposed neural architecture directly finds globally consistent multi-implicits for font families. We extensively evaluate the proposed representation on various tasks, including reconstruction, interpolation, and synthesis, demonstrating clear advantages over existing alternatives. Additionally, the representation naturally enables glyph completion, wherein a single characteristic font is used to synthesize a whole font family in the target style.
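As a loose illustration of the core idea (my own sketch, not the paper's architecture or code): a sharp glyph corner, which a single smooth implicit network tends to round off, is represented exactly when occupancy is defined as a permutation-invariant reduction (here, a min) over a set of simple per-channel implicits.

```python
# Hypothetical sketch of the multi-implicit idea (not the paper's code):
# each "channel" is a simple smooth implicit f_i, with f_i > 0 meaning
# "inside". A 90-degree corner is exact as the intersection (min) of two
# smooth half-plane implicits, and the min reduction does not depend on
# the ordering of the channel set.

def corner_occupancy(x, y):
    """Quarter-plane corner at the origin: inside iff x > 0 and y > 0."""
    channels = [x, y]           # two linear (perfectly smooth) implicits
    return min(channels) > 0.0  # intersection creates the sharp corner

# The crease of min(x, y) sits exactly on the corner, with no rounding:
assert corner_occupancy(0.5, 0.5)        # inside the corner
assert not corner_occupancy(0.5, -0.01)  # just below the horizontal edge
assert not corner_occupancy(-0.01, 0.5)  # just left of the vertical edge

# Permutation invariance of the channel set:
assert min([0.3, -0.2]) == min([-0.2, 0.3])
```

The same principle extends to whole glyphs: unions of such corner pieces (a max over intersections) compose complex outlines from simple smooth parts while keeping discontinuities sharp.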
Pages: 11