Face Inpainting with Pre-trained Image Transformers

Cited: 0
Authors
Gonc, Kaan [1 ]
Saglam, Baturay [2 ]
Kozat, Suleyman S. [2 ]
Dibeklioglu, Hamdi [1 ]
Affiliations
[1] Bilkent University, Department of Computer Engineering, Ankara, Turkey
[2] Bilkent University, Department of Electrical and Electronics Engineering, Ankara, Turkey
Keywords
image inpainting; transformers; deep generative models;
DOI
10.1109/SIU55565.2022.9864676
CLC Classification Code
TP39 [Computer Applications]
Subject Classification Codes
081203; 0835
Abstract
Image inpainting is an underdetermined inverse problem: many different contents can realistically fill a missing or damaged region. Convolutional neural networks (CNNs) are commonly used to generate visually pleasing content, yet their restricted receptive fields limit how well they capture global image structure. Transformers, by contrast, model long-range relationships and can generate diverse content by autoregressively modeling pixel-sequence distributions with image-level attention. However, current transformer-based inpainting approaches are trained on task-specific datasets and require large-scale data. To remedy this, we introduce an image inpainting approach that leverages pre-trained vision transformers. Experiments show that our approach outperforms CNN-based methods and performs close to task-specific transformer methods.
Pages: 4
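
The abstract describes the core mechanism: a transformer autoregressively models the distribution over a sequence of discretized pixel tokens, so missing pixels can be sampled conditioned on the observed ones. Below is a minimal PyTorch sketch of that idea. The TinyPixelTransformer class, the vocabulary and grid sizes, and the inpaint helper are illustrative assumptions, not the paper's implementation; a real system would load a genuinely pre-trained image transformer (e.g., an ImageGPT-style model) in place of the tiny stand-in defined here.

import torch
import torch.nn as nn

VOCAB = 512    # assumed palette of discretized pixel tokens
SEQ_LEN = 64   # assumed 8x8 token grid, flattened in raster order


class TinyPixelTransformer(nn.Module):
    """Stand-in (hypothetical) for a pre-trained causal image transformer."""

    def __init__(self, vocab=VOCAB, dim=128, heads=4, layers=2):
        super().__init__()
        self.tok = nn.Embedding(vocab, dim)
        self.pos = nn.Embedding(SEQ_LEN, dim)
        block = nn.TransformerEncoderLayer(dim, heads, batch_first=True)
        self.encoder = nn.TransformerEncoder(block, layers)
        self.head = nn.Linear(dim, vocab)

    def forward(self, ids):
        n = ids.size(1)
        # Causal mask: each pixel token attends only to earlier tokens.
        causal = nn.Transformer.generate_square_subsequent_mask(n)
        x = self.tok(ids) + self.pos(torch.arange(n, device=ids.device))
        return self.head(self.encoder(x, mask=causal))


@torch.no_grad()
def inpaint(model, ids, missing, temperature=1.0):
    """Fill positions where `missing` is True, left to right, sampling each
    damaged token from the model's predictive distribution conditioned on
    all tokens already observed or previously sampled."""
    ids = ids.clone()
    for t in range(1, ids.size(1)):  # position 0 assumed observed
        if missing[:, t].any():
            logits = model(ids[:, :t])[:, -1] / temperature
            sampled = torch.multinomial(logits.softmax(-1), 1).squeeze(-1)
            ids[:, t] = torch.where(missing[:, t], sampled, ids[:, t])
    return ids


# Usage: corrupt a block of tokens, then restore it autoregressively.
model = TinyPixelTransformer().eval()
ids = torch.randint(0, VOCAB, (1, SEQ_LEN))
missing = torch.zeros(1, SEQ_LEN, dtype=torch.bool)
missing[:, 20:40] = True  # the "damaged" region
restored = inpaint(model, ids, missing)

The point of the sketch is only the conditional sampling loop: because the model is causal, each damaged token is drawn given everything to its left, which is what makes the fill-in content diverse rather than a single deterministic completion.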