TAPE: Task-Agnostic Prior Embedding for Image Restoration

Citations: 18
Authors
Liu, Lin [1 ]
Xie, Lingxi [3 ]
Zhang, Xiaopeng [3 ]
Yuan, Shanxin [4 ]
Chen, Xiangyu [5 ,6 ]
Zhou, Wengang [1 ,2 ]
Li, Houqiang [1 ,2 ]
Tian, Qi [3 ]
Affiliations
[1] Univ Sci & Technol China, EEIS Dept, CAS Key Lab Technol GIPAS, Hefei, Peoples R China
[2] Hefei Comprehens Natl Sci Ctr, Inst Artificial Intelligence, Hefei, Peoples R China
[3] Huawei Cloud BU, Shenzhen, Peoples R China
[4] Huawei Noahs Ark Lab, London, England
[5] Univ Macau, Zhuhai, Peoples R China
[6] Chinese Acad Sci, Shenzhen Inst Adv Technol, Shenzhen, Peoples R China
Funding
National Natural Science Foundation of China
Keywords
REMOVAL; NETWORK;
DOI
10.1007/978-3-031-19797-0_26
Chinese Library Classification
TP18 [Artificial Intelligence Theory]
Discipline Codes
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
Learning a generalized prior for natural image restoration is an important yet challenging task. Early methods mostly relied on handcrafted priors such as normalized sparsity, ℓ0 gradients, and dark channel priors. Recently, deep neural networks have been used to learn various image priors, but they are not guaranteed to generalize. In this paper, we propose a novel approach that embeds a task-agnostic prior into a transformer. Our approach, named Task-Agnostic Prior Embedding (TAPE), consists of two stages, namely, task-agnostic pre-training and task-specific fine-tuning, where the first stage embeds prior knowledge about natural images into the transformer and the second stage extracts that knowledge to assist downstream image restoration. Experiments on various types of degradation validate the effectiveness of TAPE. The image restoration performance in terms of PSNR is improved by as much as 1.45 dB, even outperforming task-specific algorithms. More importantly, TAPE shows the ability to disentangle generalized image priors from degraded images, which gives it favorable transfer ability to unknown downstream tasks.
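The two-stage scheme the abstract describes can be illustrated with a minimal sketch: a restoration network carries a learnable prior embedding that is trained task-agnostically on mixed degradations, then frozen while the rest of the network is fine-tuned on one task. All module names, sizes, and the attention-based fusion below are hypothetical stand-ins, not the paper's actual architecture.

```python
import torch
import torch.nn as nn

class PriorEmbeddingNet(nn.Module):
    """Toy restoration net with a learnable, task-agnostic prior (illustrative only)."""

    def __init__(self, channels=16, num_prior_tokens=8):
        super().__init__()
        self.encoder = nn.Conv2d(3, channels, 3, padding=1)
        # Learnable "prior" tokens meant to capture generalized natural-image statistics.
        self.prior = nn.Parameter(torch.randn(num_prior_tokens, channels))
        self.decoder = nn.Conv2d(channels, 3, 3, padding=1)

    def forward(self, x):
        feat = torch.relu(self.encoder(x))
        b, c, h, w = feat.shape
        # Fuse image features with the prior via attention over the prior tokens.
        q = feat.flatten(2).transpose(1, 2)            # (b, h*w, c)
        attn = torch.softmax(q @ self.prior.t(), -1)   # (b, h*w, tokens)
        fused = (attn @ self.prior).transpose(1, 2).view(b, c, h, w)
        return self.decoder(feat + fused)

def train_step(model, opt, degraded, clean):
    opt.zero_grad()
    loss = nn.functional.l1_loss(model(degraded), clean)
    loss.backward()
    opt.step()
    return loss.item()

model = PriorEmbeddingNet()

# Stage 1: task-agnostic pre-training on a mix of degradation types.
pretrain_opt = torch.optim.Adam(model.parameters(), lr=1e-3)
for _ in range(2):
    degraded = torch.rand(2, 3, 16, 16)  # stand-in for mixed degraded crops
    clean = torch.rand(2, 3, 16, 16)
    train_step(model, pretrain_opt, degraded, clean)

# Stage 2: task-specific fine-tuning. Freezing the prior embedding means the
# generalized knowledge is extracted and reused rather than overwritten.
model.prior.requires_grad_(False)
finetune_opt = torch.optim.Adam(
    [p for p in model.parameters() if p.requires_grad], lr=1e-4)
for _ in range(2):
    degraded = torch.rand(2, 3, 16, 16)  # stand-in for one task, e.g. denoising
    clean = torch.rand(2, 3, 16, 16)
    train_step(model, finetune_opt, degraded, clean)
```

The freeze-then-fine-tune split is only one plausible way to realize "extracting" the embedded prior in the second stage; the paper's transformer-based design is more involved.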
Pages: 447-464 (18 pages)
Related Papers (50 total)
  • [21] DJMix: Unsupervised Task-agnostic Image Augmentation for Improving Robustness of Convolutional Neural Networks
    Hataya, Ryuichiro
    Nakayama, Hideki
    2022 INTERNATIONAL JOINT CONFERENCE ON NEURAL NETWORKS (IJCNN), 2022,
  • [22] Task-Agnostic Evolution of Diverse Repertoires of Swarm Behaviours
    Gomes, Jorge
    Christensen, Anders Lyhne
    SWARM INTELLIGENCE (ANTS 2018), 2018, 11172 : 225 - 238
  • [23] Learning Task-Agnostic Action Spaces for Movement Optimization
    Babadi, Amin
    van de Panne, Michiel
    Liu, C. Karen
    Hamalainen, Perttu
    IEEE TRANSACTIONS ON VISUALIZATION AND COMPUTER GRAPHICS, 2022, 28 (12) : 4700 - 4712
  • [24] Latent Plans for Task-Agnostic Offline Reinforcement Learning
    Rosete-Beas, Erick
    Mees, Oier
    Kalweit, Gabriel
    Boedecker, Joschka
    Burgard, Wolfram
    CONFERENCE ON ROBOT LEARNING, VOL 205, 2022, 205 : 1838 - 1849
  • [25] EViLBERT: Learning Task-Agnostic Multimodal Sense Embeddings
    Calabrese, Agostina
    Bevilacqua, Michele
    Navigli, Roberto
    PROCEEDINGS OF THE TWENTY-NINTH INTERNATIONAL JOINT CONFERENCE ON ARTIFICIAL INTELLIGENCE, 2020, : 481 - 487
  • [26] Task-Agnostic Object Recognition for Mobile Robots through Few-Shot Image Matching
    Chiatti, Agnese
    Bardaro, Gianluca
    Bastianelli, Emanuele
    Tiddi, Ilaria
    Mitra, Prasenjit
    Motta, Enrico
    ELECTRONICS, 2020, 9 (03)
  • [27] Task-agnostic feature extractors for incremental learning at the edge
    Loomis, Lisa
    Wise, David
    Inkawhich, Nathan
    Thiem, Clare
    McDonald, Nathan
    ARTIFICIAL INTELLIGENCE AND MACHINE LEARNING FOR MULTI-DOMAIN OPERATIONS APPLICATIONS VI, 2024, 13051
  • [28] Task-Agnostic Dynamics Priors for Deep Reinforcement Learning
    Du, Yilun
    Narasimhan, Karthik
    INTERNATIONAL CONFERENCE ON MACHINE LEARNING, VOL 97, 2019, 97
  • [29] Task-Agnostic Amortized Inference of Gaussian Process Hyperparameters
    Liu, Sulin
    Sun, Xingyuan
    Ramadge, Peter J.
    Adams, Ryan P.
    ADVANCES IN NEURAL INFORMATION PROCESSING SYSTEMS 33, NEURIPS 2020, 2020, 33
  • [30] TADA: Efficient Task-Agnostic Domain Adaptation for Transformers
    Hung, Chia-Chien
    Lange, Lukas
    Stroetgen, Jannik
    FINDINGS OF THE ASSOCIATION FOR COMPUTATIONAL LINGUISTICS, ACL 2023, 2023, : 487 - 503