Underwater Image Enhancement Using Pre-trained Transformer

Cited by: 8
Authors
Boudiaf, Abderrahmene [1 ]
Guo, Yuhang [1 ]
Ghimire, Adarsh [1 ]
Werghi, Naoufel [1 ]
De Masi, Giulia [1 ,2 ]
Javed, Sajid [1 ]
Dias, Jorge [1 ]
Affiliations
[1] Khalifa Univ, Abu Dhabi, U Arab Emirates
[2] Technol Innovat Inst, ARRC, Abu Dhabi, U Arab Emirates
Keywords
Vision transformer; Underwater imaging; Image enhancement;
D O I
10.1007/978-3-031-06433-3_41
CLC classification
TP18 [Artificial Intelligence Theory];
Discipline codes
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
The goal of this work is to apply a denoising image transformer to remove distortion from underwater images and to compare it with other similar approaches. Automatic restoration of underwater images plays an important role because it improves image quality without requiring more expensive equipment. It is also a clear example of how machine learning algorithms can support marine exploration and monitoring by reducing the need for human intervention, such as manual processing of the images, thus saving time, effort, and cost. This paper presents the first application of the transformer-based approach called "Pre-Trained Image Processing Transformer" to underwater images. The approach is tested on the UFO-120 dataset, which contains 1500 underwater images paired with corresponding clean images.
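Comparing a restored image against its paired clean reference, as the UFO-120 evaluation implies, is typically done with metrics such as PSNR. The following minimal sketch (my own illustration, not code from the paper) shows how such a per-pair score can be computed with NumPy; the array shapes and 8-bit value range are assumptions.

```python
import numpy as np

def psnr(reference: np.ndarray, restored: np.ndarray, max_val: float = 255.0) -> float:
    """Peak signal-to-noise ratio between a clean reference and a restored image."""
    mse = np.mean((reference.astype(np.float64) - restored.astype(np.float64)) ** 2)
    if mse == 0:
        return float("inf")  # identical images: unbounded PSNR
    return 10.0 * np.log10((max_val ** 2) / mse)

# Toy example: a "restored" image offset from the reference by 2 intensity levels.
ref = np.full((4, 4), 128, dtype=np.uint8)
out = np.full((4, 4), 130, dtype=np.uint8)
print(round(psnr(ref, out), 2))  # → 42.11
```

Averaging this score over all test pairs gives a single number per method, which is how enhancement approaches are usually ranked against one another on a paired dataset.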
Pages: 480-488
Page count: 9
Related papers
50 items total
  • [1] Pre-trained low-light image enhancement transformer
    Zhang, Jingyao
    Hao, Shijie
    Rao, Yuan
    [J]. IET IMAGE PROCESSING, 2024, 18 (08) : 1967 - 1984
  • [2] Pre-Trained Image Processing Transformer
    Chen, Hanting
    Wang, Yunhe
    Guo, Tianyu
    Xu, Chang
    Deng, Yiping
    Liu, Zhenhua
    Ma, Siwei
    Xu, Chunjing
    Xu, Chao
    Gao, Wen
    [J]. 2021 IEEE/CVF CONFERENCE ON COMPUTER VISION AND PATTERN RECOGNITION, CVPR 2021, 2021, : 12294 - 12305
  • [3] Diabetic Retinopathy Classification with pre-trained Image Enhancement Model
    Mudaser, Wahidullah
    Padungweang, Praisan
    Mongkolnam, Pornchai
    Lavangnananda, Patcharaporn
    [J]. 2021 IEEE 12TH ANNUAL UBIQUITOUS COMPUTING, ELECTRONICS & MOBILE COMMUNICATION CONFERENCE (UEMCON), 2021, : 629 - 632
  • [4] Chemformer: a pre-trained transformer for computational chemistry
    Irwin, Ross
    Dimitriadis, Spyridon
    He, Jiazhen
    Bjerrum, Esben Jannik
    [J]. MACHINE LEARNING-SCIENCE AND TECHNOLOGY, 2022, 3 (01):
  • [5] PART: Pre-trained Authorship Representation Transformer
    Huertas-Tato, Javier
    Martin, Alejandro
    Camacho, David
    [J]. HUMAN-CENTRIC COMPUTING AND INFORMATION SCIENCES, 2024, 14
  • [6] Integrally Pre-Trained Transformer Pyramid Networks
    Tian, Yunjie
    Xie, Lingxi
    Wang, Zhaozhi
    Wei, Longhui
    Zhang, Xiaopeng
    Jiao, Jianbin
    Wang, Yaowei
    Tian, Qi
    Ye, Qixiang
    [J]. 2023 IEEE/CVF CONFERENCE ON COMPUTER VISION AND PATTERN RECOGNITION (CVPR), 2023, : 18610 - 18620
  • [7] StyleAutoEncoder for Manipulating Image Attributes Using Pre-trained StyleGAN
    Bedychaj, Andrzej
    Tabor, Jacek
    Smieja, Marek
    [J]. ADVANCES IN KNOWLEDGE DISCOVERY AND DATA MINING, PT II, PAKDD 2024, 2024, 14646 : 118 - 130
  • [8] Pre-trained Diffusion Models for Plug-and-Play Medical Image Enhancement
    Ma, Jun
    Zhu, Yuanzhi
    You, Chenyu
    Wang, Bo
    [J]. MEDICAL IMAGE COMPUTING AND COMPUTER ASSISTED INTERVENTION, MICCAI 2023, PT III, 2023, 14222 : 3 - 13
  • [9] Generative Pre-Trained Transformer for Cardiac Abnormality Detection
    Gaudilliere, Pierre Louis
    Sigurthorsdottir, Halla
    Aguet, Clementine
    Van Zaen, Jerome
    Lemay, Mathieu
    Delgado-Gonzalo, Ricard
    [J]. 2021 COMPUTING IN CARDIOLOGY (CINC), 2021,
  • [10] OMPGPT: A Generative Pre-trained Transformer Model for OpenMP
    Chen, Le
    Bhattacharjee, Arijit
    Ahmed, Nesreen
    Hasabnis, Niranjan
    Oren, Gal
    Vo, Vy
    Jannesari, Ali
    [J]. EURO-PAR 2024: PARALLEL PROCESSING, PT I, EURO-PAR 2024, 2024, 14801 : 121 - 134