A two-branch encoder-decoder network for image tampering localization

Cited: 0
|
Authors
Luo, Yuling [1 ,2 ]
Liang, Ce [1 ,2 ]
Qin, Sheng [1 ,2 ]
Liu, Junxiu [1 ,2 ]
Fu, Qiang [1 ,2 ]
Yang, Su [3 ]
Affiliations
[1] Guangxi Normal Univ, Sch Elect & Informat Engn, Guangxi Key Lab Brain inspired Comp & Intelligent, Guilin 541004, Peoples R China
[2] Guangxi Normal Univ, Educ Dept Guangxi Zhuang Autonomous Reg, Key Lab Nonlinear Circuits & Opt Commun, Guilin, Peoples R China
[3] Swansea Univ, Dept Comp Sci, Swansea, Wales
Keywords
Image tampering localization; Image forensics; Encoding-decoding; Attention mechanism;
DOI
10.1016/j.asoc.2024.111992
CLC Classification Number
TP18 [Artificial Intelligence Theory];
Discipline Classification Codes
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
Tampered images carrying false information can mislead viewers and pose security risks, yet tampering traces are difficult to detect. To locate tampered regions effectively, this work proposes a dual-domain, deep-learning-based image tampering localization method built on RGB and frequency stream branches. The RGB branch learns tampering features and the content features of the tampered region from the image, while the frequency branch extracts complementary tampering features from the frequency domain. An attention mechanism then integrates the features of the two branches at the fusion stage. In the experiments, the F1 score of the proposed method outperformed those of the baselines on the NIST16 dataset (a 15.3% improvement), and the AUC score outperformed those of the baselines on the NIST16 and COVERAGE datasets (improvements of 3.9% and 4.7%, respectively). This study provides a beneficial alternative to existing image tampering localization techniques.
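To make the described architecture concrete, below is a minimal PyTorch sketch of the dual-branch idea in the abstract: an RGB-stream encoder, a frequency-stream encoder fed by a high-pass residual as a stand-in for frequency-domain features, channel attention at the fusion stage, and a decoder producing a pixel-level tamper map. All module names, channel widths, and the choice of high-pass transform are illustrative assumptions, not the authors' implementation.

```python
# Minimal sketch of a two-branch (RGB + frequency) encoder-decoder with
# attention-based fusion for tampering localization. Layer choices, channel
# widths, and the high-pass "frequency" proxy are assumptions for illustration.
import torch
import torch.nn as nn
import torch.nn.functional as F


def conv_block(cin, cout):
    """Two 3x3 convolutions with BatchNorm and ReLU."""
    return nn.Sequential(
        nn.Conv2d(cin, cout, 3, padding=1), nn.BatchNorm2d(cout), nn.ReLU(inplace=True),
        nn.Conv2d(cout, cout, 3, padding=1), nn.BatchNorm2d(cout), nn.ReLU(inplace=True),
    )


class ChannelAttention(nn.Module):
    """Squeeze-and-excitation style channel attention used at the fusion stage."""
    def __init__(self, channels, reduction=8):
        super().__init__()
        self.fc = nn.Sequential(
            nn.Linear(channels, channels // reduction), nn.ReLU(inplace=True),
            nn.Linear(channels // reduction, channels), nn.Sigmoid(),
        )

    def forward(self, x):
        w = self.fc(x.mean(dim=(2, 3)))   # global average pool -> channel weights
        return x * w[:, :, None, None]    # reweight fused feature channels


class TwoBranchNet(nn.Module):
    def __init__(self):
        super().__init__()
        # RGB-stream encoder: learns content and tampering cues from pixels.
        self.rgb_enc = nn.ModuleList([conv_block(3, 32), conv_block(32, 64)])
        # Frequency-stream encoder: operates on a high-frequency residual.
        self.freq_enc = nn.ModuleList([conv_block(3, 32), conv_block(32, 64)])
        self.attn = ChannelAttention(128)
        # Decoder maps the fused features back to a pixel-level mask.
        self.dec = nn.Sequential(conv_block(128, 64), conv_block(64, 32))
        self.head = nn.Conv2d(32, 1, 1)

    @staticmethod
    def high_pass(x):
        # Crude frequency proxy: image minus its local average.
        return x - F.avg_pool2d(x, 3, stride=1, padding=1)

    def forward(self, x):
        r, f = x, self.high_pass(x)
        for rb, fb in zip(self.rgb_enc, self.freq_enc):
            r = F.max_pool2d(rb(r), 2)
            f = F.max_pool2d(fb(f), 2)
        fused = self.attn(torch.cat([r, f], dim=1))   # attention-guided fusion
        out = self.head(self.dec(fused))
        # Upsample logits to input resolution; sigmoid gives a tamper-probability map.
        return torch.sigmoid(
            F.interpolate(out, size=x.shape[-2:], mode="bilinear", align_corners=False)
        )


if __name__ == "__main__":
    net = TwoBranchNet()
    mask = net(torch.randn(1, 3, 256, 256))
    print(mask.shape)  # torch.Size([1, 1, 256, 256])
```

The two encoders deliberately share no weights, so the RGB and frequency streams can specialize before the attention module weighs their contributions at fusion time.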
Pages: 11
Related Papers
50 records in total
  • [11] A Coupled Encoder-Decoder Network for Joint Face Detection and Landmark Localization
    Wang, Lezi
    Yu, Xiang
    Metaxas, Dimitris N.
    2017 12TH IEEE INTERNATIONAL CONFERENCE ON AUTOMATIC FACE AND GESTURE RECOGNITION (FG 2017), 2017, : 251 - 257
  • [12] Alleviating the Burden of Labeling: Sentence Generation by Attention Branch Encoder-Decoder Network
    Ogura, Tadashi
    Magassouba, Aly
    Sugiura, Komei
    Hirakawa, Tsubasa
    Yamashita, Takayoshi
    Fujiyoshi, Hironobu
    Kawai, Hisashi
    IEEE ROBOTICS AND AUTOMATION LETTERS, 2020, 5 (04): : 5945 - 5952
  • [13] TBFormer: Two-Branch Transformer for Image Forgery Localization
    Liu, Yaqi
    Lv, Binbin
    Jin, Xin
    Chen, Xiaoyu
    Zhang, Xiaokun
    IEEE SIGNAL PROCESSING LETTERS, 2023, 30 : 623 - 627
  • [14] Attentive U-recurrent encoder-decoder network for image dehazing
    Yin, Shibai
    Wang, Yibin
    Yang, Yee-Hong
    NEUROCOMPUTING, 2021, 437 : 143 - 156
  • [15] RESIDUAL ENCODER-DECODER NETWORK INTRODUCED FOR MULTISOURCE SAR IMAGE DESPECKLING
    Gu, Feng
    Zhang, Hong
    Wang, Chao
    Zhang, Bo
    PROCEEDINGS OF 2017 SAR IN BIG DATA ERA: MODELS, METHODS AND APPLICATIONS (BIGSARDATA), 2017,
  • [16] Underwater Image Enhancement Using Encoder-Decoder Scale Attention Network
    Lee, Ka-Ki
    Hsieh, Jun-Wei
    Hsieh, Yi-Kuan
    Hsieh, An-Ting
    2024 6TH INTERNATIONAL CONFERENCE ON COMPUTER COMMUNICATION AND THE INTERNET, ICCCI 2024, 2024, : 101 - 106
  • [17] Iterative Deep Convolutional Encoder-Decoder Network for Medical Image Segmentation
    Kim, Jung Uk
    Kim, Hak Gu
    Ro, Yong Man
    2017 39TH ANNUAL INTERNATIONAL CONFERENCE OF THE IEEE ENGINEERING IN MEDICINE AND BIOLOGY SOCIETY (EMBC), 2017, : 685 - 688
  • [18] Image Denoising Using a Deep Encoder-Decoder Network with Skip Connections
    Couturier, Raphael
    Perrot, Gilles
    Salomon, Michel
    NEURAL INFORMATION PROCESSING (ICONIP 2018), PT VI, 2018, 11306 : 554 - 565
  • [19] A Method of CT Image Denoising Based on Residual Encoder-Decoder Network
    Liu, Yali
    JOURNAL OF HEALTHCARE ENGINEERING, 2021, 2021 : 2384493
  • [20] Robust Image Watermarking Framework Powered by Convolutional Encoder-Decoder Network
    Thien Huynh-The
    Hua, Cam-Hao
    Nguyen Anh Tu
    Kim, Dong-Seong
    2019 DIGITAL IMAGE COMPUTING: TECHNIQUES AND APPLICATIONS (DICTA), 2019, : 552 - 558