TransU2-Net: A Hybrid Transformer Architecture for Image Splicing Forgery Detection

Cited by: 2
Authors
Yan, Caiping [1 ]
Li, Shuyuan [1 ]
Li, Hong [2 ]
Affiliations
[1] Hangzhou Normal Univ, Dept Comp Sci, Hangzhou 311121, Peoples R China
[2] Hangzhou InsVis Technol Co Ltd, Hangzhou 311121, Peoples R China
Funding
National Natural Science Foundation of China
Keywords
Feature extraction; Forgery; Semantics; Decoding; Splicing; Location awareness; Streaming media; Convolutional neural networks; Image splicing forgery detection; tampered region localization; convolutional neural network; self-attention; cross-attention; LOCALIZATION;
DOI
10.1109/ACCESS.2023.3264014
CLC Classification
TP [Automation Technology; Computer Technology]
Discipline Code
0812
Abstract
In recent years, various convolutional neural network (CNN) based frameworks have been proposed to detect forged regions in images. However, most existing models cannot achieve satisfactory performance on tampered areas of various sizes, especially large-scale objects. To obtain accurate object-level forgery localization results, we propose TransU2-Net, a novel hybrid transformer architecture that combines the advantages of spatial dependencies and contextual information from different scales. Specifically, long-range semantic dependencies are captured by the last encoder block to locate large-scale tampered areas more completely. Meanwhile, non-semantic features are filtered out by enhancing low-level features under the guidance of high-level semantic information in the skip connections, achieving more refined spatial recovery. As a result, our hybrid model can locate spliced forgeries of various sizes without requiring large-scale dataset pre-training. Experimental results on the CASIA 2.0 and Columbia datasets show that our framework outperforms state-of-the-art methods; on CASIA 2.0, the F-measure improves by 8.4% over the previous best method.
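The abstract describes enhancing low-level features under the guidance of high-level semantic information in the skip connections. As a rough illustration only (this is not the authors' code; the projection names and dimensions here are hypothetical), a cross-attention-guided skip connection can be sketched with standard scaled dot-product attention, where queries come from the low-level features and keys/values from the high-level semantic features:

```python
# Hypothetical sketch of a cross-attention-guided skip connection,
# assuming standard scaled dot-product attention. Not the TransU2-Net code.
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def cross_attention_skip(low, high, d_k=32, seed=0):
    """low:  (N_low, C)  fine-grained low-level features (queries)
    high: (N_high, C) high-level semantic features (keys/values)
    Returns low-level features re-weighted by semantic context,
    with a residual connection."""
    rng = np.random.default_rng(seed)       # random projections for the sketch
    C = low.shape[1]
    Wq = rng.standard_normal((C, d_k)) / np.sqrt(C)
    Wk = rng.standard_normal((C, d_k)) / np.sqrt(C)
    Wv = rng.standard_normal((C, C)) / np.sqrt(C)
    Q, K, V = low @ Wq, high @ Wk, high @ Wv
    attn = softmax(Q @ K.T / np.sqrt(d_k))  # (N_low, N_high) attention map
    return low + attn @ V                   # residual: enhanced low-level features
```

In a real decoder the projections would be learned and the feature maps would be flattened spatial grids, but the flow is the same: semantic context decides which low-level responses survive into the spatial-recovery stage.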
Pages: 33313 - 33323
Page count: 11
Related Papers
(50 records)
  • [1] TransU2-Net: An Effective Medical Image Segmentation Framework Based on Transformer and U2-Net
    Li, Xiang
    Fang, Xianjin
    Yang, Gaoming
    Su, Shuzhi
    Zhu, Li
    Yu, Zekuan
    IEEE JOURNAL OF TRANSLATIONAL ENGINEERING IN HEALTH AND MEDICINE, 2023, 11 : 441 - 450
  • [2] RRU-Net: The Ringed Residual U-Net for Image Splicing Forgery Detection
    Bi, Xiuli
    Wei, Yang
    Xiao, Bin
    Li, Weisheng
    2019 IEEE/CVF CONFERENCE ON COMPUTER VISION AND PATTERN RECOGNITION WORKSHOPS (CVPRW 2019), 2019, : 30 - 39
  • [3] The Circular U-Net with Attention Gate for Image Splicing Forgery Detection
    Peng, Jin
    Li, Yinghao
    Liu, Chengming
    Gao, Xiaomeng
    ELECTRONICS, 2023, 12 (06)
  • [4] Image splicing forgery detection: A review
    Kumari, Ritesh
    Garg, Hitendra
    MULTIMEDIA TOOLS AND APPLICATIONS, 2024, 84 (8) : 4163 - 4201
  • [5] Fusing Multi-scale Attention and Transformer for Detection and Localization of Image Splicing Forgery
    Xu, Yanzhi
    Zheng, Jiangbin
    Shao, Chenyu
    ADVANCES IN BRAIN INSPIRED COGNITIVE SYSTEMS, BICS 2023, 2024, 14374 : 335 - 344
  • [6] Splicing Forgery Detection and the Impact of Image Resolution
    Devagiri, Vishnu Manasa
    Cheddad, Abbas
    PROCEEDINGS OF THE 9TH INTERNATIONAL CONFERENCE ON ELECTRONICS, COMPUTERS AND ARTIFICIAL INTELLIGENCE - ECAI 2017, 2017,
  • [7] Image splicing forgery detection by combining synthetic adversarial networks and hybrid dense U-net based on multiple spaces
    Wei, Yang
    Ma, Jianfeng
    Wang, Zhuzhu
    Xiao, Bin
    Zheng, Wenying
    INTERNATIONAL JOURNAL OF INTELLIGENT SYSTEMS, 2022, 37 (11) : 8291 - 8308
  • [8] D-Net: A dual-encoder network for image splicing forgery detection and localization
    Yang, Zonglin
    Liu, Bo
    Bi, Xiuli
    Xiao, Bin
    Li, Weisheng
    Wang, Guoyin
    Gao, Xinbo
    PATTERN RECOGNITION, 2024, 155
  • [9] DWT and LBP hybrid feature based deep learning technique for image splicing forgery detection
    Singh, Mahesh K.
    SOFT COMPUTING, 2024, 28 (20) : 12207 - 12215
  • [10] Image splicing forgery detection using noise level estimation
    Meena, Kunj Bihari
    Tyagi, Vipin
    MULTIMEDIA TOOLS AND APPLICATIONS, 2023, 82 (09) : 13181 - 13198