Text Steganalysis Based on Hierarchical Supervised Learning and Dual Attention Mechanism

Cited by: 2
Authors
Peng, Wanli [1 ,2 ]
Li, Sheng [1 ,2 ]
Qian, Zhenxing [1 ,2 ]
Zhang, Xinpeng [1 ,2 ]
Affiliations
[1] Fudan Univ, Sch Comp Sci, Shanghai 200082, Peoples R China
[2] Fudan Univ, Key Lab Culture & Tourism Intelligent Comp Minist, Shanghai 200082, Peoples R China
Funding
China Postdoctoral Science Foundation; National Natural Science Foundation of China
Keywords
Text steganalysis; hierarchical supervised learning; dual attention mechanism; LINGUISTIC STEGANALYSIS;
DOI
10.1109/TASLP.2023.3319975
CLC classification number
O42 [Acoustics]
Subject classification codes
070206; 082403
Abstract
Recent deep neural network methods for text steganalysis have succeeded in mining various feature representations. However, few studies have explicitly analyzed the potential security issues of generative text steganography, and current text steganalysis approaches give little detailed consideration to designing deep learning architectures tailored to these challenges. In this article, to tackle these problems, we first analyze, theoretically and empirically, the inevitable embedding distortions of generative text steganography at the semantic and statistical levels. In light of this, we then propose a text steganalysis method based on hierarchical supervised learning and a dual attention mechanism. Concretely, to extract highly effective semantic features, the proposed method fine-tunes a BERT extractor through hierarchical supervised learning that combines signals from multiple softmax classifiers rather than relying solely on the final one. The mean and standard deviation of the Gaussian distributions of cover and stego texts are then estimated by the encoder of a variational autoencoder and used to capture features representing the statistical distortion of generative text steganography. Subsequently, a dual attention mechanism dynamically fuses the semantic and statistical features, producing discriminative feature representations for text steganalysis. Experimental results demonstrate that the proposed method surpasses current state-of-the-art techniques across three scenarios: specific text steganalysis, semi-blind text steganalysis, and blind text steganalysis.
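The abstract describes fusing a semantic feature stream (from a fine-tuned BERT extractor) with a statistical stream (mean and standard deviation estimated by a VAE encoder) via a dual attention mechanism. The paper's exact architecture is not reproduced here; the following is a minimal NumPy sketch of one plausible reading, in which each stream is scored against a shared learned query and the streams are fused by a softmax-weighted convex combination. All dimensions, the projection `W_q`, and the way (mu, sigma) are mapped into the feature space are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

# Hypothetical dimensions: d-dim semantic features (stand-in for a BERT
# embedding) and (mu, log_var) from a VAE encoder's Gaussian estimate.
d = 8
h_sem = rng.normal(size=(1, d))                    # semantic feature stream
mu = rng.normal(size=(1, d))
log_var = rng.normal(size=(1, d))
sigma = np.exp(0.5 * log_var)                      # standard deviation
W_stat = rng.normal(size=(2 * d, d))               # assumed (mu, sigma) projection
h_stat = np.concatenate([mu, sigma], axis=-1) @ W_stat  # statistical stream

# Dual attention (one hypothetical reading): score each stream with a
# shared bilinear query, normalize with softmax, and fuse the streams.
W_q = rng.normal(size=(d, d))
scores = np.stack(
    [(h_sem @ W_q * h_sem).sum(-1),
     (h_stat @ W_q * h_stat).sum(-1)],
    axis=-1,
) / np.sqrt(d)
alpha = softmax(scores, axis=-1)                   # weights over the two streams
h_fused = alpha[..., :1] * h_sem + alpha[..., 1:] * h_stat  # (1, d) fused feature
```

The fused vector `h_fused` would then feed a binary cover/stego classifier; the hierarchical supervision described in the abstract would additionally attach softmax classifiers to intermediate BERT layers and sum their losses during fine-tuning.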
Pages: 3513-3526 (14 pages)
Related Papers (showing 10 of 50)
  • [1] Image Steganalysis Network Based on Dual-Attention Mechanism
    Zhang, Xuanbo
    Zhang, Xinpeng
    Feng, Guorui
    [J]. IEEE SIGNAL PROCESSING LETTERS, 2023, 30 : 1287 - 1291
  • [2] A Weakly Supervised Text Detection Based on Attention Mechanism
    Dong, Lanfang
    Zhou, Diancheng
    Liu, Hanchao
    [J]. IMAGE AND GRAPHICS, ICIG 2019, PT I, 2019, 11901 : 406 - 417
  • [3] Image steganalysis algorithm based on deep learning and attention mechanism for computer communication
    Li, Huan
    Dong, Shi
    [J]. JOURNAL OF ELECTRONIC IMAGING, 2024, 33 (01)
  • [4] Image Steganalysis of Low Embedding Rate Based on the Attention Mechanism and Transfer Learning
    Liu, Shouyue
    Zhang, Chunying
    Wang, Liya
    Yang, Pengchao
    Hua, Shaona
    Zhang, Tong
    [J]. ELECTRONICS, 2023, 12 (04)
  • [5] Hierarchical Attention Based Semi-supervised Network Representation Learning
    Liu, Jie
    Deng, Junyi
    Xu, Guanghui
    He, Zhicheng
    [J]. NATURAL LANGUAGE PROCESSING AND CHINESE COMPUTING, PT I, 2018, 11108 : 237 - 249
  • [6] HGA: Hierarchical Feature Extraction With Graph and Attention Mechanism for Linguistic Steganalysis
    Fu, Zhangjie
    Yu, Qi
    Wang, Fan
    Ding, Changhao
    [J]. IEEE SIGNAL PROCESSING LETTERS, 2022, 29 : 1734 - 1738
  • [7] Flexible scene text recognition based on dual attention mechanism
    Tian, Zhiqiang
    Wang, Chunhui
    Xiao, Youzi
    Lin, Yuping
    [J]. CONCURRENCY AND COMPUTATION-PRACTICE & EXPERIENCE, 2021, 33 (22):
  • [8] Weakly Supervised Learning for Object Localization Based on an Attention Mechanism
    Park, Nojin
    Ko, Hanseok
    [J]. APPLIED SCIENCES-BASEL, 2021, 11 (22):
  • [9] Distant Supervised Relation Extraction with Hierarchical Attention Mechanism
    Liu, Jianyi
    Chen, Liandong
    Shi, Rui
    Xu, Jie
    Liu, An
    [J]. 2021 THE 7TH INTERNATIONAL CONFERENCE ON COMMUNICATION AND INFORMATION PROCESSING, ICCIP 2021, 2021, : 44 - 50
  • [10] Abstractive text summarization model combining a hierarchical attention mechanism and multiobjective reinforcement learning
    Sun, Yujia
    Platos, Jan
    [J]. EXPERT SYSTEMS WITH APPLICATIONS, 2024, 248