Fine-Grained Image Quality Caption With Hierarchical Semantics Degradation

Cited: 0
Authors
Yang, Wen [1 ]
Wu, Jinjian [1 ]
Tian, Shiwei [2 ]
Li, Leida [1 ]
Dong, Weisheng [1 ]
Shi, Guangming [1 ]
Affiliations
[1] Xidian Univ, Sch Artificial Intelligence, Xian 710071, Peoples R China
[2] Natl Innovat Inst Def Technol, Beijing 100010, Peoples R China
Funding
National Natural Science Foundation of China
Keywords
Semantics; Degradation; Image quality; Feature extraction; Distortion; Databases; Bidirectional control; Image quality assessment; quality caption; hierarchical semantics degradation; deep neural network;
DOI
10.1109/TIP.2022.3171445
CLC Number
TP18 [Artificial Intelligence Theory]
Discipline Codes
081104; 0812; 0835; 1405
Abstract
Blind image quality assessment (BIQA), which precisely and automatically estimates human-perceived image quality without a pristine reference image, has attracted extensive attention and has wide applications. Most existing BIQA methods represent image quality with a single quantitative value, which is inconsistent with human cognition: humans tend to perceive image quality through semantic description rather than a quantitative value. Moreover, cognition is a needs-oriented task in which humans extract image content at local-to-global semantic levels as needed. A single quality value reflects only coarse, holistic image quality and fails to capture degradation across hierarchical semantics. In this paper, to comply with human cognition, a novel quality caption model is proposed to measure fine-grained image quality with hierarchical semantics degradation. Research on the human visual system indicates that there are hierarchy and reverse-hierarchy correlations between hierarchical semantics, and empirical evidence shows that there are also bi-directional degradation dependencies between them. Thus, a novel bi-directional relationship-based network (BDRNet) is proposed for semantics degradation description, adaptively exploring those correlations and degradation dependencies in a bi-directional manner. Extensive experiments demonstrate that the method outperforms state-of-the-art approaches in both evaluation performance and generalization ability.
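The abstract describes propagating degradation cues across hierarchical semantic levels in both directions (low-to-high and high-to-low). As a hypothetical illustration only (this is not the paper's actual BDRNet architecture; the function name, the fixed mixing weight `alpha`, and the averaging fusion are all assumptions), a minimal sketch of bi-directional context fusion over per-level features might look like:

```python
def bidirectional_fusion(levels, alpha=0.5):
    """Fuse hierarchical features in both directions.

    levels: per-level feature values, ordered low- to high-level semantics.
    alpha:  mixing weight between a level's own feature and the
            context propagated from the previous level.
    Returns one fused value per level.
    """
    n = len(levels)
    # Forward pass: propagate low-level context upward.
    fwd = [levels[0]]
    for i in range(1, n):
        fwd.append(alpha * levels[i] + (1 - alpha) * fwd[-1])
    # Backward pass: propagate high-level context downward.
    bwd = [levels[-1]]
    for i in range(n - 2, -1, -1):
        bwd.insert(0, alpha * levels[i] + (1 - alpha) * bwd[0])
    # Fuse the two directions per level by simple averaging.
    return [(f + b) / 2.0 for f, b in zip(fwd, bwd)]

# Toy example with three semantic levels (scalar features for clarity).
feats = [1.0, 2.0, 3.0]
fused = bidirectional_fusion(feats)  # → [1.375, 2.0, 2.625]
```

In an actual network the fixed weight `alpha` would be replaced by learned, adaptive gating over feature vectors, so that each level decides how much forward and backward degradation context to absorb.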
Pages: 3578-3590
Page count: 13