ATTENTION BASED NETWORK FOR NO-REFERENCE UGC VIDEO QUALITY ASSESSMENT

Cited by: 16
Authors
Yi, Fuwang [1 ,2 ]
Chen, Mianyi [2 ]
Sun, Wei [1 ]
Min, Xiongkuo [1 ]
Tian, Yuan [1 ]
Zhai, Guangtao [1 ]
Affiliations
[1] Shanghai Jiao Tong Univ, Inst Image Commun & Network Engn, Shanghai, Peoples R China
[2] Tencent, Social Commun Lab, Shenzhen, Peoples R China
Funding
National Natural Science Foundation of China
Keywords
No-reference video quality assessment; user-generated content videos; attention mechanism
DOI
10.1109/ICIP42928.2021.9506420
Chinese Library Classification
TP18 [Theory of Artificial Intelligence]
Discipline Classification Codes
081104; 0812; 0835; 1405
Abstract
Quality assessment of user-generated content (UGC) videos is challenging due to the absence of reference videos and the complexity of the distortions involved. Traditional no-reference video quality assessment (NR-VQA) algorithms mainly target specific synthetic distortions, and less attention has been paid to the authentic distortions of UGC videos, which are distributed unevenly in both the spatial and temporal domains. In this paper, we propose an end-to-end neural network model for UGC video quality assessment based on the attention mechanism. The key step in our approach is to embed attention modules in the feature extraction network, which effectively extract local distortion information. In addition, to exploit the temporal perception mechanism of the human visual system (HVS), a gated recurrent unit (GRU) and a temporal pooling layer are integrated into the proposed model. We validate the proposed model on three public in-the-wild VQA databases: KoNViD-1k, CVD2014, and LIVE-Qualcomm. Experimental results demonstrate that the proposed method outperforms state-of-the-art NR-VQA models. The implementation of our method is released at https://github.com/qingshangithub/AB-VQA.
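The abstract outlines a three-stage pipeline: attention modules embedded in a frame-level feature extractor, a GRU for temporal modeling, and a temporal pooling layer that aggregates frame-level predictions into a video score. The PyTorch sketch below illustrates that general structure only; it is not the authors' AB-VQA implementation (see the linked repository for that), and the toy backbone, the squeeze-and-excitation-style ChannelAttention module, and all layer sizes are illustrative assumptions.

```python
import torch
import torch.nn as nn

class ChannelAttention(nn.Module):
    """Squeeze-and-excitation style channel attention; an illustrative
    stand-in for the attention modules described in the abstract."""
    def __init__(self, channels, reduction=16):
        super().__init__()
        self.pool = nn.AdaptiveAvgPool2d(1)
        self.fc = nn.Sequential(
            nn.Linear(channels, channels // reduction),
            nn.ReLU(inplace=True),
            nn.Linear(channels // reduction, channels),
            nn.Sigmoid(),
        )

    def forward(self, x):                          # x: (N, C, H, W)
        w = self.fc(self.pool(x).flatten(1))       # (N, C) channel weights
        return x * w.unsqueeze(-1).unsqueeze(-1)   # reweight feature maps

class NRVQASketch(nn.Module):
    """Hypothetical end-to-end NR-VQA pipeline: per-frame CNN features
    reweighted by attention, a GRU over time, then temporal pooling."""
    def __init__(self, feat_dim=128):
        super().__init__()
        self.backbone = nn.Sequential(             # toy frame feature extractor
            nn.Conv2d(3, 64, 3, stride=2, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(64, feat_dim, 3, stride=2, padding=1), nn.ReLU(inplace=True),
        )
        self.attn = ChannelAttention(feat_dim)
        self.gru = nn.GRU(feat_dim, 64, batch_first=True)
        self.head = nn.Linear(64, 1)               # frame-level quality score

    def forward(self, video):                      # video: (N, T, 3, H, W)
        n, t = video.shape[:2]
        frames = video.flatten(0, 1)               # (N*T, 3, H, W)
        f = self.attn(self.backbone(frames))       # attention-weighted features
        f = f.mean(dim=(2, 3)).view(n, t, -1)      # spatial GAP -> (N, T, C)
        h, _ = self.gru(f)                         # temporal modeling
        q = self.head(h).squeeze(-1)               # (N, T) frame scores
        return q.mean(dim=1)                       # temporal (average) pooling

scores = NRVQASketch()(torch.randn(2, 8, 3, 64, 64))  # -> tensor of shape (2,)
```

Plain averaging over frame scores stands in for the paper's temporal pooling layer, whose exact form is not specified in the abstract.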
Pages
1414 - 1418 (5 pages)