Deep Frame Prediction for Video Coding

Cited by: 45
Authors
Choi, Hyomin [1 ]
Bajic, Ivan V. [1 ]
Affiliations
[1] Simon Fraser Univ, Sch Engn Sci, Burnaby, BC V5A 1S6, Canada
Funding
Natural Sciences and Engineering Research Council of Canada;
Keywords
Video compression; frame prediction; texture prediction; deep neural network (DNN); deep learning;
DOI
10.1109/TCSVT.2019.2924657
CLC classification
TM [electrical technology]; TN [electronic technology, communication technology];
Subject classification
0808; 0809;
Abstract
We propose a novel frame prediction method using a deep neural network (DNN), with the goal of improving video coding efficiency. The proposed DNN makes use of decoded frames, available at both the encoder and the decoder, to predict the textures of the current coding block. Unlike conventional inter-prediction, the proposed method does not require any motion information to be transferred between the encoder and the decoder. Still, both uni-directional and bi-directional prediction are possible with the proposed DNN, enabled by the use of a temporal index channel in addition to the color channels. In this paper, we developed a DNN trained jointly for uni-directional and bi-directional prediction, as well as separate networks for each prediction mode, and compared the efficacy of the two approaches. The proposed DNNs were compared with conventional motion-compensated prediction in the latest video coding standard, High Efficiency Video Coding (HEVC), in terms of the BD-bitrate. The experiments show that the proposed joint DNN (for both uni-directional and bi-directional prediction) reduces the luminance bitrate by about 4.4%, 2.4%, and 2.3% in the low delay P, low delay, and random access configurations, respectively. In addition, using the separately trained DNNs brings further bit savings of about 0.3%-0.5%.
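The abstract's central idea is that decoded reference frames are fed to the DNN together with a temporal index channel, so one network can serve both uni-directional prediction (both references in the past) and bi-directional prediction (references on either side of the current frame) without signaling any motion information. The sketch below is a minimal illustration of that input construction only, not the authors' implementation; it assumes PyTorch, and the names build_dnn_input, ref_a, and ref_b are hypothetical.

```python
# Minimal sketch (assumed, not from the paper's code) of assembling a
# frame-prediction DNN input from two decoded reference frames plus
# per-frame temporal index planes.
import torch

def build_dnn_input(ref_a: torch.Tensor,
                    ref_b: torch.Tensor,
                    t_a: float, t_b: float, t_cur: float) -> torch.Tensor:
    """Stack two decoded reference frames with temporal index channels.

    ref_a, ref_b : (C, H, W) color channels of decoded reference frames.
    t_a, t_b     : temporal positions of the two references.
    t_cur        : temporal position of the frame being predicted.
    For uni-directional prediction both references precede t_cur;
    for bi-directional prediction one precedes it and one follows it.
    """
    _, h, w = ref_a.shape
    # Constant-valued planes encoding each reference's signed temporal
    # distance to the current frame -- the "temporal index channel" idea.
    idx_a = torch.full((1, h, w), float(t_a - t_cur))
    idx_b = torch.full((1, h, w), float(t_b - t_cur))
    # One (2C+2, H, W) tensor: color channels plus temporal index planes.
    return torch.cat([ref_a, idx_a, ref_b, idx_b], dim=0)

# Usage: two 3-channel 64x64 decoded blocks, bi-directional case
# (one past and one future reference around the current frame).
x = build_dnn_input(torch.rand(3, 64, 64), torch.rand(3, 64, 64),
                    t_a=-1.0, t_b=1.0, t_cur=0.0)
print(x.shape)  # torch.Size([8, 64, 64])
```

Because the temporal distances are part of the input rather than baked into the architecture, the same network weights can in principle be applied to either prediction direction, which is what makes the jointly trained variant described in the abstract possible.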
Pages: 1843-1855
Page count: 13