JNMR: Joint Non-Linear Motion Regression for Video Frame Interpolation

Cited by: 2
Authors
Liu M. [1 ,2 ]
Xu C. [1 ,2 ]
Yao C. [3 ]
Lin C. [1 ,2 ]
Zhao Y. [1 ,2 ]
Affiliations
[1] Beijing Jiaotong University, Institute of Information Science, Beijing
[2] Beijing Key Laboratory of Advanced Information Science and Network Technology, Beijing
[3] University of Science and Technology Beijing, School of Computer and Communication Engineering, Beijing
Keywords
deformable convolution; interpolation modeling; motion estimation; multi-variable non-linear regression; Video frame interpolation;
DOI
10.1109/TIP.2023.3315122
Abstract
Video frame interpolation (VFI) aims to generate predicted intermediate frames by motion-warping from bidirectional references. Most VFI methods exploit spatiotemporal semantic information for motion estimation and interpolation. However, owing to variable acceleration, irregular motion trajectories, and camera movement in real-world scenes, such methods are insufficient for non-linear middle-frame estimation. In this paper, we reformulate VFI as a joint non-linear motion regression (JNMR) strategy to model complicated inter-frame motions. Specifically, the motion trajectory between the target frame and multiple reference frames is regressed by a temporal concatenation of multi-stage quadratic models. A comprehensive joint distribution is then constructed to connect all temporal motions. Moreover, to preserve more contextual detail for joint regression, a feature learning network is devised to explore clarified feature expressions with dense skip-connections. Finally, a coarse-to-fine synthesis enhancement module learns visual dynamics at different resolutions from multi-scale textures. Experimental VFI results show the effectiveness of joint motion regression and its significant improvement over state-of-the-art methods. The code is available at https://github.com/ruhig6/JNMR.
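The quadratic motion model underlying the abstract can be illustrated with a minimal sketch. This is not the paper's implementation (see the linked repository for that); it assumes a single constant-acceleration trajectory and scalar per-pixel displacements, fits x(t) = at² + bt + c by least squares over the reference timestamps, and evaluates it at the target (middle) time. The function name and timestamps are illustrative.

```python
import numpy as np

def quadratic_motion_regression(ref_times, ref_disps, target_time):
    """Fit x(t) = a*t^2 + b*t + c by least squares over reference
    timestamps and predict the displacement at target_time.

    ref_times: (N,) timestamps of the reference frames
    ref_disps: (N, ...) displacements observed at those timestamps
    """
    t = np.asarray(ref_times, dtype=float)
    d = np.asarray(ref_disps, dtype=float).reshape(len(t), -1)
    # Design matrix [t^2, t, 1] for the quadratic model
    A = np.stack([t**2, t, np.ones_like(t)], axis=1)
    coeffs, *_ = np.linalg.lstsq(A, d, rcond=None)  # shape (3, P)
    a, b, c = coeffs
    pred = a * target_time**2 + b * target_time + c
    return pred.reshape(np.asarray(ref_disps).shape[1:])

# A constant-acceleration trajectory x(t) = 0.5*t^2 + 2*t sampled at four
# reference times; the middle-frame displacement at t = 0.5 is recovered.
times = [-1.0, 0.0, 1.0, 2.0]
disps = [0.5 * t**2 + 2.0 * t for t in times]
print(quadratic_motion_regression(times, disps, 0.5))  # ≈ 1.125
```

JNMR goes beyond this single-model sketch by temporally concatenating multiple quadratic stages and connecting them through a joint distribution, so irregular real-world motion is not forced onto one global curve.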
Pages: 5283-5295
Page count: 12
Related Papers (50 records)
  • [1] Non-linear Motion Estimation for Video Frame Interpolation using Space-time Convolutions
    Dutta, Saikat
    Subramaniam, Arulkumar
    Mittal, Anurag
    2022 IEEE/CVF CONFERENCE ON COMPUTER VISION AND PATTERN RECOGNITION WORKSHOPS, CVPRW 2022, 2022, : 1725 - 1730
  • [2] Non-linear motion-compensated interpolation for low bit rate video
    Liu, S
    Kim, J
    Kuo, CCJ
    APPLICATIONS OF DIGITAL IMAGE PROCESSING XXIII, 2000, 4115 : 203 - 213
  • [3] Video Decoder Monitoring using Non-linear Regression
    Ekobo Akoa, Brice
    Simeu, Emmanuel
    Lebowsky, Fritz
    PROCEEDINGS OF THE 2013 IEEE 19TH INTERNATIONAL ON-LINE TESTING SYMPOSIUM (IOLTS), 2013, : 175 - 178
  • [4] Progressive Motion Boosting for Video Frame Interpolation
    Xiao, Jing
    Xu, Kangmin
    Hu, Mengshun
    Liao, Liang
    Wang, Zheng
    Lin, Chia-Wen
    Wang, Mi
    Satoh, Shin'ichi
    IEEE TRANSACTIONS ON MULTIMEDIA, 2023, 25 : 8076 - 8090
  • [5] Motion-Aware Video Frame Interpolation
    Han, Pengfei
    Zhang, Fuhua
    Zhao, Bin
    Li, Xuelong
    NEURAL NETWORKS, 2024, 178
  • [6] A Motion Distillation Framework for Video Frame Interpolation
    Zhou, Shili
    Tan, Weimin
    Yan, Bo
    IEEE TRANSACTIONS ON MULTIMEDIA, 2024, 26 : 3728 - 3740
  • [7] MOTION FEEDBACK DESIGN FOR VIDEO FRAME INTERPOLATION
    Hu, Mengshun
    Liao, Liang
    Xiao, Jing
    Gu, Lin
    Satoh, Shin'ichi
    2020 IEEE INTERNATIONAL CONFERENCE ON ACOUSTICS, SPEECH, AND SIGNAL PROCESSING, 2020, : 4347 - 4351
  • [8] Theory of Continued Fraction Interpolation and Its Application in Non-linear Regression
    Cao, An-zhao
    Zhu, Xiao-lin
    Zhou, Jin-ming
    Gao, Ting-ting
    2008 7TH WORLD CONGRESS ON INTELLIGENT CONTROL AND AUTOMATION, VOLS 1-23, 2008, : 8078 - +
  • [9] GENERALIZED NON-LINEAR INTERPOLATION
    LITVIN, ON
    DOPOVIDI AKADEMII NAUK UKRAINSKOI RSR SERIYA A-FIZIKO-MATEMATICHNI TA TECHNICHNI NAUKI, 1980, (01): : 10 - 14
  • [10] Non-linear view interpolation
    Bao, HJ
    Chen, L
    Ying, JG
    Peng, QS
    JOURNAL OF VISUALIZATION AND COMPUTER ANIMATION, 1999, 10 (04): : 233 - 241