FewarNet: An Efficient Few-Shot View Synthesis Network Based on Trend Regularization

Cited: 0
Authors
Song, Chenxi [1 ]
Wang, Shigang [1 ]
Wei, Jian [1 ]
Zhao, Yan [1 ]
Affiliations
[1] Jilin Univ, Coll Commun Engn, Changchun 130012, Peoples R China
Funding
National Natural Science Foundation of China;
Keywords
Three-dimensional displays; Market research; Cameras; Costs; Geometry; Training; Estimation; Depth estimation; few-shot view synthesis; regularization constraint; prior depth; VIDEO;
DOI
10.1109/TCSVT.2024.3395447
CLC Classification
TM [Electrical Engineering]; TN [Electronics and Communication Technology];
Discipline Codes
0808; 0809;
Abstract
Novel view synthesis from existing inputs remains a research focus in computer vision. Predicting views becomes more challenging when only a limited number of views are available, a challenge commonly referred to as the few-shot view synthesis problem. Recently, various strategies have emerged for few-shot view synthesis, such as transfer learning, depth supervision, and regularization constraints. However, transfer learning relies on massive scene data, depth supervision is affected by the quality of the input depths, and regularization increases computational cost or impairs generalization. To address these issues, we propose a new few-shot view synthesis framework, FewarNet, which introduces trend regularization to leverage depth structural features and a warping loss to supervise depth estimation. FewarNet combines the advantages of existing few-shot strategies, enabling high-quality novel view prediction with both generalization and efficiency. Specifically, FewarNet consists of three stages: fusion, warping, and rectification. In the fusion stage, a fusion network estimates depths using scene priors from coarse depths. In the warping stage, the predicted depths guide the warping of the input views, and a distance-weighted warping loss is proposed to correctly supervise depth estimation. To further improve prediction accuracy, we propose trend regularization, which penalizes depth variation trends to provide depth structural constraints. In the rectification stage, a rectification network refines occluded regions in each warped view to generate the novel views. Additionally, a rapid view synthesis strategy that leverages depth interpolation is designed to improve efficiency. We validate the method's effectiveness and generalization on various datasets. Given the same sparse inputs, our method outperforms state-of-the-art few-shot view synthesis methods in both quality and efficiency.
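The two loss terms named in the abstract can be sketched schematically. The abstract does not give their exact formulations, so the functions below (`distance_weighted_warping_loss`, `trend_regularization`), the inverse-distance weighting, and the sign-flip penalty are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def distance_weighted_warping_loss(warped_views, target, distances):
    """Hypothetical sketch of a distance-weighted warping loss.

    Computes a mean L1 photometric error between each warped input view
    and the target view, weighted inversely by the camera distance so
    that input views closer to the target viewpoint contribute more to
    the depth-supervision signal.
    """
    weights = 1.0 / (np.asarray(distances, dtype=float) + 1e-6)
    weights = weights / weights.sum()  # normalize to sum to 1
    errors = np.array([np.abs(w - target).mean() for w in warped_views])
    return float((weights * errors).sum())

def trend_regularization(depth):
    """Hypothetical sketch of a trend penalty on a depth map.

    Penalizes sign changes in the local horizontal depth variation
    (the "trend"), favoring piecewise-monotonic depth structure; a
    strictly monotone row incurs zero penalty.
    """
    dx = np.diff(depth, axis=1)                     # local depth variation
    trend_flips = np.abs(np.diff(np.sign(dx), axis=1))  # 2 where the trend reverses
    return float(trend_flips.mean())
```

Under these assumptions, a monotonically increasing depth row yields a zero trend penalty, while any reversal in the depth gradient is penalized; the warping loss reduces to a plain average of per-view errors when all input cameras are equidistant from the target.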
Pages: 9264 - 9280
Page count: 17
Related Papers
50 items
  • [31] GRAPH AFFINITY NETWORK FOR FEW-SHOT SEGMENTATION
    Luo, Xiaoliu
    Zhang, Taiping
    2021 IEEE INTERNATIONAL CONFERENCE ON IMAGE PROCESSING (ICIP), 2021, : 609 - 613
  • [32] Intermediate prototype network for few-shot segmentation
    Luo, Xiaoliu
    Duan, Zhao
    Zhang, Taiping
    SIGNAL PROCESSING, 2023, 203
  • [33] Spatial Attention Network for Few-Shot Learning
    He, Xianhao
    Qiao, Peng
    Dou, Yong
    Niu, Xin
    ARTIFICIAL NEURAL NETWORKS AND MACHINE LEARNING - ICANN 2019: DEEP LEARNING, PT II, 2019, 11728 : 567 - 578
  • [34] Mutual Correlation Network for few-shot learning
    Chen, Derong
    Chen, Feiyu
    Ouyang, Deqiang
    Shao, Jie
    NEURAL NETWORKS, 2024, 175
  • [35] A Difference Measuring Network for Few-Shot Learning
    Wang, Yu
    Bao, Junpeng
    Li, Yanhua
    Feng, Zhonghui
    ARTIFICIAL INTELLIGENCE APPLICATIONS AND INNOVATIONS, AIAI 2023, PT II, 2023, 676 : 235 - 249
  • [36] Cross Attention Network for Few-shot Classification
    Hou, Ruibing
    Chang, Hong
    Ma, Bingpeng
    Shan, Shiguang
    Chen, Xilin
    ADVANCES IN NEURAL INFORMATION PROCESSING SYSTEMS 32 (NIPS 2019), 2019, 32
  • [37] Attentive matching network for few-shot learning
    Mai, Sijie
    Hu, Haifeng
    Xu, Jia
    COMPUTER VISION AND IMAGE UNDERSTANDING, 2019, 187
  • [38] TRANSDUCTIVE PROTOTYPICAL NETWORK FOR FEW-SHOT CLASSIFICATION
    Liu, Xinyue
    Liu, Pengxin
    Zong, Linlin
    2020 IEEE INTERNATIONAL CONFERENCE ON IMAGE PROCESSING (ICIP), 2020, : 1671 - 1675
  • [39] Few-Shot NeRF-Based View Synthesis for Viewpoint-Biased Camera Pose Estimation
    Ito, Sota
    Aizawa, Hiroaki
    Kato, Kunihito
    ARTIFICIAL NEURAL NETWORKS AND MACHINE LEARNING, ICANN 2023, PT II, 2023, 14255 : 308 - 319
  • [40] Few-shot Network Traffic Anomaly Detection Based on Siamese Neural Network
    Xu, Simin
    Han, Xueying
    Tian, Tian
    Jiang, Bo
    Lu, Zhigang
    Zhang, Chen
    ICC 2023-IEEE INTERNATIONAL CONFERENCE ON COMMUNICATIONS, 2023, : 3012 - 3017