Make One-Shot Video Object Segmentation Efficient Again

Cited by: 0
Authors
Meinhardt, Tim [1 ]
Leal-Taixe, Laura [1 ]
Affiliations
[1] Tech Univ Munich, Munich, Germany
Keywords
DOI
Not available
CLC Classification
TP18 [Artificial Intelligence Theory];
Subject Classification Codes
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
Video object segmentation (VOS) describes the task of segmenting a set of objects in each frame of a video. In the semi-supervised setting, the first mask of each object is provided at test time. Following the one-shot principle, fine-tuning VOS methods train a segmentation model separately on each given object mask. Recently, however, the VOS community has deemed such test-time optimization, and its impact on test runtime, infeasible. To mitigate the inefficiencies of previous fine-tuning approaches, we present efficient One-Shot Video Object Segmentation (e-OSVOS). In contrast to most VOS approaches, e-OSVOS decouples the object detection task and predicts only local segmentation masks by applying a modified version of Mask R-CNN. The one-shot test runtime and performance are optimized without a laborious, handcrafted hyperparameter search. To this end, we meta-learn the model initialization and the learning rates for the test-time optimization. To achieve an optimal learning behavior, we predict individual learning rates at the neuron level. Furthermore, we apply an online adaptation to address the common performance degradation throughout a sequence by continuously fine-tuning the model on previous mask predictions, supported by a frame-to-frame bounding box propagation. e-OSVOS provides state-of-the-art results on DAVIS 2016, DAVIS 2017, and YouTube-VOS for one-shot fine-tuning methods while reducing the test runtime substantially. Code is available at https://github.com/dvl-tum/e-osvos.
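The core idea of the abstract, meta-learned per-neuron learning rates applied during test-time fine-tuning, can be illustrated with a minimal sketch. This is not the authors' code: the linear "head", its weights, the learning rates, and all data below are toy assumptions standing in for a meta-learned initialization and first-frame mask supervision.

```python
# Hypothetical sketch (not the e-OSVOS implementation): test-time fine-tuning
# with one meta-learned learning rate per output neuron. All names, shapes,
# and values are illustrative assumptions.

# Toy linear "segmentation head": y_i = sum_j W[i][j] * x[j], 4 output neurons.
W = [[0.5, -0.2, 0.1],
     [0.0, 0.3, -0.4],
     [0.2, 0.2, 0.2],
     [-0.1, 0.4, 0.0]]            # stands in for a meta-learned initialization
lr = [0.1, 0.05, 0.2, 0.01]       # one meta-learned rate per output neuron

x = [1.0, -0.5, 0.3]              # toy input features
t = [0.2, -1.0, 0.5, 0.1]         # toy target (first-frame supervision)

def loss(W):
    """Squared error between the prediction W x and the target t."""
    err = [sum(w_ij * x_j for w_ij, x_j in zip(row, x)) - t_i
           for row, t_i in zip(W, t)]
    return 0.5 * sum(e * e for e in err)

loss_before = loss(W)
for _ in range(50):               # inner-loop test-time optimization
    for i, row in enumerate(W):   # each neuron updates with its own rate
        e_i = sum(w * xv for w, xv in zip(row, x)) - t[i]
        for j in range(len(row)):
            row[j] -= lr[i] * e_i * x[j]   # dL/dW[i][j] = e_i * x[j]
loss_after = loss(W)
```

In the paper's setting the outer (meta) loop would adjust `W`'s initialization and the entries of `lr` so that this short inner loop converges quickly on an unseen object; the sketch shows only the inner loop.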
Pages: 13