A temporal attention based appearance model for video object segmentation

Cited by: 2
Authors
Wang, Hui [1 ]
Liu, Weibin [1 ]
Xing, Weiwei [2 ]
Affiliations
[1] Beijing Jiaotong Univ, Inst Informat Sci, Beijing 100044, Peoples R China
[2] Beijing Jiaotong Univ, Sch Software Engn, Beijing 100044, Peoples R China
Funding
National Natural Science Foundation of China; Beijing Natural Science Foundation;
Keywords
Video object segmentation; Convolutional neural networks; Appearance model; Mixture loss;
DOI
10.1007/s10489-021-02547-4
CLC classification
TP18 [Artificial intelligence theory];
Discipline codes
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
Video object segmentation has recently attracted growing research attention because it is an important building block for numerous computer vision applications. Although many algorithms have advanced the field, some challenges remain open: efficient and robust pipelines are needed to handle appearance changes and distraction from similar background objects. This paper proposes a novel neural network that integrates a temporal attention based appearance model and a boundary-aware loss. The appearance model fuses the appearance information of the first frame, the previous frame, and the current frame in the feature space, which helps the proposed method learn a discriminative and robust target representation and avoid the drift problem of traditional propagation schemes. Moreover, the boundary-aware loss is employed for network training; equipped with it, the proposed method achieves more accurate segmentation results with clear boundaries. The proposed method is compared with several recent state-of-the-art algorithms on popular benchmark datasets. Comprehensive experiments show that it achieves favorable performance at a high frame rate.
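The abstract describes fusing features of the first, previous, and current frames in feature space via temporal attention. The paper's record gives no implementation details, so the following is a minimal, hypothetical sketch (all function and variable names are assumptions, not the authors' code) of one common form of such fusion: weighting each reference frame's features by their similarity to the current frame and taking the weighted sum.

```python
import numpy as np

def softmax(x, axis=-1):
    # Numerically stable softmax along the given axis.
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def temporal_attention_fuse(f_first, f_prev, f_curr):
    """Fuse per-location features of the first, previous, and current
    frames, each of shape (N, C) where N = H*W flattened locations.

    Attention weights come from the scaled dot-product similarity of
    each reference to the current-frame feature at the same location.
    This is an illustrative sketch, not the paper's exact model.
    """
    refs = np.stack([f_first, f_prev, f_curr])             # (3, N, C)
    scale = np.sqrt(f_curr.shape[-1])
    scores = (refs * f_curr[None]).sum(axis=-1) / scale    # (3, N)
    w = softmax(scores, axis=0)                            # temporal weights, sum to 1 over the 3 frames
    return (w[..., None] * refs).sum(axis=0)               # (N, C) fused appearance feature
```

If all three frames carry identical features, the weights are uniform (1/3 each) and the fused feature equals the input, so the scheme degrades gracefully when nothing changes between frames.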
Pages: 2290-2300
Page count: 11
Related papers
50 items in total
  • [31] Object-based video segmentation using spatio-temporal energy
    Bao, HQ
    Zhang, ZY
    2004 7TH INTERNATIONAL CONFERENCE ON SIGNAL PROCESSING PROCEEDINGS, VOLS 1-3, 2004, : 1260 - 1263
  • [32] Temporal Context Enhanced Referring Video Object Segmentation
    Hu, Xiao
    Hampiholi, Basavaraj
    Neumann, Heiko
    Lang, Jochen
    2024 IEEE/CVF WINTER CONFERENCE ON APPLICATIONS OF COMPUTER VISION, WACV 2024, 2024, : 5562 - 5571
  • [33] MAHC: motion-appearance video object segmentation via hierarchical attention and multi-level clustering
    Honghui Cao
    Yang Yang
    Lvchen Cao
    Chenyang Liu
    Yandong Hou
    Jun Wang
    The Journal of Supercomputing, 81 (5)
  • [34] A feature temporal attention based interleaved network for fast video object detection
    Yanni Yang
    Huansheng Song
    Shijie Sun
    Yan Chen
    Xinyao Tang
    Qin Shi
    Journal of Ambient Intelligence and Humanized Computing, 2023, 14 : 497 - 509
  • [35] A feature temporal attention based interleaved network for fast video object detection
    Yang, Yanni
    Song, Huansheng
    Sun, Shijie
    Chen, Yan
    Tang, Xinyao
    Shi, Qin
    JOURNAL OF AMBIENT INTELLIGENCE AND HUMANIZED COMPUTING, 2021, 14 (1) : 497 - 509
  • [36] Model-based temporal object verification using video
    Li, BX
    Chellappa, R
    Zheng, QF
    Der, SZ
    IEEE TRANSACTIONS ON IMAGE PROCESSING, 2001, 10 (06) : 897 - 908
  • [37] Video Object Segmentation Based on Disparity
    Xingming, Ouyang
    Wei, Wei
    ADVANCES IN WEB AND NETWORK TECHNOLOGIES, AND INFORMATION MANAGEMENT, 2009, 5731 : 36 - 44
  • [38] Saliency-based dual-attention network for unsupervised video object segmentation
    Zhang, Guifang
    Wong, Hon-Cheng
    JOURNAL OF SUPERCOMPUTING, 2024, 80 (04): : 4996 - 5010
  • [39] Guided Interactive Video Object Segmentation Using Reliability-Based Attention Maps
    Heo, Yuk
    Koh, Yeong Jun
    Kim, Chang-Su
    2021 IEEE/CVF CONFERENCE ON COMPUTER VISION AND PATTERN RECOGNITION, CVPR 2021, 2021, : 7318 - 7326
  • [40] Saliency-based dual-attention network for unsupervised video object segmentation
    Guifang Zhang
    Hon-Cheng Wong
    The Journal of Supercomputing, 2024, 80 (4) : 4996 - 5010