Hierarchical frame based spatial-temporal recovery for video compressive sensing coding

Cited: 7
Authors
Gao, Xinwei [1 ]
Jiang, Feng [1 ]
Liu, Shaohui [1 ]
Che, Wenbin [1 ]
Fan, Xiaopeng [1 ]
Zhao, Debin [1 ]
Affiliations
[1] Harbin Inst Technol, Dept Comp Sci & Technol, Harbin 150001, Peoples R China
Funding
National Natural Science Foundation of China;
Keywords
Video compressed sensing; Hierarchical structure framework; Spatial-temporal sparse representation; SPARSE REPRESENTATION; ALGORITHM; RECONSTRUCTION;
DOI
10.1016/j.neucom.2015.07.110
CLC Number
TP18 [Artificial Intelligence Theory];
Discipline Codes
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
In this paper, a divide-and-conquer based hierarchical video compressive sensing (CS) coding framework is proposed, in which the whole video is divided into non-overlapping blocks of hierarchical frames that are sampled independently. The proposed framework outperforms the traditional framework by better exploiting the correlation between frames and their reference frames, by assigning unequal sample subrates to frames in different layers, and by reducing error propagation. At the encoder, compared with video/frame based CS, the proposed hierarchical block based CS matrix can be easily implemented and stored in hardware. The measurements of blocks in frames at different hierarchical layers are obtained with different sample subrates. At the decoder, by exploiting the spatial and temporal correlations of the video sequence, a spatial-temporal sparse representation based recovery is proposed, in which similar blocks in the current frame and the recovered reference frames are organized as a spatial-temporal group unit to be represented sparsely. The resulting recovery problem of video compressive sensing coding is then solved by the split Bregman iteration. Experimental results show that the proposed method outperforms many state-of-the-art still-image CS and video CS recovery algorithms. (C) 2015 Elsevier B.V. All rights reserved.
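The block-based sampling step described in the abstract can be illustrated with a minimal NumPy sketch. The block size, per-layer subrates, and Gaussian measurement matrix below are illustrative assumptions, not the paper's exact settings; the point is that one small measurement matrix is reused for every non-overlapping block, with a larger subrate for frames in lower (reference) layers.

```python
import numpy as np

def block_cs_measure(frame, block_size, subrate, rng):
    """Sample every non-overlapping block of `frame` with one shared
    random Gaussian measurement matrix at the given subrate."""
    n = block_size * block_size
    m = int(round(subrate * n))               # measurements per block
    phi = rng.standard_normal((m, n)) / np.sqrt(m)  # small matrix, reused for all blocks
    h, w = frame.shape
    measurements = []
    for i in range(0, h, block_size):
        for j in range(0, w, block_size):
            x = frame[i:i + block_size, j:j + block_size].reshape(-1)
            measurements.append(phi @ x)      # y = Phi * x for this block
    return phi, np.array(measurements)

# Hypothetical per-layer subrates: reference-layer frames sampled more densely.
layer_subrates = {0: 0.7, 1: 0.3, 2: 0.1}
rng = np.random.default_rng(0)
frame = rng.random((64, 64))                  # one 64x64 frame, 16 blocks of 16x16
phi, y = block_cs_measure(frame, 16, layer_subrates[1], rng)
print(phi.shape, y.shape)                     # → (77, 256) (16, 77)
```

Because the same small `phi` is applied block by block, only an `m × n` matrix (here 77 × 256) needs to be stored, rather than a full frame-sized operator, which is the hardware advantage the abstract refers to.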
Pages: 404-412 (9 pages)
Related Papers
50 records
  • [21] Spatial-Temporal Data Collection with Compressive Sensing in Mobile Sensor Networks
    Zheng, Haifeng
    Li, Jiayin
    Feng, Xinxin
    Guo, Wenzhong
    Chen, Zhonghui
    Xiong, Neal
    SENSORS, 2017, 17 (11)
  • [22] Video Coding Based on Compressive Sensing via CoSaMP
    Zhang, Lin
    Journal of Donghua University (English Edition), 2014, 31 (05) : 727 - 730
  • [23] Compressive sensing in block based image/video coding
    Han, Bing
    Xu, Jun
    Wu, Dapeng
    Tian, Jun
    MOBILE MULTIMEDIA/IMAGE PROCESSING, SECURITY, AND APPLICATIONS 2010, 2010, 7708
  • [24] Temporal Compressive Sensing for Video
    Llull, Patrick
    Yuan, Xin
    Liao, Xuejun
    Yang, Jianbo
    Kittle, David
    Carin, Lawrence
    Sapiro, Guillermo
    Brady, David J.
    COMPRESSED SENSING AND ITS APPLICATIONS, 2015, : 41 - 74
  • [25] Conditional Neural Video Coding with Spatial-Temporal Super-Resolution
    Wang, Henan
    Pan, Xiaohan
    Feng, Runsen
    Guo, Zongyu
    Chen, Zhibo
    2024 DATA COMPRESSION CONFERENCE, DCC, 2024, : 591 - 591
  • [26] Snapshot spatial-temporal compressive imaging
    Qiao, Mu
    Liu, Xuan
    Yuan, Xin
    OPTICS LETTERS, 2020, 45 (07) : 1659 - 1662
  • [27] Spatial-Temporal Network Coding Based on BATS Code
    Xu, Xiaoli
    Guan, Yong Liang
    Zeng, Yong
    Chui, Chee-Cheon
    IEEE COMMUNICATIONS LETTERS, 2017, 21 (03) : 620 - 623
  • [28] Hierarchical Attention Based Spatial-Temporal Graph-to-Sequence Learning for Grounded Video Description
    Shen, Kai
    Wu, Lingfei
    Xu, Fangli
    Tang, Siliang
    Xiao, Jun
    Zhuang, Yueting
    PROCEEDINGS OF THE TWENTY-NINTH INTERNATIONAL JOINT CONFERENCE ON ARTIFICIAL INTELLIGENCE, 2020, : 941 - 947
  • [29] Spatial-temporal segmentation scheme for object-oriented video coding based on wavelet and MMRF
    Zheng, L
    Chan, AK
    Liu, JC
    WAVELET APPLICATIONS IN SIGNAL AND IMAGE PROCESSING VII, 1999, 3813 : 822 - 831
  • [30] Video Captioning Based on the Spatial-Temporal Saliency Tracing
    Zhou, Yuanen
    Hu, Zhenzhen
    Liu, Xueliang
    Wang, Meng
    ADVANCES IN MULTIMEDIA INFORMATION PROCESSING, PT I, 2018, 11164 : 59 - 70