Segmentation-Adaptive Surveillance Video Synopsis

Cited by: 0
Authors
Zhang Y. [1]
Zhu P. [1]
Zheng T. [1]
Li W. [1]
Zhang T. [1]
Affiliations
[1] School of Information Science and Technology, Shijiazhuang Tiedao University, Shijiazhuang
Keywords
interactive behavior; segmentation; self-adaption; surveillance video; video synopsis;
DOI
10.3724/SP.J.1089.2023.19520
Abstract
To address the problems of existing video synopsis methods, such as incomplete tracks and the difficulty of retaining interactive behaviors in complex scenes, a segmentation-adaptive video synopsis method is proposed. First, a video segmentation module is introduced: it measures the crowding degree of each frame of the input video, divides the video into sparse and crowded segments using a self-adaptive threshold, and links interrupted tracks to form extended crowded segments. Second, an interactive behavior judgment module is designed, which combines spatial distance with a video-adaptive threshold to comprehensively judge and retain interactive behaviors between objects. Finally, a segmentation-adaptive rearrangement module is proposed, which combines collision constraints, a space proportion constraint, interactive constraints, and temporal constraints to generate the optimal time labels, and fuses the background to generate the synopsis video. Experimental results on the public datasets VISOR, BEHAVE, and CAVIAR show that, compared with current mainstream methods, the proposed method reduces the frame compression rate and the collision rate by 0.136 and 0.011, respectively, and reduces the time cost by 120.03 s. © 2023 Institute of Computing Technology. All rights reserved.
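A minimal sketch of the adaptive segmentation step described above, assuming per-frame object counts as the crowding degree and a mean-plus-standard-deviation rule for the self-adaptive threshold (both are assumptions for illustration; the abstract does not specify them). It only shows how a video could be split into sparse and crowded segments, not the track linking, interaction judgment, or rearrangement optimization.

    from typing import List, Tuple
    import statistics

    def adaptive_threshold(crowding: List[float]) -> float:
        # Video-adaptive threshold: mean plus one standard deviation of the
        # per-frame crowding degree (an assumed rule, not taken from the paper).
        return statistics.mean(crowding) + statistics.pstdev(crowding)

    def segment_video(crowding: List[float]) -> List[Tuple[str, int, int]]:
        # Label each frame sparse/crowded against the threshold and merge
        # consecutive frames with the same label into (label, start, end) runs.
        tau = adaptive_threshold(crowding)
        label = "crowded" if crowding[0] > tau else "sparse"
        segments, start = [], 0
        for i, c in enumerate(crowding[1:], start=1):
            new_label = "crowded" if c > tau else "sparse"
            if new_label != label:
                segments.append((label, start, i - 1))
                start, label = i, new_label
        segments.append((label, start, len(crowding) - 1))
        return segments

    if __name__ == "__main__":
        # Per-frame object counts for a short hypothetical clip.
        counts = [1, 2, 1, 6, 7, 8, 7, 2, 1, 1]
        print(segment_video(counts))
        # [('sparse', 0, 3), ('crowded', 4, 6), ('sparse', 7, 9)]

Under this toy threshold the crowded segment covers frames 4 to 6; the paper's module would additionally relink tracks interrupted at segment boundaries to form the extended crowded segments mentioned in the abstract.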
Pages: 944-952
Page count: 8