Improved DeepLab v3+ with metadata extraction for small object detection in intelligent visual surveillance systems

Cited by: 0
Authors
Oh H. [1 ]
Lee M. [1 ]
Kim H. [1 ]
Paik J. [1 ]
Affiliations
[1] Department of Image Engineering, Processing and Intelligent Systems Laboratory, Graduate School of Advanced Imaging Science, Multimedia and Film, Chung-Ang University, Seoul
Keywords
Metadata; Object segmentation; Surveillance system
DOI
10.5573/IEIESPC.2021.10.3.209
Abstract
A surveillance system deploys multiple cameras to monitor a wide area in real time and to detect abnormal situations such as crime scenes, traffic accidents, and natural disasters. An increased number of cameras requires a correspondingly large number of monitors, which makes both human and automatic decision-making difficult. To solve this problem, smart surveillance schemes have recently been proposed: a smart surveillance system automatically detects an object and raises an alarm for the operator. In this paper, we present a metadata extraction method for object-based video summarization. The proposed method adopts deep learning-based object detection and background elimination to correctly estimate the object region, and metadata extraction is then performed on the estimated object information. The proposed metadata consist of the representative color, size, aspect ratio, and patch of an object. The proposed method can extract reliable metadata without motion features for both static and dynamic cameras, and it can be applied to various object detection areas using the complex metadata. Copyright © 2021 The Institute of Electronics and Information Engineers
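For readers who want a concrete picture of the metadata fields named in the abstract (representative color, size, aspect ratio, and object patch), the following minimal Python sketch shows how such fields could be computed from a video frame and a binary object mask produced by a segmentation network such as DeepLab v3+. It is an illustration only; the function name, the dictionary layout, and the use of the mean color as the representative color are assumptions, not the authors' implementation.

# Illustrative sketch (not the authors' code): computing the metadata fields
# described in the abstract from a binary object mask. The mean-color choice
# and all names here are assumptions for demonstration purposes.
import numpy as np

def extract_object_metadata(frame: np.ndarray, mask: np.ndarray) -> dict:
    """frame: HxWx3 uint8 image, mask: HxW boolean object mask."""
    ys, xs = np.nonzero(mask)
    if ys.size == 0:
        return {}

    # Bounding box of the segmented object region.
    y0, y1 = ys.min(), ys.max() + 1
    x0, x1 = xs.min(), xs.max() + 1
    height, width = y1 - y0, x1 - x0

    # Representative color: mean color over the masked pixels
    # (a dominant-color estimate via clustering could be used instead).
    representative_color = frame[mask].mean(axis=0).astype(np.uint8)

    return {
        "color": representative_color.tolist(),   # per-channel mean color
        "size": int(mask.sum()),                  # object area in pixels
        "aspect_ratio": width / height,           # bounding-box width / height
        "patch": frame[y0:y1, x0:x1].copy(),      # cropped object patch
    }

if __name__ == "__main__":
    # Minimal usage example with a synthetic frame and mask.
    frame = np.zeros((120, 160, 3), dtype=np.uint8)
    frame[40:80, 50:110] = (0, 128, 255)
    mask = np.zeros((120, 160), dtype=bool)
    mask[40:80, 50:110] = True
    meta = extract_object_metadata(frame, mask)
    print(meta["color"], meta["size"], meta["aspect_ratio"], meta["patch"].shape)

In a full system, such a record would be stored for each detected object and later queried for object-based video summarization, as described in the abstract.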
Pages: 209-218
Page count: 9
Related papers
38 items in total
  • [21] Wei, Jinbi; Deng, Heng; Wang, Jihong; Zhang, Liguo. AFO-SLAM: an improved visual SLAM in dynamic scenes using acceleration of feature extraction and object detection. Measurement Science and Technology, 2024, 35(11)
  • [22] Zhou, Xiaokang; Xu, Xuesong; Liang, Wei; Zeng, Zhi; Shimizu, Shohei; Yang, Laurence T.; Jin, Qun. Intelligent Small Object Detection for Digital Twin in Smart Manufacturing With Industrial Cyber-Physical Systems. IEEE Transactions on Industrial Informatics, 2022, 18(2): 1377-1386
  • [23] Li, Hui; Wang, Junyin; Xu, Lingwei; Zhang, Shujun; Tao, Ye. Efficient and accurate object detection for 3D point clouds in intelligent visual internet of things. Multimedia Tools and Applications, 2021, 80: 31297-31334
  • [24] Li, Hui; Wang, Junyin; Xu, Lingwei; Zhang, Shujun; Tao, Ye. Efficient and accurate object detection for 3D point clouds in intelligent visual internet of things. Multimedia Tools and Applications, 2021, 80(20): 31297-31334
  • [25] Zhou, Wentao; Cai, Chengtao; Zheng, Liying; Li, Chenming; Zeng, Daohui. ASSD-YOLO: a small object detection method based on improved YOLOv7 for airport surface surveillance. Multimedia Tools and Applications, 2024, 83: 55527-55548
  • [26] Zhou, Wentao; Cai, Chengtao; Zheng, Liying; Li, Chenming; Zeng, Daohui. ASSD-YOLO: a small object detection method based on improved YOLOv7 for airport surface surveillance. Multimedia Tools and Applications, 2023, 83(18): 55527-55548
  • [27] Hu, Yuanyuan; Wu, Xinjian; Zheng, Guangdi; Liu, Xiaofei. Object Detection of UAV for Anti-UAV Based on Improved YOLO v3. Proceedings of the 38th Chinese Control Conference (CCC), 2019: 8386-8390
  • [28] Famiglietti, E. V. Small-tufted ganglion cells and two visual systems for the detection of object motion in rabbit retina. Visual Neuroscience, 2005, 22(4): 509-534
  • [29] Ju, Moran; Luo, Haibo; Wang, Zhongbo; He, Miao; Chang, Zheng; Hui, Bin. Improved YOLO V3 Algorithm and Its Application in Small Target Detection. Acta Optica Sinica, 2019, 39(7)
  • [30] Gao, Ang; Geng, Aijun; Song, Yuepeng; Ren, Longlong; Zhang, Yue; Han, Xiang. Detection of maize leaf diseases using improved MobileNet V3-small. International Journal of Agricultural and Biological Engineering, 2023, 16(3): 225-232