Pedestrian detection via deep segmentation and context network

Cited by: 0
Authors
Zhaoqing Li
Zhenxue Chen
Q. M. Jonathan Wu
Chengyun Liu
Affiliations
[1] Shandong University, School of Control Science and Engineering
[2] University of Windsor, Department of Electrical and Computer Engineering
Keywords
Pedestrian detection; Segmentation information; Context information; Multi-channel feature; Deep network
DOI
Not available
Abstract
Many deep learning approaches have proven effective for pedestrian detection, but they do not localize occluded pedestrians accurately enough. A novel segmentation and context network (SCN) is proposed that combines segmentation and context information to improve the accuracy of bounding-box regression for pedestrian detection. The SCN model consists of a segmentation sub-model and a context sub-model. To separate pedestrian instances from the background and address occlusion, the segmentation sub-model extracts pedestrian segmentation information and generates more accurate pedestrian regions. Because different pedestrian instances require different amounts of context, context regions of different scales are used to extract context information. To improve detection performance, the context sub-model applies the hole algorithm (dilated convolution) to enlarge the receptive field of the output feature maps and fuses multi-channel features through skip-layer connections. Finally, the loss functions of the two sub-models' outputs are fused. Experimental results on several datasets validate the effectiveness of the SCN model, and the deeply supervised algorithm achieves a good trade-off between accuracy and complexity.
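The abstract names two concrete mechanisms that a short sketch can make precise: dilated ("hole") convolutions that enlarge the receptive field while preserving feature-map resolution, and a fused loss over the two sub-models' outputs. The PyTorch sketch below is our own minimal illustration, not the authors' implementation; the name ContextSubModel, the dilation rates (1, 2, 4), the channel sizes, and the weight seg_weight are all assumptions.

import torch
import torch.nn as nn
import torch.nn.functional as F

class ContextSubModel(nn.Module):
    """Sketch of a context sub-model: parallel dilated ("hole")
    convolutions enlarge the receptive field without shrinking the
    feature maps, and their multi-channel outputs are fused through
    a skip-style concatenation."""

    def __init__(self, in_channels=512, mid_channels=256, dilations=(1, 2, 4)):
        super().__init__()
        # One 3x3 branch per context scale; padding == dilation keeps the
        # spatial size unchanged, so the branches can be concatenated.
        self.branches = nn.ModuleList([
            nn.Conv2d(in_channels, mid_channels, kernel_size=3,
                      padding=d, dilation=d)
            for d in dilations
        ])
        # A 1x1 convolution fuses the concatenated multi-channel features.
        self.fuse = nn.Conv2d(mid_channels * len(dilations), mid_channels,
                              kernel_size=1)

    def forward(self, x):
        feats = [F.relu(branch(x)) for branch in self.branches]
        return F.relu(self.fuse(torch.cat(feats, dim=1)))


def fused_loss(det_loss, seg_logits, seg_target, seg_weight=1.0):
    """Fuse the detection (context) loss with a per-pixel segmentation
    loss; the cross-entropy choice and the weight are assumptions, not
    values from the paper."""
    seg_loss = F.cross_entropy(seg_logits, seg_target)
    return det_loss + seg_weight * seg_loss

For example, ContextSubModel(512)(torch.randn(1, 512, 32, 32)) returns a 1 x 256 x 32 x 32 feature map: each dilation rate sees a different context scale, yet all branches stay spatially aligned for fusion, which is what lets the hole algorithm widen the receptive field without downsampling.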
Pages: 5845 - 5857
Number of pages: 12