Object Detection Using Dual Graph Network

Cited by: 3
Authors
Chen, Shengjia [1 ]
Li, Zhixin [1 ]
Huang, Feicheng [1 ]
Zhang, Canlong [1 ]
Ma, Huifang [1 ,2 ]
Affiliations
[1] Guangxi Normal Univ, Guangxi Key Lab Multisource Informat Min & Secur, Guilin 541004, Peoples R China
[2] Northwest Normal Univ, Coll Comp Sci & Engn, Lanzhou 730070, Peoples R China
Funding
National Natural Science Foundation of China
Keywords
DOI
10.1109/ICPR48806.2021.9412825
CLC Classification Number
TP18 [Artificial Intelligence Theory]
Discipline Classification Codes
081104; 0812; 0835; 1405
Abstract
Most object detection methods focus only on the local information near a region proposal and ignore the object's global semantic relations and local spatial relations, which limits performance. To capture and exploit these relations, we propose a detection method based on a graph convolutional network (GCN). Two independent relation graph networks are used to obtain the global semantic information carried by labels and the local spatial information present in images. The semantic relation network implicitly acquires global knowledge: a directed graph is constructed over the dataset's labels, each node is represented by the word embedding of its label, and the graph is fed to the GCN to obtain high-level semantic representations. The spatial relation network encodes relations with a positional relation module and a visual connection module, enriching the object features with key local information from objects in the image. The feature representation is further improved by aggregating the outputs of the two networks. Instead of directly propagating visual features through the network, the dual-graph network explores higher-level feature information, giving the detector the ability to capture key relations among labels and region proposals. Experiments on the PASCAL VOC and MS COCO datasets demonstrate that this key relation information significantly improves detection performance, with a better ability to detect small objects and more reasonable bounding boxes. On the COCO dataset, our method obtains around a 32.3% improvement in AP for small objects.
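A minimal sketch of the semantic relation branch described above, assuming a two-layer GCN in PyTorch that propagates label word embeddings over a row-normalized label-relation graph. The class name LabelGCN, the layer sizes, and the random adjacency matrix are illustrative placeholders, not the authors' released implementation.

import torch
import torch.nn as nn
import torch.nn.functional as F

class LabelGCN(nn.Module):
    """Hypothetical two-layer GCN over a directed label-relation graph."""
    def __init__(self, embed_dim, hidden_dim, out_dim):
        super().__init__()
        self.w1 = nn.Linear(embed_dim, hidden_dim, bias=False)
        self.w2 = nn.Linear(hidden_dim, out_dim, bias=False)

    def forward(self, x, adj):
        # x:   (num_labels, embed_dim) word embeddings of the class labels
        # adj: (num_labels, num_labels) row-normalized adjacency, assumed to be
        #      built from label co-occurrence statistics on the training set
        h = F.relu(adj @ self.w1(x))   # first graph convolution
        return adj @ self.w2(h)        # high-level semantic representations

# Usage sketch: 20 PASCAL VOC classes with 300-d word embeddings.
num_labels, embed_dim = 20, 300
x = torch.randn(num_labels, embed_dim)        # stand-in word embeddings
adj = torch.rand(num_labels, num_labels)      # stand-in relation weights
adj = adj / adj.sum(dim=1, keepdim=True)      # row-normalize the graph
semantic_feats = LabelGCN(embed_dim, 512, 1024)(x, adj)
print(semantic_feats.shape)                   # torch.Size([20, 1024])

In the full method, these semantic features would be aggregated with the spatial relation branch's region-proposal features; that fusion step is omitted here.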
Pages: 3280-3287
Number of pages: 8
Related Papers (50 records in total)
  • [1] DGRNet: A Dual-Level Graph Relation Network for Video Object Detection
    Qi, Qiang
    Hou, Tianxiang
    Lu, Yang
    Yan, Yan
    Wang, Hanzi
    [J]. IEEE TRANSACTIONS ON IMAGE PROCESSING, 2023, 32 : 4128 - 4141
  • [2] Object detection based on knowledge graph network
    Li, Jianping
    Tan, Guozhen
    Ke, Xiao
    Si, Huaiwei
    Peng, Yanfei
    [J]. APPLIED INTELLIGENCE, 2023, 53 (12) : 15045 - 15066
  • [3] Adaptive graph reasoning network for object detection
    Zhong, Xinfang
    Kuang, Wenlan
    Li, Zhixin
    [J]. IMAGE AND VISION COMPUTING, 2024, 151
  • [4] Dual pyramid network for salient object detection
    Xu, Xuemiao
    Chen, Jiaxing
    Zhang, Huaidong
    Han, Guoqiang
    [J]. NEUROCOMPUTING, 2020, 375 : 113 - 123
  • [5] GraphFPN: Graph Feature Pyramid Network for Object Detection
    Zhao, Gangming
    Ge, Weifeng
    Yu, Yizhou
    [J]. 2021 IEEE/CVF INTERNATIONAL CONFERENCE ON COMPUTER VISION (ICCV 2021), 2021, : 2743 - 2752
  • [6] Robust Dual-Graph Regularized Moving Object Detection
    Qin, Jing
    Shen, Ruilong
    Zhu, Ruihan
    Xie, Biyun
    [J]. PROCEEDINGS OF 2022 IEEE INTERNATIONAL CONFERENCE ON MECHATRONICS AND AUTOMATION (IEEE ICMA 2022), 2022, : 487 - 492
  • [7] Dual graph neural network for overlapping community detection
    Li, Xiaohong
    Peng, Qixuan
    Li, Ruihong
    Ma, Huifang
    [J]. JOURNAL OF SUPERCOMPUTING, 2024, 80 (02): : 2196 - 2222
  • [8] Domain Generalized Object Detection with Triple Graph Reasoning Network
    Rao, Zhijie
    Tang, Luyao
    Huang, Yue
    Ding, Xinghao
    [J]. NEURAL INFORMATION PROCESSING, ICONIP 2023, PT III, 2024, 14449 : 314 - 327