Hierarchical Reasoning Network for Pedestrian Attribute Recognition

Cited by: 13
Authors
An, Haoran [1 ]
Hu, Hai-Miao [1 ]
Guo, Yuanfang [1 ]
Zhou, Qianli [2 ]
Li, Bo [1 ]
Affiliations
[1] Beihang Univ, Sch Comp Sci & Engn, Beijing 100191, Peoples R China
[2] Peoples Publ Secur Univ China, Beijing 100038, Peoples R China
Funding
National Natural Science Foundation of China
Keywords
Feature extraction; Cognition; Semantics; Task analysis; Machine learning; Correlation; Image color analysis; Pedestrian attribute recognition; video surveillance; abstraction levels; hierarchical; reason; CLASSIFICATION; RETRIEVAL; ALIGNMENT;
DOI
10.1109/TMM.2020.2975417
CLC Classification
TP [Automation technology, computer technology]
Subject Classification Code
0812
Abstract
Pedestrian attribute recognition, which can benefit tasks such as person re-identification and pedestrian retrieval, is very important in video surveillance. In this paper, we observe that existing methods tackle this problem from the perspective of multi-label classification without considering the hierarchical relationships among the attributes. In human cognition, attributes can be categorized according to their semantic/abstraction levels: high-level attributes can be predicted by reasoning from low-level and medium-level attributes, while the recognition of low-level and medium-level attributes can be guided by the high-level attributes. Based on this attribute categorization, we propose a novel Hierarchical Reasoning Network (HR-Net), which hierarchically predicts the attributes at different abstraction levels in different stages of the network. We also propose an attribute reasoning structure to exploit the relationships among attributes at different semantic levels. Experimental results demonstrate that the proposed network achieves superior performance compared with state-of-the-art techniques.
Pages: 268-280
Page count: 13
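
The abstract above sketches a multi-stage design: low- and medium-level attributes are predicted from earlier stages of the network, and high-level attributes are inferred by reasoning over them. The following PyTorch snippet is a minimal sketch of that idea, not the authors' HR-Net implementation; the ResNet-50 backbone, the stage split, the attribute counts, and the concatenation used to stand in for the attribute reasoning structure are all assumptions made for illustration.

import torch
import torch.nn as nn
import torchvision.models as models

class HierarchicalAttributeNet(nn.Module):
    # Sketch only: predicts attributes at three abstraction levels from
    # three depths of a ResNet-50 backbone (an assumed choice, not HR-Net's).
    def __init__(self, n_low=10, n_mid=15, n_high=5):
        super().__init__()
        backbone = models.resnet50(weights=None)
        # Earlier stages feed low-level attributes (e.g. colors); deeper
        # stages feed medium- and high-level attributes.
        self.stage_low = nn.Sequential(backbone.conv1, backbone.bn1,
                                       backbone.relu, backbone.maxpool,
                                       backbone.layer1, backbone.layer2)
        self.stage_mid = backbone.layer3
        self.stage_high = backbone.layer4
        self.pool = nn.AdaptiveAvgPool2d(1)
        self.head_low = nn.Linear(512, n_low)
        self.head_mid = nn.Linear(1024, n_mid)
        # The high-level head also sees the low/mid predictions; this is a
        # simple stand-in for the paper's attribute reasoning structure.
        self.head_high = nn.Linear(2048 + n_low + n_mid, n_high)

    def forward(self, x):
        f_low = self.stage_low(x)
        f_mid = self.stage_mid(f_low)
        f_high = self.stage_high(f_mid)
        p_low = self.head_low(self.pool(f_low).flatten(1))
        p_mid = self.head_mid(self.pool(f_mid).flatten(1))
        ctx = torch.cat([self.pool(f_high).flatten(1),
                         torch.sigmoid(p_low),
                         torch.sigmoid(p_mid)], dim=1)
        p_high = self.head_high(ctx)
        return p_low, p_mid, p_high

Each head would be trained with a per-attribute multi-label loss (e.g. binary cross-entropy); how HR-Net actually couples the levels, including the guidance from high-level attributes back to low- and medium-level ones, is detailed in the full paper.
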
Related papers
50 records in total
  • [31] Recurrent Attention Model for Pedestrian Attribute Recognition
    Zhao, Xin
    Sang, Liufang
    Ding, Guiguang
    Han, Jungong
    Di, Na
    Yan, Chenggang
    THIRTY-THIRD AAAI CONFERENCE ON ARTIFICIAL INTELLIGENCE / THIRTY-FIRST INNOVATIVE APPLICATIONS OF ARTIFICIAL INTELLIGENCE CONFERENCE / NINTH AAAI SYMPOSIUM ON EDUCATIONAL ADVANCES IN ARTIFICIAL INTELLIGENCE, 2019, : 9275 - 9282
  • [32] Pedestrian Attribute Recognition Based on Deep Learning
    Yuan Peipei
    Zhang Liang
    LASER & OPTOELECTRONICS PROGRESS, 2020, 57 (06)
  • [33] Explicit Attention Modeling for Pedestrian Attribute Recognition
    Fang, Jinyi
    Zhu, Bingke
    Chen, Yingying
    Wang, Jinqiao
    Tang, Ming
    2023 IEEE INTERNATIONAL CONFERENCE ON MULTIMEDIA AND EXPO, ICME, 2023, : 2075 - 2080
  • [34] Pedestrian Attribute Recognition Based on Multimodal Transformer
    Liu, Dan
    Song, Wei
    Zhao, Xiaobing
    PATTERN RECOGNITION AND COMPUTER VISION, PRCV 2023, PT I, 2024, 14425 : 422 - 433
  • [35] Pedestrian Attribute Recognition in Surveillance Scenes: A Survey
    Jia J.
    Chen X.-T.
    Huang K.-Q.
Jisuanji Xuebao/Chinese Journal of Computers, 2022, 45 (08): 1765 - 1793
  • [36] DEEP PEDESTRIAN ATTRIBUTE RECOGNITION BASED ON LSTM
    Ji, Zhong
    Zheng, Weixiong
    Pang, Yanwei
    2017 24TH IEEE INTERNATIONAL CONFERENCE ON IMAGE PROCESSING (ICIP), 2017, : 151 - 155
  • [37] PARFormer: Transformer-Based Multi-Task Network for Pedestrian Attribute Recognition
    Fan, Xinwen
    Zhang, Yukang
    Lu, Yang
    Wang, Hanzi
    IEEE TRANSACTIONS ON CIRCUITS AND SYSTEMS FOR VIDEO TECHNOLOGY, 2024, 34 (01) : 411 - 423
  • [38] Saliency guided self-attention network for pedestrian attribute recognition in surveillance scenarios
    Li N.
    Wu Y.
    Liu Y.
    Li D.
    Gao J.
Journal of China Universities of Posts and Telecommunications, 2022, 29 (05): 21 - 29
  • [39] SAFE-NET: SOLID AND ABSTRACT FEATURE EXTRACTION NETWORK FOR PEDESTRIAN ATTRIBUTE RECOGNITION
    Gao, Daiheng
    Wu, Zhenzhi
    Zhang, Weihao
    2019 IEEE INTERNATIONAL CONFERENCE ON IMAGE PROCESSING (ICIP), 2019, : 1655 - 1659
  • [40] Pedestrian attribute recognition using two-branch trainable Gabor wavelets network
    Junejo, Imran N.
PLOS ONE, 2021, 16 (06)