SFPD: Simultaneous Face and Person Detection in Real-Time for Human-Robot Interaction

Cited by: 7
Authors
Fiedler, Marc-Andre [1 ]
Werner, Philipp [1 ]
Khalifa, Aly [1 ]
Al-Hamadi, Ayoub [1 ]
Affiliations
[1] Otto von Guericke Univ, Neuroinformat Technol Grp, D-39106 Magdeburg, Germany
Keywords
face detection; person detection; multi-task learning; real-time detection;
DOI
10.3390/s21175918
CLC Number
O65 [Analytical Chemistry]
Subject Classification Codes
070302; 081704
Abstract
Face and person detection are important tasks in computer vision, as they represent the first component in many recognition systems, such as face recognition, facial expression analysis, body pose estimation, face attribute detection, or human action recognition. Their detection rate and runtime are therefore crucial for the performance of the overall system. In this paper, we combine face and person detection in one framework with the goal of reaching a detection performance that is competitive with the state of the art of lightweight object-specific networks while maintaining real-time processing speed for both detection tasks together. To combine face and person detection in one network, we apply multi-task learning. The difficulty lies in the fact that no datasets are available that contain both face and person annotations. Since manual annotation is very time-consuming and was beyond our resources, and automatic generation of ground truths results in annotations of poor quality, we solve this issue algorithmically with a special training procedure and network architecture that require no new labels. Our newly developed method, called Simultaneous Face and Person Detection (SFPD), detects persons and faces at 40 frames per second. Because of this good trade-off between detection performance and inference time, SFPD represents a useful and valuable real-time framework for a multitude of real-world applications, such as human-robot interaction.
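The abstract does not detail the training procedure that lets SFPD learn from datasets annotated for only one of the two tasks. As a rough, hypothetical illustration (not the authors' implementation), the sketch below shows one common multi-task strategy: alternate batches from a face-only and a person-only dataset and backpropagate only the loss of the detection head for which ground truth exists, so the shared backbone is updated by both tasks while neither head is penalized on unlabeled data. All names (model, face_loader, person_loader, the loss criteria) are assumed placeholders.

```python
import torch

def train_epoch(model, face_loader, person_loader,
                face_criterion, person_criterion, optimizer, device="cuda"):
    """Hypothetical sketch: alternating-batch multi-task training with loss masking.

    Each dataset carries annotations for only one task, so for every batch only
    the corresponding head's loss is computed and backpropagated; the other head
    receives no gradient, while the shared backbone is trained by both tasks.
    """
    model.train()
    for face_batch, person_batch in zip(face_loader, person_loader):
        for (images, targets), task in ((face_batch, "face"), (person_batch, "person")):
            images = images.to(device)
            optimizer.zero_grad()
            # Assumed model interface: one shared backbone, two detection heads.
            face_out, person_out = model(images)
            if task == "face":
                loss = face_criterion(face_out, targets)      # person head untouched
            else:
                loss = person_criterion(person_out, targets)  # face head untouched
            loss.backward()
            optimizer.step()
```

The paper's actual architecture and training schedule may differ; the sketch only conveys the general idea of sharing a backbone across tasks while selecting the loss per dataset.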
Pages: 17
Related Papers
50 records in total (first 10 shown below)
  • [1] Real-time Face Tracking for Human-Robot Interaction
    Putro, Muhamad Dwisnanto
    Jo, Kang-Hyun
    2018 INTERNATIONAL CONFERENCE ON INFORMATION AND COMMUNICATION TECHNOLOGY ROBOTICS (ICT-ROBOT), 2018,
  • [2] Real-Time Face Recognition for Human-Robot Interaction
    Cruz, Claudia
    Enrique Sucar, L.
    Morales, Eduardo F.
    2008 8TH IEEE INTERNATIONAL CONFERENCE ON AUTOMATIC FACE & GESTURE RECOGNITION (FG 2008), VOLS 1 AND 2, 2008, : 679 - 684
  • [3] Real-Time Face and Gesture Analysis for Human-Robot Interaction
    Wallhoff, Frank
    Rehrl, Tobias
    Mayer, Christoph
    Radig, Bernd
    REAL-TIME IMAGE AND VIDEO PROCESSING 2010, 2010, 7724
  • [4] Real-time Face Detection for Human Robot Interaction
    Pan, Yaozhang
    Ge, Shuzhi Sam
    He, Hongsheng
    Chen, Lei
    RO-MAN 2009: THE 18TH IEEE INTERNATIONAL SYMPOSIUM ON ROBOT AND HUMAN INTERACTIVE COMMUNICATION, VOLS 1 AND 2, 2009, : 15 - +
  • [5] Real-Time Coordination in Human-Robot Interaction Using Face and Voice
    Skantze, Gabriel
    AI MAGAZINE, 2016, 37 (04) : 19 - 31
  • [6] Real-time person tracking and pointing gesture recognition for human-robot interaction
    Nickel, K
    Stiefelhagen, R
    COMPUTER VISION IN HUMAN-COMPUTER INTERACTION, PROCEEDINGS, 2004, 3058 : 28 - 38
  • [7] Real-time safety for human-robot interaction
    Kulic, D
    Croft, EA
    ROBOTICS AND AUTONOMOUS SYSTEMS, 2006, 54 (01) : 1 - 12
  • [8] Real-time safety for human-robot interaction
    Kulic, D
    Croft, EA
    2005 12TH INTERNATIONAL CONFERENCE ON ADVANCED ROBOTICS, 2005, : 719 - 724
  • [9] Real-time multi-view face tracking for human-robot interaction
    An, KH
    Yoo, DH
    Jung, SU
    Chung, MJ
    2005 4TH IEEE INTERNATIONAL CONFERENCE ON DEVELOPMENT AND LEARNING, 2005, : 135 - 140
  • [10] Real-time Framework for Multimodal Human-Robot Interaction
    Gast, Juergen
    Bannat, Alexander
    Rehrl, Tobias
    Wallhoff, Frank
    Rigoll, Gerhard
    Wendt, Cornelia
    Schmidt, Sabrina
    Popp, Michael
    Faerber, Berthold
    HSI: 2009 2ND CONFERENCE ON HUMAN SYSTEM INTERACTIONS, 2009, : 273 - 280