Depth-guided Robust Face Morphing Attack Detection

Cited by: 0
Authors
Rachalwar, Harsh [1 ]
Fang, Meiling [2 ]
Damer, Naser [2 ,3 ]
Das, Abhijit [1 ]
Affiliations
[1] Birla Inst Technol & Sci, Secunderabad 500078, Telangana, India
[2] Fraunhofer Inst Comp Graph Res IGD, Darmstadt, Germany
[3] Tech Univ Darmstadt, Dept Comp Sci, Darmstadt, Germany
DOI
10.1109/IJCB57857.2023.10449186
Chinese Library Classification (CLC)
TP18 [Artificial Intelligence Theory]
Subject Classification Codes
081104; 0812; 0835; 1405
Abstract
Recently, morphing attack detection (MAD) solutions have achieved remarkable success with the aid of deep learning techniques. Despite the good performance achieved by MAD models supervised with binary labels or binary pixel-wise maps, the robustness of such models drops when facing variations in morphing attacks. In this work, we propose a novel process that leverages facial depth information to build a robust and generalized MAD solution. The depth map, representing the 3D shape of the face in a 2D image, is more informative than binary labels and binary pixel-wise maps. To validate this idea, we synthetically generate 3D depth map ground truth. Furthermore, we introduce a novel MAD architecture designed to capture subtle information from the 3D depth data. In addition, we analyze the training loss formulation to further enhance MAD performance. Driven by the need to develop MAD solutions while preserving the privacy of individuals for legal and ethical reasons, we conduct our experiments on privacy-friendly synthetic training data and authentic evaluation data. Experimental results on the existing public datasets of the SYN-MAD 2022 competition demonstrate the effectiveness of our proposed solution in terms of both robustness and generalization.
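To make the depth-supervision idea concrete, below is a minimal sketch of depth-map-supervised MAD training, assuming a PyTorch-style setup. The simple encoder-decoder backbone, the MSE depth loss combined with binary cross-entropy, the equal loss weighting, and the flat depth target for morphs are illustrative assumptions, not the paper's exact architecture or loss formulation.

# Minimal sketch of depth-map-supervised morphing attack detection (MAD).
# Assumptions (not from the paper): a PyTorch setup, a small encoder-decoder
# backbone, an MSE depth loss combined with BCE classification, and equal
# loss weights. The paper's actual design differs in detail.
import torch
import torch.nn as nn

class DepthMAD(nn.Module):
    def __init__(self):
        super().__init__()
        # Encoder: RGB face crop -> feature map.
        self.encoder = nn.Sequential(
            nn.Conv2d(3, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
        )
        # Decoder head: predicts a per-pixel depth map as auxiliary supervision.
        self.depth_head = nn.Sequential(
            nn.ConvTranspose2d(64, 32, 4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose2d(32, 1, 4, stride=2, padding=1),
        )
        # Classification head: bona fide vs. morph logit.
        self.cls_head = nn.Sequential(
            nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(64, 1)
        )

    def forward(self, x):
        feat = self.encoder(x)
        return self.depth_head(feat), self.cls_head(feat)

def mad_loss(pred_depth, pred_logit, gt_depth, label, w_depth=1.0):
    # gt_depth: synthetic face depth for bona fide samples; here an assumed
    # flat (zero) map for morphs, mirroring depth-supervised face PAD practice.
    depth_loss = nn.functional.mse_loss(pred_depth, gt_depth)
    cls_loss = nn.functional.binary_cross_entropy_with_logits(
        pred_logit.squeeze(1), label.float())
    return cls_loss + w_depth * depth_loss

# Usage on a dummy batch of 256x256 face crops.
model = DepthMAD()
x = torch.randn(4, 3, 256, 256)
gt_depth = torch.rand(4, 1, 256, 256)   # stand-in for synthetic depth GT
label = torch.randint(0, 2, (4,))       # 0 = bona fide, 1 = morph
pred_depth, pred_logit = model(x)
loss = mad_loss(pred_depth, pred_logit, gt_depth, label)
loss.backward()

The point this sketch illustrates is that a per-pixel depth target carries far more structure than a single binary label, pushing the encoder to learn facial-geometry cues that generalize better across unseen morphing techniques.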
Pages: 9