Aneurysm Pose Estimation with Deep Learning

Cited by: 2
Authors
Assis, Youssef [1 ]
Liao, Liang [1 ,2 ,3 ]
Pierre, Fabien [1 ]
Anxionnat, Rene [2 ,3 ]
Kerrien, Erwan [1 ]
Affiliations
[1] Univ Lorraine, CNRS, INRIA, LORIA, F-54000 Nancy, France
[2] Univ Lorraine, CHRU Nancy, Dept Diagnost & Therapeut Intervent Neuroradiol, F-54000 Nancy, France
[3] Univ Lorraine, INSERM, IADI, F-54000 Nancy, France
Keywords
Object Pose Estimation; 3D YOLO; Intracranial Aneurysms
DOI
10.1007/978-3-031-43895-0_51
Chinese Library Classification (CLC)
TP18 [Artificial Intelligence Theory]
Discipline codes
081104; 0812; 0835; 1405
Abstract
The diagnosis of unruptured intracranial aneurysms from time-of-flight Magnetic Resonance Angiography (TOF-MRA) images is a challenging clinical problem that is extremely difficult to automate. We propose to go beyond the mere detection of each aneurysm and also estimate its size and the orientation of its main axis, enabling immediate visualization in appropriately reformatted cut planes. To address this problem, and inspired by the YOLO architecture, we describe a novel one-stage deep learning approach that simultaneously estimates the localization, size, and orientation of each aneurysm in 3D images. It combines fast, approximate annotation with data sampling and generation to tackle the class imbalance problem, and uses a cosine similarity loss to optimize the orientation. We evaluate our approach on two large datasets containing 416 patients with 317 aneurysms using a 5-fold cross-validation scheme. Our method achieves a median localization error of 0.48 mm and a median 3D orientation error of 12.27 degrees, demonstrating accurate localization of aneurysms and an orientation estimation that complies with clinical practice. Further evaluation is performed in a more classical detection setting to compare with the state-of-the-art nnDetection and nnUNet methods. Competitive performance is reported, with an average precision of 76.60%, a sensitivity of 82.93%, and 0.44 false positives per case. Code and annotations are publicly available at https://gitlab.inria.fr/yassis/DeepAnePose.
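The abstract mentions a cosine similarity loss for the orientation term but gives no formula; a minimal sketch of one plausible formulation is shown below (the function names, and the use of an absolute value to make the angular error sign-invariant, are assumptions for illustration, not details taken from the paper):

```python
import numpy as np

def cosine_orientation_loss(pred, target):
    """Hypothetical cosine-similarity loss between a predicted and a
    ground-truth 3D axis vector: 0 when parallel, 1 when orthogonal."""
    pred, target = np.asarray(pred, float), np.asarray(target, float)
    cos = np.dot(pred, target) / (np.linalg.norm(pred) * np.linalg.norm(target))
    return 1.0 - cos

def orientation_error_deg(pred, target):
    """Angular error in degrees between two axes, the kind of metric
    behind the reported median 3D orientation error of 12.27 degrees.
    The absolute value makes the error independent of axis sign."""
    pred, target = np.asarray(pred, float), np.asarray(target, float)
    cos = np.dot(pred, target) / (np.linalg.norm(pred) * np.linalg.norm(target))
    return float(np.degrees(np.arccos(np.clip(abs(cos), 0.0, 1.0))))
```

With this convention, parallel axes give a loss of 0 and orthogonal axes a loss of 1, so minimizing the loss aligns the predicted main axis with the annotated one.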
Pages: 543 - 553 (11 pages)
Related articles (50 total)
  • [1] Pose estimation with deep learning
    Vogt, Nina
    NATURE METHODS, 2019, 16 (12) : 1205 - 1205
  • [3] Deep Learning for Head Pose Estimation: A Survey
    Asperti, A.
    Filippini, D.
    SN Computer Science, 4 (4)
  • [4] Head Pose Estimation Algorithm Based on Deep Learning
    Cao, Yuanming
    Liu, Yijun
    MATERIALS SCIENCE, ENERGY TECHNOLOGY, AND POWER ENGINEERING I, 2017, 1839
  • [5] A Facial Pose Estimation Algorithm Using Deep Learning
    Xu, Xiao
    Wu, Lifang
    Wang, Ke
    Ma, Yukun
    Qi, Wei
    BIOMETRIC RECOGNITION, CCBR 2015, 2015, 9428 : 669 - 676
  • [6] Estimation of Artificial Reef Pose Based on Deep Learning
    Song, Yifan
    Wu, Zuli
    Zhang, Shengmao
    Quan, Weimin
    Shi, Yongchuang
    Xiong, Xinquan
    Li, Penglong
    JOURNAL OF MARINE SCIENCE AND ENGINEERING, 2024, 12 (05)
  • [7] Deep Learning for Integrated Hand Detection and Pose Estimation
    Chen, Tzu-Yang
    Wu, Min-Yu
    Hsieh, Yu-Hsun
    Fu, Li-Chen
    2016 23RD INTERNATIONAL CONFERENCE ON PATTERN RECOGNITION (ICPR), 2016, : 615 - 620
  • [8] Human Pose Estimation Based on ISAR and Deep Learning
    Javadi, S. Hamed
    Bourdoux, Andre
    Deligiannis, Nikos
    Sahli, Hichem
    IEEE SENSORS JOURNAL, 2024, 24 (17) : 28324 - 28337
  • [9] Object Recognition and Pose Estimation base on Deep Learning
    Xue, Li-wei
    Chen, Li-guo
    Liu, Ji-zhu
    Wang, Yang-jun
    Shen, Qi
    Huang, Hai-bo
    2017 IEEE INTERNATIONAL CONFERENCE ON ROBOTICS AND BIOMIMETICS (IEEE ROBIO 2017), 2017, : 1288 - 1293
  • [10] Deep Reinforcement Learning for Active Human Pose Estimation
    Gartner, Erik
    Pirinen, Aleksis
    Sminchisescu, Cristian
    THIRTY-FOURTH AAAI CONFERENCE ON ARTIFICIAL INTELLIGENCE, THE THIRTY-SECOND INNOVATIVE APPLICATIONS OF ARTIFICIAL INTELLIGENCE CONFERENCE AND THE TENTH AAAI SYMPOSIUM ON EDUCATIONAL ADVANCES IN ARTIFICIAL INTELLIGENCE, 2020, 34 : 10835 - 10844